ON THIS PAGE
How to Integrate the Super Spine Devices into the IP Fabric Underlay Network
How to Integrate the Super Spine Devices into the EVPN Overlay Network
How to Verify That the Super Spine Devices Are Integrated Into the Underlay and Overlay Networks
How to Enable the Advertisement of EVPN Type-5 Routes on the Routing Devices in the PODs
How to Verify the Advertisement of EVPN Type-5 Routes on the Routing Devices in the PODs
Five-Stage IP Fabric Design and Implementation
To enable you to scale your existing EVPN-VXLAN network in a data center, Juniper Networks supports a 5-stage IP fabric. Although a 5-stage IP fabric actually comprises three tiers of networking devices, the term 5-stage refers to the number of network devices that traffic sent from one host to another must traverse to reach its destination.
Juniper Networks supports a 5-stage IP fabric in an inter-point of delivery (POD) connectivity use case within a data center. This use case assumes that your EVPN-VXLAN network already includes tiers of spine and leaf devices in two PODs. To enable connectivity between the two PODs, you add a tier of super spine devices. To determine which Juniper Networks devices you can use as a super spine device, see the Data Center Fabric Reference Design Supported Hardware Summary table.
Figure 1 shows the 5-stage IP fabric that we use in this reference design.

As shown in Figure 1, each super spine device is connected to each spine device in each POD.
We support the following network overlay type combinations in each POD:
The EVPN-VXLAN fabric in both PODs has a centrally routed bridging overlay.
The EVPN-VXLAN fabric in both PODs has an edge-routed bridging overlay.
The EVPN-VXLAN fabric in one POD has a centrally routed bridging overlay, and the fabric in the other POD has an edge-routed bridging overlay.
Juniper Networks' 5-stage IP fabric supports RFC 7938, Use of BGP for Routing in Large-Scale Data Centers. However, where appropriate, we use terminology that more effectively describes our use case.
Note the following about the 5-stage IP fabric reference design:
This reference design assumes that the tiers of spine and leaf devices in the two PODs already exist and are up and running. As a result, except when describing how to configure the advertisement of EVPN type-5 routes, this topic provides the configuration for the super spine devices only. For information about configuring the spine and leaf devices in the two PODs, see the following:
The reference design integrates Super Spines 1 and 2 into existing IP fabric underlay and EVPN overlay networks.
The super spine devices have the following functions:
They act as IP transit devices only.
They serve as route reflectors for Spines 1 through 4.
When configuring the routing protocol in the EVPN overlay network, you can use either IBGP or EBGP. Typically, you use IBGP if your data center uses the same autonomous system (AS) number throughout and EBGP if your data center uses different AS numbers throughout. This reference design uses the IBGP configuration option. For information about the EBGP configuration option, see Over-the-Top Data Center Interconnect in an EVPN Network.
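The difference shows up directly in the BGP group configuration. The following minimal sketch contrasts the two options; the EBGP group name overlay-ebgp and peer AS number 4210000002 are illustrative only and are not part of this reference design.

IBGP overlay (used in this reference design):

set protocols bgp group overlay-bgp type internal
set protocols bgp group overlay-bgp neighbor 192.168.0.1

EBGP overlay (alternative):

set protocols bgp group overlay-ebgp type external
set protocols bgp group overlay-ebgp multihop no-nexthop-change
set protocols bgp group overlay-ebgp neighbor 192.168.0.1 peer-as 4210000002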
After you integrate Super Spines 1 and 2 into existing IP fabric underlay and EVPN overlay networks and verify the configuration, the super spine devices will handle the communication between PODs 1 and 2 by advertising EVPN type-2 routes. This method will work if your PODs use the same IP address subnet scheme. However, if servers connected to the leaf devices in each POD are in different subnets, you must configure the devices that handle inter-subnet routing in the PODs to advertise EVPN type-5 routes. For more information, see How to Enable the Advertisement of EVPN Type-5 Routes on the Routing Devices in the PODs later in this topic.
How to Integrate the Super Spine Devices into the IP Fabric Underlay Network
This section shows you how to configure the super spine devices so that they can communicate with the spine devices, which are already configured as part of an existing IP fabric underlay network.
For details about the interfaces and autonomous systems (ASs) in the IP fabric underlay network, see Figure 2.

- Configure the interfaces that connect the super spine devices to Spines 1 through 4.
For the connection to each spine device, we create an aggregated Ethernet interface that currently includes a single link. We use this approach in case you need to increase the throughput to each spine device at a later time; the sketch that follows this configuration shows how you might add a member link.
For interface details for the super spine devices, see Figure 2.
Super Spine 1
set interfaces et-0/0/1 ether-options 802.3ad ae1
set interfaces ae1 unit 0 family inet address 172.16.101.0/31
set interfaces et-0/0/2 ether-options 802.3ad ae2
set interfaces ae2 unit 0 family inet address 172.16.102.0/31
set interfaces et-0/0/3 ether-options 802.3ad ae3
set interfaces ae3 unit 0 family inet address 172.16.103.0/31
set interfaces et-0/0/4 ether-options 802.3ad ae4
set interfaces ae4 unit 0 family inet address 172.16.104.0/31

Super Spine 2
set interfaces et-0/0/1 ether-options 802.3ad ae1
set interfaces ae1 unit 0 family inet address 172.16.101.2/31
set interfaces et-0/0/2 ether-options 802.3ad ae2
set interfaces ae2 unit 0 family inet address 172.16.102.2/31
set interfaces et-0/0/3 ether-options 802.3ad ae3
set interfaces ae3 unit 0 family inet address 172.16.103.2/31
set interfaces et-0/0/4 ether-options 802.3ad ae4
set interfaces ae4 unit 0 family inet address 172.16.104.2/31
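If you later need more bandwidth toward a spine device, you can add a member link to the existing bundle rather than re-architecting the fabric. A minimal sketch for Super Spine 1, assuming a free port et-0/0/5 (the port number is illustrative):

set interfaces et-0/0/5 ether-options 802.3ad ae1
set interfaces ae1 aggregated-ether-options minimum-links 1

With minimum-links 1, the ae1 bundle remains up as long as at least one member link is up.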
- Specify an IP address for loopback interface lo0.0.

We use the loopback address for each super spine device when setting up an export routing policy later in this procedure.
Super Spine 1
set interfaces lo0 unit 0 family inet address 192.168.2.1/32

Super Spine 2
set interfaces lo0 unit 0 family inet address 192.168.2.2/32

- Configure the router ID.
We use the router ID for each super spine device when setting up the route reflector cluster in the EVPN overlay network.
Super Spine 1
set routing-options router-id 192.168.2.1

Super Spine 2
set routing-options router-id 192.168.2.2

- Create a BGP peer group named underlay-bgp, and enable EBGP as the routing protocol in the underlay network.
Super Spines 1 and 2
set protocols bgp group underlay-bgp type external

- Configure the AS number.
In this reference design, each device is assigned a unique AS number in the underlay network. For the AS numbers of the super spine devices, see Figure 2.
The AS number for EBGP in the underlay network is configured at the BGP peer group level using the local-as statement because the system AS number setting is used for MP-IBGP signaling in the EVPN overlay network.
Super Spine 1
set protocols bgp group underlay-bgp local-as 4200000021

Super Spine 2
set protocols bgp group underlay-bgp local-as 4200000022

- Set up a BGP peer relationship with Spines 1 through 4.
To establish the peer relationship, on each super spine device, configure each spine device as a neighbor by specifying the spine device’s IP address and AS number. For the IP addresses and AS numbers of the spine devices, see Figure 2.
Super Spine 1
set protocols bgp group underlay-bgp neighbor 172.16.101.1 peer-as 4200000001
set protocols bgp group underlay-bgp neighbor 172.16.102.1 peer-as 4200000002
set protocols bgp group underlay-bgp neighbor 172.16.103.1 peer-as 4200000003
set protocols bgp group underlay-bgp neighbor 172.16.104.1 peer-as 4200000004

Super Spine 2
set protocols bgp group underlay-bgp neighbor 172.16.101.3 peer-as 4200000001
set protocols bgp group underlay-bgp neighbor 172.16.102.3 peer-as 4200000002
set protocols bgp group underlay-bgp neighbor 172.16.103.3 peer-as 4200000003
set protocols bgp group underlay-bgp neighbor 172.16.104.3 peer-as 4200000004

- Configure an export routing policy that advertises the IP address of loopback interface lo0.0 on the super spine devices to the EBGP peering devices (Spines 1 through 4). This policy rejects all other advertisements.
Super Spines 1 and 2
set policy-options policy-statement underlay-clos-export term loopback from interface lo0.0
set policy-options policy-statement underlay-clos-export term loopback then accept
set policy-options policy-statement underlay-clos-export term def then reject
set protocols bgp group underlay-bgp export underlay-clos-export
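To confirm that the policy behaves as intended, you can check which routes a super spine advertises to a spine neighbor after you commit (a sample check; the neighbor address is taken from Figure 2). The output should list only the local loopback route, 192.168.2.1/32.

user@super-spine-1> show route advertising-protocol bgp 172.16.101.1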
- Enable multipath with the multiple-as option, which enables load balancing between EBGP peers in different ASs.
EBGP, by default, selects one best path for each prefix and installs that route in the forwarding table. When BGP multipath is enabled, all equal-cost paths to a given destination are installed into the forwarding table.
Super Spines 1 and 2
set protocols bgp group underlay-bgp multipath multiple-as

- Enable Bidirectional Forwarding Detection (BFD) for all BGP sessions to enable the rapid detection of failures and reconvergence.
Super Spines 1 and 2
set protocols bgp group underlay-bgp bfd-liveness-detection minimum-interval 1000
set protocols bgp group underlay-bgp bfd-liveness-detection multiplier 3
set protocols bgp group underlay-bgp bfd-liveness-detection session-mode automatic
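At this point, you can optionally confirm that the underlay is distributing loopback addresses as expected. For example, a super spine should reach each spine loopback (192.168.0.1 through 192.168.0.4 per Figure 3) through the underlay (a sample check):

user@super-spine-1> show route 192.168.0.1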
How to Integrate the Super Spine Devices into the EVPN Overlay Network
This section explains how to integrate the super spine devices into the EVPN overlay network. In this control-plane-driven overlay, we establish a signaling path between all devices within a single AS using IBGP with Multiprotocol BGP (MP-IBGP).
In this IBGP overlay, the super spine devices act as a route reflector cluster, and the spine devices are route reflector clients. For details about the route reflector cluster ID and BGP neighbor IP addresses in the EVPN overlay network, see Figure 3.

- Configure an AS number for the IBGP overlay.
All devices participating in this overlay (Super Spines 1 and 2, Spines 1 through 4, Leafs 1 through 4) must use the same AS number. In this example, the AS number is private AS 4210000001.
Super Spines 1 and 2
set routing-options autonomous-system 4210000001

- Configure IBGP using EVPN signaling to peer with Spines 1 through 4. Also, form the route reflector cluster (cluster ID 192.168.2.10), and configure equal-cost multipath (ECMP) for BGP. Enable path maximum transmission unit (MTU) discovery to dynamically determine the MTU size on the network path between the source and the destination, with the goal of avoiding IP fragmentation.
For details about the route reflector cluster ID and BGP neighbor IP addresses for super spine and spine devices, see Figure 3.
Super Spine 1
set protocols bgp group overlay-bgp type internal
set protocols bgp group overlay-bgp local-address 192.168.2.1
set protocols bgp group overlay-bgp mtu-discovery
set protocols bgp group overlay-bgp family evpn signaling
set protocols bgp group overlay-bgp cluster 192.168.2.10
set protocols bgp group overlay-bgp multipath
set protocols bgp group overlay-bgp neighbor 192.168.0.1
set protocols bgp group overlay-bgp neighbor 192.168.0.2
set protocols bgp group overlay-bgp neighbor 192.168.0.3
set protocols bgp group overlay-bgp neighbor 192.168.0.4

Super Spine 2
set protocols bgp group overlay-bgp type internal
set protocols bgp group overlay-bgp local-address 192.168.2.2
set protocols bgp group overlay-bgp mtu-discovery
set protocols bgp group overlay-bgp family evpn signaling
set protocols bgp group overlay-bgp cluster 192.168.2.10
set protocols bgp group overlay-bgp multipath
set protocols bgp group overlay-bgp neighbor 192.168.0.1
set protocols bgp group overlay-bgp neighbor 192.168.0.2
set protocols bgp group overlay-bgp neighbor 192.168.0.3
set protocols bgp group overlay-bgp neighbor 192.168.0.4

Note: This reference design does not include the configuration of BGP peering between Super Spines 1 and 2. However, if you want to set up this peering to complete the full mesh peering topology, you can optionally do so by creating another BGP group and specifying the configuration in that group. For example:
Super Spine 1
set protocols bgp group overlay-bgp2 type internal
set protocols bgp group overlay-bgp2 local-address 192.168.2.1
set protocols bgp group overlay-bgp2 family evpn signaling
set protocols bgp group overlay-bgp2 neighbor 192.168.2.2

Super Spine 2
set protocols bgp group overlay-bgp2 type internal
set protocols bgp group overlay-bgp2 local-address 192.168.2.2
set protocols bgp group overlay-bgp2 family evpn signaling
set protocols bgp group overlay-bgp2 neighbor 192.168.2.1

- Enable BFD for all BGP sessions to enable the rapid detection of failures and reconvergence.
Super Spines 1 and 2
set protocols bgp group overlay-bgp bfd-liveness-detection minimum-interval 1000
set protocols bgp group overlay-bgp bfd-liveness-detection multiplier 3
set protocols bgp group overlay-bgp bfd-liveness-detection session-mode automatic
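To quickly confirm that a spine has come up as a route reflector client with EVPN signaling, you can inspect the BGP neighbor from a super spine (a sample check). In the output, look for Type: Internal, the configured cluster ID, and NLRI evpn in the negotiated address families.

user@super-spine-1> show bgp neighbor 192.168.0.1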
How to Verify That the Super Spine Devices Are Integrated Into the Underlay and Overlay Networks
This section explains how you can verify that the super spine devices are properly integrated into the IP fabric underlay and EVPN overlay networks.
After you successfully complete this verification, the super spine devices will handle communication between PODs 1 and 2 by advertising EVPN type-2 routes. This method will work if your PODs use the same IP address subnet scheme. However, if each POD uses a different IP address subnet scheme, you must additionally configure the devices that handle inter-subnet routing in the PODs to advertise EVPN type-5 routes. For more information, see How to Enable the Advertisement of EVPN Type-5 Routes on the Routing Devices in the PODs later in this topic.
- Verify that the aggregated Ethernet interfaces are enabled, that the physical links are up, and that packets are being transmitted if traffic has been sent.
The output below provides this verification for aggregated Ethernet interface ae1 on Super Spine 1.
user@super-spine-1> show interfaces ae1
Physical interface: ae1, Enabled, Physical link is Up
  Interface index: 129, SNMP ifIndex: 544
  Link-level type: Ethernet, MTU: 9192, Speed: 80Gbps, BPDU Error: None,
  Ethernet-Switching Error: None, MAC-REWRITE Error: None, Loopback: Disabled,
  Source filtering: Disabled, Flow control: Disabled, Minimum links needed: 1,
  Minimum bandwidth needed: 1bps
  Device flags   : Present Running
  Interface flags: SNMP-Traps Internal: 0x4000
  Current address: 80:ac:ac:24:21:98, Hardware address: 80:ac:ac:24:21:98
  Last flapped   : 2020-07-30 13:09:31 PDT (3d 05:01 ago)
  Input rate     : 42963216 bps (30206 pps)
  Output rate    : 107152 bps (76 pps)

  Logical interface ae1.0 (Index 544) (SNMP ifIndex 564)
    Flags: Up SNMP-Traps 0x4004000 Encapsulation: ENET2
    Statistics        Packets        pps         Bytes          bps
    Bundle:
        Input :    7423834047      30126 1155962320326     37535088
        Output:     149534343         82   17315939427        83824
    Adaptive Statistics:
        Adaptive Adjusts:          0
        Adaptive Scans  :          0
        Adaptive Updates:          0
    Protocol inet, MTU: 9000
    Max nh cache: 75000, New hold nh limit: 75000, Curr nh cnt: 1,
    Curr new hold cnt: 0, NH drop cnt: 0
      Flags: Sendbcast-pkt-to-re, Is-Primary, User-MTU
      Addresses, Flags: Is-Preferred Is-Primary
        Destination: 172.16.101.0/31, Local: 172.16.101.0
- Verify that BGP is up and running.
The output below verifies that EBGP and IBGP peer relationships with Spines 1 through 4 are established and that traffic paths are active.
user@super-spine-1> show bgp summary
Threading mode: BGP I/O
Groups: 2 Peers: 8 Down peers: 0
Table          Tot Paths  Act Paths Suppressed    History Damp State    Pending
bgp.evpn.0        219394     210148          0          0          0          0
inet.0                55         27          0          0          0          0
Peer                     AS      InPkt     OutPkt    OutQ   Flaps Last Up/Dwn State|#Active/Received/Accepted/Damped...
172.16.101.1    4200000001       9452      10053       0       2    3d 5:01:58 Establ
  inet.0: 6/14/14/0
172.16.102.1    4200000002       9462      10061       0       3    3d 4:58:50 Establ
  inet.0: 7/14/14/0
172.16.103.1    4200000003       9244       9828       0       5    3d 3:14:54 Establ
  inet.0: 7/14/14/0
172.16.104.1    4200000004       9457      10057       0       1    3d 4:58:35 Establ
  inet.0: 7/13/13/0
192.168.0.1     4210000001      29707     436404       0       2    3d 5:01:49 Establ
  bgp.evpn.0: 16897/16897/16897/0
192.168.0.2     4210000001     946949     237127       0       3    3d 4:58:51 Establ
  bgp.evpn.0: 50844/55440/55440/0
192.168.0.3     4210000001      40107     350304       0       7    3d 3:13:54 Establ
  bgp.evpn.0: 50723/55373/55373/0
192.168.0.4     4210000001      50670     274946       0       1    3d 4:58:32 Establ
  bgp.evpn.0: 91684/91684/91684/0
- Verify that BFD is working.
The output below verifies that the BFD sessions between Super Spine 1 and Spines 1 through 4 are in the Up state.
user@super-spine-1> show bfd session
                                                  Detect   Transmit
Address                  State     Interface      Time     Interval  Multiplier
172.16.101.1             Up        ae1.0          3.000     1.000        3
172.16.102.1             Up        ae2.0          3.000     1.000        3
172.16.103.1             Up        ae3.0          3.000     1.000        3
172.16.104.1             Up        ae4.0          3.000     1.000        3
192.168.0.1              Up                       3.000     1.000        3
192.168.0.2              Up                       3.000     1.000        3
192.168.0.3              Up                       3.000     1.000        3
192.168.0.4              Up                       3.000     1.000        3
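If a BFD session stays down or flaps, you can view the negotiated timers and state transitions for a single session (a sample command for the session to Spine 1):

user@super-spine-1> show bfd session address 172.16.101.1 extensive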
How to Enable the Advertisement of EVPN Type-5 Routes on the Routing Devices in the PODs
After you complete the tasks in the following sections, the super spine devices will handle communication between PODs 1 and 2 by advertising EVPN type-2 routes.
How to Integrate the Super Spine Devices into the IP Fabric Underlay Network
How to Integrate the Super Spine Devices into the EVPN Overlay Network
How to Verify That the Super Spine Devices Are Integrated Into the Underlay and Overlay Networks
If servers connected to the leaf devices in both PODs are in the same subnet, you can skip the task in this section. However, if servers in each POD are in different subnets, you must further configure the devices that handle inter-subnet routing in the PODs to advertise EVPN type-5 routes as described in this section. This type of route is also known as an IP prefix route.
In this EVPN type-5 reference design, the EVPN-VXLAN fabric in both PODs has a centrally routed bridging overlay. In this type of overlay, the spine devices handle inter-subnet routing. Therefore, this section explains how to enable the advertisement of EVPN type-5 routes on the spine devices in PODs 1 and 2.
To enable the advertisement of EVPN type-5 routes, you set up a tenant routing instance named VRF-1 on each spine device. In the routing instance, you specify which host IP addresses and prefixes you want a spine device to advertise as EVPN type-5 routes with a VXLAN network identifier (VNI) of 500001. A spine device will advertise the EVPN type-5 routes to the other spine and leaf devices within the same POD. The spine device will also advertise the EVPN type-5 routes to the super spine devices, which will in turn advertise the routes to the spine devices in the other POD. All spine devices on which you have configured VRF-1 will import the EVPN type-5 routes into their VRF-1 routing table.
After you enable the advertisement of EVPN type-5 routes, the super spine devices will handle communication between PODs 1 and 2 by advertising EVPN type-5 routes.
Figure 4 shows the EVPN type-5 configuration details for the inter-POD use case.

Table 1 outlines the VLAN ID to IRB interface mappings for this reference design.
Table 1: VLAN ID to IRB Interface Mappings
VLAN Names | VLAN IDs | IRB Interface |
---|---|---|
Spines 1 and 2 in POD 1 | | |
VLAN BD-1 | 1 | irb.1 |
VLAN BD-2 | 2 | irb.2 |
Spines 3 and 4 in POD 2 | | |
VLAN BD-3 | 3 | irb.3 |
VLAN BD-4 | 4 | irb.4 |
To set up the advertisement of EVPN type-5 routes:
- Create loopback interface lo0.1, and specify that it is in the IPv4 address family.
For example:
Spine 1
set interfaces lo0 unit 1 family inet

- Configure a routing instance of type vrf named VRF-1. In this routing instance, include loopback interface lo0.1 so that the spine device, which acts as a VXLAN gateway, can resolve ARP requests, and include the IRB interfaces that correspond to each spine device (see Table 1). Set a route distinguisher and VRF targets for the routing instance. Configure load balancing for EVPN type-5 routes with the multipath ECMP option.
For example:
Spine 1
set routing-instances VRF-1 instance-type vrf
set routing-instances VRF-1 interface lo0.1
set routing-instances VRF-1 interface irb.1
set routing-instances VRF-1 interface irb.2
set routing-instances VRF-1 route-distinguisher 192.168.0.1:1
set routing-instances VRF-1 vrf-target import target:200:1
set routing-instances VRF-1 vrf-target export target:100:1
set routing-instances VRF-1 routing-options rib VRF-1.inet6.0 multipath
set routing-instances VRF-1 routing-options multipath

Spine 2
set routing-instances VRF-1 instance-type vrf
set routing-instances VRF-1 interface lo0.1
set routing-instances VRF-1 interface irb.1
set routing-instances VRF-1 interface irb.2
set routing-instances VRF-1 route-distinguisher 192.168.0.2:1
set routing-instances VRF-1 vrf-target import target:200:1
set routing-instances VRF-1 vrf-target export target:100:1
set routing-instances VRF-1 routing-options rib VRF-1.inet6.0 multipath
set routing-instances VRF-1 routing-options multipath

Spine 3
set routing-instances VRF-1 instance-type vrf
set routing-instances VRF-1 interface lo0.1
set routing-instances VRF-1 interface irb.3
set routing-instances VRF-1 interface irb.4
set routing-instances VRF-1 route-distinguisher 192.168.0.3:1
set routing-instances VRF-1 vrf-target import target:100:1
set routing-instances VRF-1 vrf-target export target:200:1
set routing-instances VRF-1 routing-options rib VRF-1.inet6.0 multipath
set routing-instances VRF-1 routing-options multipath

Spine 4
set routing-instances VRF-1 instance-type vrf
set routing-instances VRF-1 interface lo0.1
set routing-instances VRF-1 interface irb.3
set routing-instances VRF-1 interface irb.4
set routing-instances VRF-1 route-distinguisher 192.168.0.4:1
set routing-instances VRF-1 vrf-target import target:100:1
set routing-instances VRF-1 vrf-target export target:200:1
set routing-instances VRF-1 routing-options rib VRF-1.inet6.0 multipath
set routing-instances VRF-1 routing-options multipath
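Note how the vrf-target statements mirror each other across the PODs: Spines 1 and 2 export target:100:1 and import target:200:1, while Spines 3 and 4 export target:200:1 and import target:100:1. As a result, each POD imports exactly the EVPN type-5 routes that the other POD exports. After you commit, you can confirm that imported type-5 routes appear in the tenant routing table (a sample check):

user@Spine-1> show route table VRF-1.inet.0 protocol evpn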
- Enable EVPN to advertise direct next hops, specify VXLAN encapsulation, and assign VNI 500001 to the EVPN type-5 routes. Use VNI 500001 in this configuration on all of Spines 1 through 4.
For example:
Spine 1
set routing-instances VRF-1 protocols evpn ip-prefix-routes advertise direct-nexthop
set routing-instances VRF-1 protocols evpn ip-prefix-routes encapsulation vxlan
set routing-instances VRF-1 protocols evpn ip-prefix-routes vni 500001

- Define an EVPN type-5 export policy named ExportHostRoutes for tenant routing instance VRF-1.
For example, the following configuration establishes that VRF-1 advertises all host IPv4 and IPv6 addresses and prefixes learned by EVPN and from networks directly connected to Spine 1.
Spine 1
set policy-options policy-statement ExportHostRoutes term 1 from protocol evpn
set policy-options policy-statement ExportHostRoutes term 1 from route-filter 0.0.0.0/0 prefix-length-range /32-/32
set policy-options policy-statement ExportHostRoutes term 1 then accept
set policy-options policy-statement ExportHostRoutes term 2 from family inet6
set policy-options policy-statement ExportHostRoutes term 2 from protocol evpn
set policy-options policy-statement ExportHostRoutes term 2 from route-filter 0::0/0 prefix-length-range /128-/128
set policy-options policy-statement ExportHostRoutes term 2 then accept
set policy-options policy-statement ExportHostRoutes term 3 from protocol direct
set policy-options policy-statement ExportHostRoutes term 3 then accept
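To review the policy as committed, you can display it with the show policy command (a sample check):

user@Spine-1> show policy ExportHostRoutes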
- Apply the export policy named ExportHostRoutes to VRF-1.

For example:
Spine 1
set routing-instances VRF-1 protocols evpn ip-prefix-routes export ExportHostRoutes

- In this reference design, QFX5120-32C switches act as spine devices. For these switches and all other QFX5xxx switches that act as spine devices in a centrally routed bridging overlay, you must perform the following additional configuration to properly implement EVPN pure type-5 routing.
Spines 1 through 4
set routing-options forwarding-table chained-composite-next-hop ingress evpn
set forwarding-options vxlan-routing next-hop 32768
set forwarding-options vxlan-routing interface-num 8192
set forwarding-options vxlan-routing overlay-ecmp
How to Verify the Advertisement of EVPN Type-5 Routes on the Routing Devices in the PODs
To verify that the spine devices in this reference design are properly advertising EVPN type-5 routes:
- View the VRF route table to verify that the end system routes and spine device routes are being exchanged.
The following snippet of output shows IPv4 routes only.
Spine 1
user@Spine-1> show route table VRF-1
VRF-1.inet.0: 53 destinations, 93 routes (53 active, 0 holddown, 0 hidden)
@ = Routing Use Only, # = Forwarding Use Only
+ = Active Route, - = Last Active, * = Both

10.0.1.0/24        *[Direct/0] 1d 00:19:49
                    >  via irb.1
10.0.1.1/32        *[EVPN/7] 00:00:56
                    >  via irb.1
10.0.1.241/32      *[Local/0] 1d 00:19:49
                       Local via irb.1
10.0.1.254/32      *[Local/0] 1d 00:19:49
                       Local via irb.1
10.0.2.0/24        *[Direct/0] 1d 00:19:49
                    >  via irb.2
10.0.2.1/32        *[EVPN/7] 00:00:56
                    >  via irb.2
10.0.2.241/32      *[Local/0] 1d 00:19:49
                       Local via irb.2
10.0.2.254/32      *[Local/0] 1d 00:19:49
                       Local via irb.2
10.0.3.0/24        @[EVPN/170] 1d 00:17:54
                       to 172.16.101.0 via ae3.0
                    >  to 172.16.101.2 via ae4.0
                    [EVPN/170] 20:53:20
                       to 172.16.101.0 via ae3.0
                    >  to 172.16.101.2 via ae4.0
                   #[Multipath/255] 20:53:20, metric2 0
                       to 172.16.101.0 via ae3.0
                    >  to 172.16.101.2 via ae4.0
                       to 172.16.101.0 via ae3.0
                    >  to 172.16.101.2 via ae4.0
10.0.3.1/32        @[EVPN/170] 00:00:26
                       to 172.16.101.0 via ae3.0
                    >  to 172.16.101.2 via ae4.0
                    [EVPN/170] 00:00:26
                       to 172.16.101.0 via ae3.0
                    >  to 172.16.101.2 via ae4.0
                   #[Multipath/255] 00:00:26, metric2 0
                       to 172.16.101.0 via ae3.0
                    >  to 172.16.101.2 via ae4.0
                       to 172.16.101.0 via ae3.0
                    >  to 172.16.101.2 via ae4.0
10.0.4.0/24        @[EVPN/170] 1d 00:17:54
                       to 172.16.101.0 via ae3.0
                    >  to 172.16.101.2 via ae4.0
                    [EVPN/170] 20:53:20
                       to 172.16.101.0 via ae3.0
                    >  to 172.16.101.2 via ae4.0
                   #[Multipath/255] 20:53:20, metric2 0
                       to 172.16.101.0 via ae3.0
                    >  to 172.16.101.2 via ae4.0
                       to 172.16.101.0 via ae3.0
                    >  to 172.16.101.2 via ae4.0
10.0.4.1/32        @[EVPN/170] 00:00:26
                       to 172.16.101.0 via ae3.0
                    >  to 172.16.101.2 via ae4.0
                    [EVPN/170] 00:00:26
                       to 172.16.101.0 via ae3.0
                    >  to 172.16.101.2 via ae4.0
                   #[Multipath/255] 00:00:26, metric2 0
                       to 172.16.101.0 via ae3.0
                    >  to 172.16.101.2 via ae4.0
                       to 172.16.101.0 via ae3.0
                    >  to 172.16.101.2 via ae4.0
...

Spine 3
user@Spine-3> show route table VRF-1
10.0.1.0/24        @[EVPN/170] 1d 00:17:54
                       to 172.16.103.0 via ae3.0
                    >  to 172.16.103.2 via ae4.0
                    [EVPN/170] 20:53:20
                       to 172.16.103.0 via ae3.0
                    >  to 172.16.103.2 via ae4.0
                   #[Multipath/255] 20:53:20, metric2 0
                       to 172.16.103.0 via ae3.0
                    >  to 172.16.103.2 via ae4.0
                       to 172.16.103.0 via ae3.0
                    >  to 172.16.103.2 via ae4.0
10.0.1.1/32        @[EVPN/170] 00:00:26
                       to 172.16.103.0 via ae3.0
                    >  to 172.16.103.2 via ae4.0
                    [EVPN/170] 00:00:26
                       to 172.16.103.0 via ae3.0
                    >  to 172.16.103.2 via ae4.0
                   #[Multipath/255] 00:00:26, metric2 0
                       to 172.16.103.0 via ae3.0
                    >  to 172.16.103.2 via ae4.0
                       to 172.16.103.0 via ae3.0
                    >  to 172.16.103.2 via ae4.0
10.0.2.0/24        @[EVPN/170] 1d 00:17:54
                       to 172.16.103.0 via ae3.0
                    >  to 172.16.103.2 via ae4.0
                    [EVPN/170] 20:53:20
                       to 172.16.103.0 via ae3.0
                    >  to 172.16.103.2 via ae4.0
                   #[Multipath/255] 20:53:20, metric2 0
                       to 172.16.103.0 via ae3.0
                    >  to 172.16.103.2 via ae4.0
                       to 172.16.103.0 via ae3.0
                    >  to 172.16.103.2 via ae4.0
10.0.2.1/32        @[EVPN/170] 00:00:26
                       to 172.16.103.0 via ae3.0
                    >  to 172.16.103.2 via ae4.0
                    [EVPN/170] 00:00:26
                       to 172.16.103.0 via ae3.0
                    >  to 172.16.103.2 via ae4.0
                   #[Multipath/255] 00:00:26, metric2 0
                       to 172.16.103.0 via ae3.0
                    >  to 172.16.103.2 via ae4.0
                       to 172.16.103.0 via ae3.0
                    >  to 172.16.103.2 via ae4.0
10.0.3.0/24        *[Direct/0] 1d 00:19:49
                    >  via irb.3
10.0.3.1/32        *[EVPN/7] 00:00:56
                    >  via irb.3
10.0.3.241/32      *[Local/0] 1d 00:19:49
                       Local via irb.3
10.0.3.254/32      *[Local/0] 1d 00:19:49
                       Local via irb.3
10.0.4.0/24        *[Direct/0] 1d 00:19:49
                    >  via irb.4
10.0.4.1/32        *[EVPN/7] 00:00:56
                    >  via irb.4
10.0.4.241/32      *[Local/0] 1d 00:19:49
                       Local via irb.4
10.0.4.254/32      *[Local/0] 1d 00:19:49
                       Local via irb.4

...
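The snippets above show the IPv4 table only. Because the routing instance also enables multipath for the VRF-1.inet6.0 RIB, you can view the corresponding IPv6 routes with a similar command:

user@Spine-1> show route table VRF-1.inet6.0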
- Verify that EVPN type-5 IPv4 and IPv6 routes are exported and imported into the VRF-1 routing instance.
Spine 1
user@Spine-1> show evpn ip-prefix-database l3-context VRF-1
L3 context: VRF-1

IPv4->EVPN Exported Prefixes
Prefix                                       EVPN route status
10.0.1.0/24                                  Created
10.0.1.1/32                                  Created
10.0.2.0/24                                  Created
10.0.2.1/32                                  Created

IPv6->EVPN Exported Prefixes
Prefix                                       EVPN route status
2001:db8::10.0.1:0/112                       Created
2001:db8::10.0.1:1/128                       Created
2001:db8::10.0.2:0/112                       Created
2001:db8::10.0.2:1/128                       Created

EVPN->IPv4 Imported Prefixes
Prefix                                       Etag
10.0.3.0/24                                  0
  Route distinguisher    VNI/Label  Router MAC         Nexthop/Overlay GW/ESI
  192.168.0.3:1          500001     00:00:5e:00:53:f2  192.168.0.3
  192.168.0.4:1          500001     00:00:5e:00:53:d0  192.168.0.4
10.0.3.1/32                                  0
  Route distinguisher    VNI/Label  Router MAC         Nexthop/Overlay GW/ESI
  192.168.0.3:1          500001     00:00:5e:00:53:f2  192.168.0.3
  192.168.0.4:1          500001     00:00:5e:00:53:d0  192.168.0.4
10.0.4.0/24                                  0
  Route distinguisher    VNI/Label  Router MAC         Nexthop/Overlay GW/ESI
  192.168.0.3:1          500001     00:00:5e:00:53:f2  192.168.0.3
  192.168.0.4:1          500001     00:00:5e:00:53:d0  192.168.0.4
10.0.4.1/32                                  0
  Route distinguisher    VNI/Label  Router MAC         Nexthop/Overlay GW/ESI
  192.168.0.3:1          500001     00:00:5e:00:53:f2  192.168.0.3
  192.168.0.4:1          500001     00:00:5e:00:53:d0  192.168.0.4

EVPN->IPv6 Imported Prefixes
Prefix                                       Etag
2001:db8::10:0:3:0/112                       0
  Route distinguisher    VNI/Label  Router MAC         Nexthop/Overlay GW/ESI
  192.168.0.3:1          500001     00:00:5e:00:53:f2  192.168.0.3
  192.168.0.4:1          500001     00:00:5e:00:53:d0  192.168.0.4
2001:db8::10:0:3:1/128                       0
  Route distinguisher    VNI/Label  Router MAC         Nexthop/Overlay GW/ESI
  192.168.0.3:1          500001     00:00:5e:00:53:f2  192.168.0.3
  192.168.0.4:1          500001     00:00:5e:00:53:d0  192.168.0.4
2001:db8::10:0:4:0/112                       0
  Route distinguisher    VNI/Label  Router MAC         Nexthop/Overlay GW/ESI
  192.168.0.3:1          500001     00:00:5e:00:53:f2  192.168.0.3
  192.168.0.4:1          500001     00:00:5e:00:53:d0  192.168.0.4
2001:db8::10:0:4:1/128                       0
  Route distinguisher    VNI/Label  Router MAC         Nexthop/Overlay GW/ESI
  192.168.0.3:1          500001     00:00:5e:00:53:f2  192.168.0.3
  192.168.0.4:1          500001     00:00:5e:00:53:d0  192.168.0.4
Spine 3
user@Spine-3> show evpn ip-prefix-database l3-context VRF-1
L3 context: VRF-1

IPv4->EVPN Exported Prefixes
Prefix                                       EVPN route status
10.0.3.0/24                                  Created
10.0.3.1/32                                  Created
10.0.4.0/24                                  Created
10.0.4.1/32                                  Created

IPv6->EVPN Exported Prefixes
Prefix                                       EVPN route status
2001:db8::10.0.3:0/112                       Created
2001:db8::10.0.3:1/128                       Created
2001:db8::10.0.4:0/112                       Created
2001:db8::10.0.4:1/128                       Created

EVPN->IPv4 Imported Prefixes
Prefix                                       Etag
10.0.1.0/24                                  0
  Route distinguisher    VNI/Label  Router MAC         Nexthop/Overlay GW/ESI
  192.168.0.1:1          500001     00:00:5e:00:53:38  192.168.0.1
  192.168.0.2:1          500001     00:00:5e:00:53:29  192.168.0.2
10.0.1.1/32                                  0
  Route distinguisher    VNI/Label  Router MAC         Nexthop/Overlay GW/ESI
  192.168.0.1:1          500001     00:00:5e:00:53:38  192.168.0.1
  192.168.0.2:1          500001     00:00:5e:00:53:29  192.168.0.2
10.0.2.0/24                                  0
  Route distinguisher    VNI/Label  Router MAC         Nexthop/Overlay GW/ESI
  192.168.0.1:1          500001     00:00:5e:00:53:38  192.168.0.1
  192.168.0.2:1          500001     00:00:5e:00:53:29  192.168.0.2
10.0.2.1/32                                  0
  Route distinguisher    VNI/Label  Router MAC         Nexthop/Overlay GW/ESI
  192.168.0.1:1          500001     00:00:5e:00:53:38  192.168.0.1
  192.168.0.2:1          500001     00:00:5e:00:53:29  192.168.0.2

EVPN->IPv6 Imported Prefixes
Prefix                                       Etag
2001:db8::10:0:1:0/112                       0
  Route distinguisher    VNI/Label  Router MAC         Nexthop/Overlay GW/ESI
  192.168.0.1:1          500001     00:00:5e:00:53:38  192.168.0.1
  192.168.0.2:1          500001     00:00:5e:00:53:29  192.168.0.2
2001:db8::10:0:1:1/128                       0
  Route distinguisher    VNI/Label  Router MAC         Nexthop/Overlay GW/ESI
  192.168.0.1:1          500001     00:00:5e:00:53:38  192.168.0.1
  192.168.0.2:1          500001     00:00:5e:00:53:29  192.168.0.2
2001:db8::10:0:2:0/112                       0
  Route distinguisher    VNI/Label  Router MAC         Nexthop/Overlay GW/ESI
  192.168.0.1:1          500001     00:00:5e:00:53:38  192.168.0.1
  192.168.0.2:1          500001     00:00:5e:00:53:29  192.168.0.2
2001:db8::10:0:2:1/128                       0
  Route distinguisher    VNI/Label  Router MAC         Nexthop/Overlay GW/ESI
  192.168.0.1:1          500001     00:00:5e:00:53:38  192.168.0.1
  192.168.0.2:1          500001     00:00:5e:00:53:29  192.168.0.2
- Verify the EVPN type-5 route encapsulation details. The following output shows the details for specified prefixes.
Spine 1
user@Spine-1> show route table VRF-1 10.0.4.1 extensive
VRF-1.inet.0: 53 destinations, 93 routes (53 active, 0 holddown, 0 hidden)
10.0.4.1/32 (3 entries, 1 announced)
        State: CalcForwarding
TSI:
KRT in-kernel 10.0.4.1/32 -> {list:composite(99398), composite(129244)}
        @EVPN   Preference: 170/-101
                Next hop type: Indirect, Next hop index: 0
                Address: 0x16197b18
                Next-hop reference count: 31
                Next hop type: Router, Next hop index: 0
                Next hop: 172.16.101.0 via ae3.0
                Session Id: 0x0
                Next hop: 172.16.101.2 via ae4.0, selected
                Session Id: 0x0
                Protocol next hop: 192.168.0.4
                Composite next hop: 0x1b8ed840 99398 INH Session ID: 0x349
                  VXLAN tunnel rewrite:
                    MTU: 0, Flags: 0x0
                    Encap table ID: 0, Decap table ID: 1508
                    Encap VNI: 500001, Decap VNI: 500001
                    Source VTEP: 192.168.0.1, Destination VTEP: 192.168.0.4
                    SMAC: 00:00:5e:00:53:38, DMAC: 00:00:5e:00:53:f2
                Indirect next hop: 0x15bc4284 2101077 INH Session ID: 0x349
                State: Active Int Ext
                Age: 6:49       Metric2: 0
                Validation State: unverified
                Task: VRF-1-EVPN-L3-context
                AS path: I  (Originator)
                Cluster list: 192.168.2.10
                Originator ID: 192.168.0.4
                Communities: target:200:1 encapsulation:vxlan(0x8) router-mac:00:00:5e:00:53:f2
                Composite next hops: 1
                        Protocol next hop: 192.168.0.4
                        Composite next hop: 0x1b8ed840 99398 INH Session ID: 0x349
                          VXLAN tunnel rewrite:
                            MTU: 0, Flags: 0x0
                            Encap table ID: 0, Decap table ID: 1508
                            Encap VNI: 500001, Decap VNI: 500001
                            Source VTEP: 192.168.0.1, Destination VTEP: 192.168.0.4
                            SMAC: 00:00:5e:00:53:38, DMAC: 00:00:5e:00:53:f2
                        Indirect next hop: 0x15bc4284 2101077 INH Session ID: 0x349
                                Indirect path forwarding next hops: 2
                                        Next hop type: Router
                                        Next hop: 172.16.101.0 via ae3.0
                                        Session Id: 0x0
                                        Next hop: 172.16.101.2 via ae4.0
                                        Session Id: 0x0
                                192.168.0.4/32 Originating RIB: inet.0
                                  Node path count: 1
                                  Forwarding nexthops: 2
                                        Next hop type: Router
                                        Next hop: 172.16.101.0 via ae3.0
                                        Session Id: 0x0
                                        Next hop: 172.16.101.2 via ae4.0
                                        Session Id: 0x0
         EVPN   Preference: 170/-101
                Next hop type: Indirect, Next hop index: 0
                Address: 0x2755af1c
                Next-hop reference count: 31
                Next hop type: Router, Next hop index: 0
                Next hop: 172.16.101.0 via ae3.0
                Session Id: 0x0
                Next hop: 172.16.101.2 via ae4.0, selected
                Session Id: 0x0
                Protocol next hop: 192.168.0.3
                Composite next hop: 0x2a627e20 129244 INH Session ID: 0x84e
                  VXLAN tunnel rewrite:
                    MTU: 0, Flags: 0x0
                    Encap table ID: 0, Decap table ID: 1508
                    Encap VNI: 500001, Decap VNI: 500001
                    Source VTEP: 192.168.0.2, Destination VTEP: 192.168.0.3
                    SMAC: 00:00:5e:00:53:38, DMAC: 00:00:5e:00:53:d0
                Indirect next hop: 0x15bb6c04 2105498 INH Session ID: 0x84e
                State: Int Ext
                Inactive reason: Nexthop address
                Age: 6:49       Metric2: 0
                Validation State: unverified
                Task: VRF-1-EVPN-L3-context
                AS path: I  (Originator)
                Cluster list: 192.168.2.10
                Originator ID: 192.168.0.3
                Communities: target:200:1 encapsulation:vxlan(0x8) router-mac:00:00:5e:00:53:d0
                Composite next hops: 1
                        Protocol next hop: 192.168.0.3
                        Composite next hop: 0x2a627e20 129244 INH Session ID: 0x84e
                          VXLAN tunnel rewrite:
                            MTU: 0, Flags: 0x0
                            Encap table ID: 0, Decap table ID: 1508
                            Encap VNI: 500001, Decap VNI: 500001
                            Source VTEP: 192.168.0.2, Destination VTEP: 192.168.0.3
                            SMAC: 00:00:5e:00:53:38, DMAC: 00:00:5e:00:53:d0
                        Indirect next hop: 0x15bb6c04 2105498 INH Session ID: 0x84e
                                Indirect path forwarding next hops: 2
                                        Next hop type: Router
                                        Next hop: 172.16.101.0 via ae3.0
                                        Session Id: 0x0
                                        Next hop: 172.16.101.2 via ae4.0
                                        Session Id: 0x0
                                192.168.0.3/32 Originating RIB: inet.0
                                  Node path count: 1
                                  Forwarding nexthops: 2
                                        Next hop type: Router
                                        Next hop: 172.16.101.0 via ae3.0
                                        Session Id: 0x0
                                        Next hop: 172.16.101.2 via ae4.0
                                        Session Id: 0x0
        #Multipath Preference: 255
                Next hop type: Indirect, Next hop index: 0
                Address: 0xe3aa170
                Next-hop reference count: 19
                Next hop type: Router, Next hop index: 0
                Next hop: 172.16.101.0 via ae3.0
                Session Id: 0x0
                Next hop: 172.16.101.2 via ae4.0
                Session Id: 0x0
                Next hop type: Router, Next hop index: 0
                Next hop: 172.16.101.0 via ae3.0
                Session Id: 0x0
                Next hop: 172.16.101.2 via ae4.0, selected
                Session Id: 0x0
                Protocol next hop: 192.168.0.4
                Composite next hop: 0x1b8ed840 99398 INH Session ID: 0x349
                  VXLAN tunnel rewrite:
                    MTU: 0, Flags: 0x0
                    Encap table ID: 0, Decap table ID: 1508
                    Encap VNI: 500001, Decap VNI: 500001
                    Source VTEP: 192.168.0.2, Destination VTEP: 192.168.0.4
                    SMAC: 00:00:5e:00:53:38, DMAC: 00:00:5e:00:53:f2
                Indirect next hop: 0x15bc4284 2101077 INH Session ID: 0x349
                Protocol next hop: 192.168.0.3
                Composite next hop: 0x2a627e20 129244 INH Session ID: 0x84e
                  VXLAN tunnel rewrite:
                    MTU: 0, Flags: 0x0
                    Encap table ID: 0, Decap table ID: 1508
                    Encap VNI: 500001, Decap VNI: 500001
                    Source VTEP: 192.168.0.2, Destination VTEP: 192.168.0.3
                    SMAC: 00:00:5e:00:53:38, DMAC: 00:00:5e:00:53:d0
                Indirect next hop: 0x15bb6c04 2105498 INH Session ID: 0x84e
                State: ForwardingOnly Int Ext
                Inactive reason: Forwarding use only
                Age: 6:49       Metric2: 0
                Validation State: unverified
                Task: RT
                Announcement bits (1): 2-KRT
                AS path: I  (Originator)
                Cluster list: 192.168.2.10
                Originator ID: 192.168.0.4
                Communities: target:200:1 encapsulation:vxlan(0x8) router-mac:00:00:5e:00:53:f2
Spine 3
user@Spine-3> show route table VRF-1 10.0.1.1 extensive
VRF-1.inet.0: 53 destinations, 93 routes (53 active, 0 holddown, 0 hidden)
10.0.1.1/32 (3 entries, 1 announced)
        State: CalcForwarding
TSI:
KRT in-kernel 10.0.1.1/32 -> {list:composite(99398), composite(129244)}
        @EVPN   Preference: 170/-101
                Next hop type: Indirect, Next hop index: 0
                Address: 0x16197b18
                Next-hop reference count: 31
                Next hop type: Router, Next hop index: 0
                Next hop: 172.16.103.0 via ae3.0
                Session Id: 0x0
                Next hop: 172.16.103.2 via ae4.0, selected
                Session Id: 0x0
                Protocol next hop: 192.168.0.1
                Composite next hop: 0x1b8ed840 99398 INH Session ID: 0x349
                  VXLAN tunnel rewrite:
                    MTU: 0, Flags: 0x0
                    Encap table ID: 0, Decap table ID: 1508
                    Encap VNI: 500001, Decap VNI: 500001
                    Source VTEP: 192.168.0.3, Destination VTEP: 192.168.0.1
                    SMAC: 00:00:5e:00:53:f2, DMAC: 00:00:5e:00:53:38
                Indirect next hop: 0x15bc4284 2101077 INH Session ID: 0x349
                State: Active Int Ext
                Age: 6:49       Metric2: 0
                Validation State: unverified
                Task: VRF-1-EVPN-L3-context
                AS path: I  (Originator)
                Cluster list: 192.168.2.10
                Originator ID: 192.168.0.1
                Communities: target:100:1 encapsulation:vxlan(0x8) router-mac:00:00:5e:00:53:38
                Composite next hops: 1
                        Protocol next hop: 192.168.0.1
                        Composite next hop: 0x1b8ed840 99398 INH Session ID: 0x349
                          VXLAN tunnel rewrite:
                            MTU: 0, Flags: 0x0
                            Encap table ID: 0, Decap table ID: 1508
                            Encap VNI: 500001, Decap VNI: 500001
                            Source VTEP: 192.168.0.3, Destination VTEP: 192.168.0.1
                            SMAC: 00:00:5e:00:53:f2, DMAC: 00:00:5e:00:53:38
                        Indirect next hop: 0x15bc4284 2101077 INH Session ID: 0x349
                                Indirect path forwarding next hops: 2
                                        Next hop type: Router
                                        Next hop: 172.16.103.0 via ae3.0
                                        Session Id: 0x0
                                        Next hop: 172.16.103.2 via ae4.0
                                        Session Id: 0x0
                                192.168.0.1/32 Originating RIB: inet.0
                                  Node path count: 1
                                  Forwarding nexthops: 2
                                        Next hop type: Router
                                        Next hop: 172.16.103.0 via ae3.0
                                        Session Id: 0x0
                                        Next hop: 172.16.103.2 via ae4.0
                                        Session Id: 0x0
         EVPN   Preference: 170/-101
                Next hop type: Indirect, Next hop index: 0
                Address: 0x2755af1c
                Next-hop reference count: 31
                Next hop type: Router, Next hop index: 0
                Next hop: 172.16.103.0 via ae3.0
                Session Id: 0x0
                Next hop: 172.16.103.2 via ae4.0, selected
                Session Id: 0x0
                Protocol next hop: 192.168.0.2
                Composite next hop: 0x2a627e20 129244 INH Session ID: 0x84e
                  VXLAN tunnel rewrite:
                    MTU: 0, Flags: 0x0
                    Encap table ID: 0, Decap table ID: 1508
                    Encap VNI: 500001, Decap VNI: 500001
                    Source VTEP: 192.168.0.3, Destination VTEP: 192.168.0.2
                    SMAC: 00:00:5e:00:53:f2, DMAC: 00:00:5e:00:53:29
                Indirect next hop: 0x15bb6c04 2105498 INH Session ID: 0x84e
                State: Int Ext
                Inactive reason: Nexthop address
                Age: 6:49       Metric2: 0
                Validation State: unverified
                Task: VRF-1-EVPN-L3-context
                AS path: I  (Originator)
                Cluster list: 192.168.2.10
                Originator ID: 192.168.0.2
                Communities: target:100:1 encapsulation:vxlan(0x8) router-mac:00:00:5e:00:53:29
                Composite next hops: 1
                        Protocol next hop: 192.168.0.2
                        Composite next hop: 0x2a627e20 129244 INH Session ID: 0x84e
                          VXLAN tunnel rewrite:
                            MTU: 0, Flags: 0x0
                            Encap table ID: 0, Decap table ID: 1508
                            Encap VNI: 500001, Decap VNI: 500001
                            Source VTEP: 192.168.0.3, Destination VTEP: 192.168.0.2
                            SMAC: 00:00:5e:00:53:f2, DMAC: 00:00:5e:00:53:29
                        Indirect next hop: 0x15bb6c04 2105498 INH Session ID: 0x84e
                                Indirect path forwarding next hops: 2
                                        Next hop type: Router
                                        Next hop: 172.16.103.0 via ae3.0
                                        Session Id: 0x0
                                        Next hop: 172.16.103.2 via ae4.0
                                        Session Id: 0x0
                                192.168.0.3/32 Originating RIB: inet.0
                                  Node path count: 1
                                  Forwarding nexthops: 2
                                        Next hop type: Router
                                        Next hop: 172.16.103.0 via ae3.0
                                        Session Id: 0x0
                                        Next hop: 172.16.103.2 via ae4.0
                                        Session Id: 0x0
        #Multipath Preference: 255
                Next hop type: Indirect, Next hop index: 0
                Address: 0xe3aa170
                Next-hop reference count: 19
                Next hop type: Router, Next hop index: 0
                Next hop: 172.16.103.0 via ae3.0
                Session Id: 0x0
                Next hop: 172.16.103.2 via ae4.0
                Session Id: 0x0
                Next hop type: Router, Next hop index: 0
                Next hop: 172.16.103.0 via ae3.0
                Session Id: 0x0
                Next hop: 172.16.103.2 via ae4.0, selected
                Session Id: 0x0
                Protocol next hop: 192.168.0.1
                Composite next hop: 0x1b8ed840 99398 INH Session ID: 0x349
                  VXLAN tunnel rewrite:
                    MTU: 0, Flags: 0x0
                    Encap table ID: 0, Decap table ID: 1508
                    Encap VNI: 500001, Decap VNI: 500001
                    Source VTEP: 192.168.0.3, Destination VTEP: 192.168.0.1
                    SMAC: 00:00:5e:00:53:f2, DMAC: 00:00:5e:00:53:38
                Indirect next hop: 0x15bc4284 2101077 INH Session ID: 0x349
                Protocol next hop: 192.168.0.2
                Composite next hop: 0x2a627e20 129244 INH Session ID: 0x84e
                  VXLAN tunnel rewrite:
                    MTU: 0, Flags: 0x0
                    Encap table ID: 0, Decap table ID: 1508
                    Encap VNI: 500001, Decap VNI: 500001
                    Source VTEP: 192.168.0.3, Destination VTEP: 192.168.0.2
                    SMAC: 00:00:5e:00:53:f2, DMAC: 00:00:5e:00:53:29
                Indirect next hop: 0x15bb6c04 2105498 INH Session ID: 0x84e
                State: ForwardingOnly Int Ext
                Inactive reason: Forwarding use only
                Age: 6:49       Metric2: 0
                Validation State: unverified
                Task: RT
                Announcement bits (1): 2-KRT
                AS path: I  (Originator)
                Cluster list: 192.168.2.10
                Originator ID: 192.168.0.2
                Communities: target:100:1 encapsulation:vxlan(0x8) router-mac:00:00:5e:00:53:29
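As a final end-to-end check, you can source a ping from an IRB address in VRF-1 in one POD toward a host in the other POD (a sample check; the addresses are taken from the outputs above). A successful ping confirms that the EVPN type-5 routes are installed and that traffic is forwarded between the PODs through the super spine devices.

user@Spine-1> ping 10.0.4.1 routing-instance VRF-1 source 10.0.1.241 count 5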