Control Plane Implementation with IPv6 Link-Local Underlay and IPv6 Overlay Example

Consider the example depicted in Figure 42.

For the underlay, STRIPE1 LEAF 1 in AS 201 automatically establishes an EBGP session with SPINE 1 in AS 101 over the directly connected link FE80::1 <=> FE80::2. Similarly, STRIPE2 LEAF 1 in AS 209 establishes an EBGP session with SPINE 1 over its own FE80::1 <=> FE80::2 link. These are the link-local addresses automatically assigned to the interfaces based on their MAC addresses (shown here as FE80::1 and FE80::2 for simplicity), and they are auto-discovered by the BGP peers using standard IPv6 neighbor discovery mechanisms.
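On Junos devices, this style of link-local underlay peering can be configured with BGP auto-discovered neighbors, where each peer's link-local address is learned from its IPv6 router advertisements. A minimal sketch for STRIPE1 LEAF 1 follows; the interface name (et-0/0/1) and group/dynamic-neighbor names are illustrative assumptions, not taken from the example:

```
# Send IPv6 router advertisements so the peer can discover this router's link-local address
set protocols router-advertisement interface et-0/0/1.0

# EBGP underlay group; peers are auto-discovered via IPv6 neighbor discovery
set protocols bgp group underlay type external
set protocols bgp group underlay family inet6 unicast
set protocols bgp group underlay peer-as-list 101
set protocols bgp group underlay dynamic-neighbor fabric peer-auto-discovery family inet6 ipv6-nd
set protocols bgp group underlay dynamic-neighbor fabric peer-auto-discovery interface et-0/0/1.0
```

The same pattern is repeated on every fabric interface, so no per-link addressing or per-neighbor configuration is needed.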

Figure 42: IPv6 Link-Local Underlay and IPv6 Overlay Example

The underlay BGP sessions are configured to exchange IPv6 unicast routes and to advertise the addresses of the loopback interfaces (lo0.0) of STRIPE1 LEAF 1 (FC00:10::1:1), STRIPE2 LEAF 1 (FC00:10::1:9), and SPINE 1 (FC00:10::1). As a result, the leaf and spine nodes have the reachability needed to establish the EBGP overlay sessions. Once the overlay sessions are established, the leaf nodes, acting as VTEPs, advertise the links facing the GPU servers as EVPN Type 5 routes.
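On the leaf side, the overlay session can be sketched as an EBGP multihop session between loopbacks carrying EVPN signaling. The addresses below are taken from the example for STRIPE1 LEAF 1; the group name is an illustrative assumption, and the underlay export policy that advertises lo0.0 is not shown:

```
set protocols bgp group overlay type external
set protocols bgp group overlay multihop
set protocols bgp group overlay local-address FC00:10::1:1
set protocols bgp group overlay family evpn signaling
set protocols bgp group overlay neighbor FC00:10::1 peer-as 101
```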

Note:

Although it is not shown in the diagram, STRIPE1 LEAF 1 and STRIPE2 LEAF 1 will also establish EBGP sessions with SPINE 2, SPINE 3, and SPINE 4 to ensure multiple paths are available for traffic.

STRIPE1 LEAF 1 advertises the links connecting SERVER 1 GPU1 and SERVER 2 GPU1 (FC00:1:1:1::/64 and FC00:1:1:2::/64 respectively) to the spine nodes, which then advertise the routes to STRIPE2 LEAF 1. Similarly, STRIPE2 LEAF 1 advertises the links connecting SERVER 3 GPU1 and SERVER 4 GPU1 (FC00:1:1:3::/64 and FC00:1:1:4::/64 respectively).
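The GPU-facing links are ordinary IPv6 interfaces on the leaf. As an illustrative sketch for STRIPE1 LEAF 1 (the interface names and the leaf-side ::1 host addresses are assumptions; only the /64 prefixes come from the example):

```
# Link to SERVER 1 GPU1 (FC00:1:1:1::/64)
set interfaces et-0/0/10 unit 0 family inet6 address FC00:1:1:1::1/64
# Link to SERVER 2 GPU1 (FC00:1:1:2::/64)
set interfaces et-0/0/11 unit 0 family inet6 address FC00:1:1:2::1/64
```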

The spines are configured to maintain the next hop when advertising the routes received from the leaf nodes to other leaf nodes (no-nexthop-change). This allows VXLAN tunnels to be established directly between the leaf nodes rather than between the leaf and spine nodes.
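In Junos, next-hop preservation is a single knob under the overlay BGP group. A sketch for SPINE 1 (the group name is illustrative; the loopback address is taken from the example):

```
set protocols bgp group overlay type external
# Preserve the original protocol next hop (the advertising leaf's VTEP loopback)
set protocols bgp group overlay multihop no-nexthop-change
set protocols bgp group overlay family evpn signaling
set protocols bgp group overlay local-address FC00:10::1
```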

Figure 43: Spine Route Readvertisements

Because all four GPUs in the example belong to the same tenant, their associated interfaces are mapped to the same VRF, RT5-IP-VRF_TENANT-1, which is configured on both STRIPE1 LEAF 1 and STRIPE2 LEAF 1 with the same VXLAN Network Identifier (VNI) and route targets.
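A Junos sketch of such a Type 5 VRF on STRIPE1 LEAF 1 might look like the following. The route distinguisher, route target, and interface names are illustrative assumptions; the VRF name and VNI 1 are taken from the example:

```
set routing-instances RT5-IP-VRF_TENANT-1 instance-type vrf
set routing-instances RT5-IP-VRF_TENANT-1 interface et-0/0/10.0
set routing-instances RT5-IP-VRF_TENANT-1 interface et-0/0/11.0
set routing-instances RT5-IP-VRF_TENANT-1 route-distinguisher 201:1
set routing-instances RT5-IP-VRF_TENANT-1 vrf-target target:1:1
# Pure Type 5 routing: advertise IP prefixes with VXLAN encapsulation and VNI 1
set routing-instances RT5-IP-VRF_TENANT-1 protocols evpn ip-prefix-routes advertise direct-nexthop
set routing-instances RT5-IP-VRF_TENANT-1 protocols evpn ip-prefix-routes encapsulation vxlan
set routing-instances RT5-IP-VRF_TENANT-1 protocols evpn ip-prefix-routes vni 1
```

Because both leaf nodes use the same VNI and route targets, prefixes advertised by one leaf are imported into the same tenant VRF on the other.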

STRIPE1 LEAF 1 advertises the prefixes FC00:1:1:1::/64 and FC00:1:1:2::/64 to SPINE 1 as EVPN Route Type 5 routes, with its own loopback (FC00:10:1::1) as the next-hop VTEP. STRIPE2 LEAF 1 advertises FC00:1:1:3::/64 and FC00:1:1:4::/64 with its own loopback (FC00:10:1::9) as the next hop.

When SERVER 1 GPU1 sends traffic to SERVER 3 GPU1, the destination address matches the route FC00:1:1:3::/64 in the VRF routing table on STRIPE1 LEAF 1 (RT5-IP-VRF_TENANT-1.inet6.0). The route points to STRIPE2 LEAF 1 (VTEP FC00:10:1::9) as the protocol next hop, which is resolved through the link-local addresses of the spine nodes. The route also specifies VNI 1 as the VXLAN encapsulation ID. The packet is encapsulated with a VXLAN header and tunneled across the fabric to its destination.