
How to Configure an EVPN-VXLAN Fabric for a Campus Network With CRB

Requirements

This configuration example uses the following devices:

  • Two EX9251 switches as core devices. Software version: Junos OS Release 18.4R2-S4.5.

  • Two EX4600 switches as distribution devices. Software version: Junos OS Release 18.4R2-S4.5.

  • Two EX4300 switches as access layer devices.

  • One SRX Series security device for inter-routing instance traffic inspection.

  • One MX Series router for Internet access.

  • Three hosts to represent servers.

  • Re-validated using Junos OS Release 21.2R3.

  • See the Feature Explorer for supported platforms.

Overview

Use this NCE to deploy a single campus fabric with a Layer 3 IP-based underlay network that uses EVPN as the control plane protocol and VXLAN as the data plane protocol in the overlay network. In this example, you deploy a centrally-routed bridging (CRB) architecture with a virtual gateway address (VGA). See EVPN-VXLAN Campus Architectures for details on supported EVPN-VXLAN campus architectures.

First, you configure OSPF as the underlay routing protocol to exchange loopback routes. You then configure IBGP between the core and distribution devices in the overlay to share reachability information about endpoints in the fabric.

Topology

In this example, both core devices are configured with a unique IRB unicast address along with the same VGA. Figure 1 shows the physical topology with an SRX Series device, WAN router, and access layer devices. The IP addressing scheme is also shown.

There are three IRB interfaces to coincide with each VLAN (101, 102, and 103). The IRB interfaces are placed in separate routing instances for network segmentation. The SRX Series device enforces policy rules for transit traffic between Servers A and B. Server A can reach the Internet by leaking a default route learned from the WAN router into the routing instance associated with Server A. Servers B and C can reach each other directly by using the auto-export option to copy routes between their respective routing instances.

Figure 1: EVPN-VXLAN Fabric
Note:

We use /30 address ranges in this example for readability. If you need to conserve IP address space, consider using /31 addresses.

Also, it's best practice to design the network so the servers can send maximum sized frames without requiring fragmentation. Ethernet does not support fragmentation. Exceeding the core maximum transmission unit (MTU) therefore results in silent discards. To ensure MTU-related drops do not occur, the fabric should support the largest frame that the servers can generate with the added VXLAN encapsulation overhead. This example leaves the servers at the default 1500-byte MTU while configuring the fabric to support a 9000-byte MTU.
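For illustration, the fabric-facing interfaces could be set to the larger media MTU like this. The interface name is an assumption; the 9000-byte value is from this example:

```
set interfaces xe-0/0/0 mtu 9000
```

In Junos, the interface-level mtu is the media MTU, so it must accommodate the full VXLAN-encapsulated frame, including the original server frame plus roughly 50 bytes of VXLAN, UDP, IP, and Ethernet overhead.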

Refer to Appendix: Full Device Configurations for the full configuration of all devices used in the example topology.

Configure the Underlay

This section shows how to configure the OSPF IP fabric underlay on the core and distribution layer switches.

Underlay Topology

In this example, we use OSPF as the underlay protocol for loopback reachability. The MX Series router is also part of the underlay configuration. Its configuration is detailed in the Configure Internet Access section of this example.

Figure 2 shows the underlay topology.

Figure 2: Underlay Topology

Underlay Configuration

Use this section to configure the underlay on the core and distribution layer switches. We only show the step-by-step configuration for two devices: Core 1 and Distribution 1.

Refer to Appendix: Full Device Configurations for the full configuration of all devices used in the example topology.

Core 1 Configuration

  1. Configure the interfaces connected to the distribution layer switches and the MX Series router.

  2. Configure the loopback interface, the router ID, and per-packet load balancing.

  3. Configure OSPF on the interfaces connected to the distribution layer switches and the MX Series router.
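Taken together, the underlay steps above might look like the following on Core 1. The interface names and point-to-point /30 addresses are illustrative assumptions; the loopback address 10.1.255.1 matches the verification section of this example:

```
set interfaces xe-0/0/0 unit 0 family inet address 10.1.11.1/30
set interfaces xe-0/0/1 unit 0 family inet address 10.1.12.1/30
set interfaces lo0 unit 0 family inet address 10.1.255.1/32
set routing-options router-id 10.1.255.1
set policy-options policy-statement pfe-ecmp then load-balance per-packet
set routing-options forwarding-table export pfe-ecmp
set protocols ospf area 0.0.0.0 interface xe-0/0/0.0
set protocols ospf area 0.0.0.0 interface xe-0/0/1.0
set protocols ospf area 0.0.0.0 interface lo0.0 passive
```

Making the loopback passive advertises it into OSPF without forming an adjacency on it.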

Distribution 1 Configuration

  1. Configure the interfaces connected to the core devices.

  2. Configure the loopback interface, the router ID, and per-packet load balancing.

  3. Configure OSPF on the interfaces connected to the core switches.
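The equivalent sketch for Distribution 1 follows the same pattern. Interface names and link addresses are assumptions; the loopback 10.1.255.11 is consistent with the 10.1.255.12 loopback shown for Distribution 2 in the verification section:

```
set interfaces xe-0/0/0 unit 0 family inet address 10.1.11.2/30
set interfaces xe-0/0/1 unit 0 family inet address 10.1.21.2/30
set interfaces lo0 unit 0 family inet address 10.1.255.11/32
set routing-options router-id 10.1.255.11
set policy-options policy-statement pfe-ecmp then load-balance per-packet
set routing-options forwarding-table export pfe-ecmp
set protocols ospf area 0.0.0.0 interface xe-0/0/0.0
set protocols ospf area 0.0.0.0 interface xe-0/0/1.0
set protocols ospf area 0.0.0.0 interface lo0.0 passive
```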

Configure the Overlay

This section shows how to configure the overlay. It includes IBGP peerings, the VLAN to VXLAN mappings, and the IRB interface configurations for the virtual networks (VNs).

Overlay Topology

In this example, there are three VNs: 101, 102, and 103. The IRB interfaces for these VNs are defined on both of the core switches in keeping with a CRB architecture. The IRB interfaces are placed in different routing instances on the core switches for network segmentation. Place the IRB interfaces in the same routing instance if you do not need network segmentation in your deployment.

Figure 3 shows the overlay VN topology.

Figure 3: Overlay Topology

Overlay and Virtual Network Configuration

Use this section to configure the overlay and virtual networks (VNs) on the core and distribution layer switches. We only show the step-by-step configuration for two devices: Core 1 and Distribution 1.

Refer to Appendix: Full Device Configurations for the full configuration of all devices used in the example topology.

Core 1 Configuration

  1. Set the AS number and configure IBGP neighbors between the core and distribution devices.

    You do not need to configure IBGP neighbors between Core 1 and Core 2 because they receive all BGP updates from Distribution 1 and Distribution 2. Configure the core devices as route reflectors to eliminate the need for a full IBGP mesh between all distribution layer switches. Using route reflection makes the configuration on the distribution layer devices simple and consistent.

    Note:

    For full reachability to all IRB interface addresses, add an IBGP peering between Core 1 and Core 2.

  2. Configure Layer 3 IRB interfaces for the virtual networks and loopback interfaces. The IRB interface units 101, 102 and 103 match the VLAN IDs in our example to represent Servers A, B and C respectively.

  3. Configure the overlay virtual networks under a virtual switch routing instance.

  4. Configure the VRF routing instance for VLAN 101.

  5. Configure the VRF routing instance for VLAN 102. We are using a vrf-import policy and routing-options auto-export to leak routes between the VRFs for VLANs 102 and 103 to allow Servers B and C reachability. The vrf-import policy is shown in step 7.

  6. Configure the VRF routing instance for VLAN 103. We are using a vrf-import policy and routing-options auto-export to leak routes between the VRFs for VLANs 102 and 103 to allow Servers B and C reachability. The vrf-import policy is shown in the next step.

  7. Configure the vrf-import policy to leak routes between the VRFs for VLANs 102 and 103, and apply the policies to the appropriate VRFs as shown in the previous steps. In this example, we are just leaking the IRB interface subnets.
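The Core 1 overlay steps above could be sketched as follows. The AS number, IRB subnets, route distinguishers, route targets, and policy names are illustrative assumptions; the VNI values 1101 through 1103 and the loopbacks match the verification section. Only VLAN 101 in the virtual switch and the vrf_102 instance are shown in full (VLANs 102 and 103, vrf_101, and vrf_103 follow the same pattern):

```
set routing-options autonomous-system 65000
set protocols bgp group overlay type internal
set protocols bgp group overlay local-address 10.1.255.1
set protocols bgp group overlay family evpn signaling
set protocols bgp group overlay cluster 10.1.255.1
set protocols bgp group overlay neighbor 10.1.255.11
set protocols bgp group overlay neighbor 10.1.255.12
set interfaces irb unit 101 family inet address 10.1.101.2/24 virtual-gateway-address 10.1.101.1
set interfaces irb unit 102 family inet address 10.1.102.2/24 virtual-gateway-address 10.1.102.1
set interfaces irb unit 103 family inet address 10.1.103.2/24 virtual-gateway-address 10.1.103.1
set routing-instances evpn-vs instance-type virtual-switch
set routing-instances evpn-vs vtep-source-interface lo0.0
set routing-instances evpn-vs route-distinguisher 10.1.255.1:1
set routing-instances evpn-vs vrf-target target:65000:1
set routing-instances evpn-vs protocols evpn encapsulation vxlan
set routing-instances evpn-vs protocols evpn extended-vni-list all
set routing-instances evpn-vs vlans vlan101 vlan-id 101
set routing-instances evpn-vs vlans vlan101 vxlan vni 1101
set routing-instances evpn-vs vlans vlan101 l3-interface irb.101
set routing-instances vrf_102 instance-type vrf
set routing-instances vrf_102 interface irb.102
set routing-instances vrf_102 route-distinguisher 10.1.255.1:102
set routing-instances vrf_102 vrf-import LEAK-102-103
set routing-instances vrf_102 vrf-target target:65000:102
set routing-instances vrf_102 routing-options auto-export
set policy-options community rt-vrf-102 members target:65000:102
set policy-options community rt-vrf-103 members target:65000:103
set policy-options policy-statement LEAK-102-103 term irb-subnets from protocol direct
set policy-options policy-statement LEAK-102-103 term irb-subnets from community [ rt-vrf-102 rt-vrf-103 ]
set policy-options policy-statement LEAK-102-103 term irb-subnets then accept
```

With auto-export enabled in both VRFs, routes that pass each instance's vrf-import policy are copied between the local vrf_102 and vrf_103 tables; matching on protocol direct restricts the leak to the IRB interface subnets.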

Distribution 1 Configuration

  1. Configure IBGP neighbors from the distribution switch to the core switches.

  2. Configure switch options on the distribution switch.

  3. Enable VXLAN encapsulation.

  4. Configure VLANs and VXLAN mappings.
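A sketch of the Distribution 1 overlay steps above, under the same assumed AS number and VNI numbering (route distinguisher and route target values are assumptions):

```
set routing-options autonomous-system 65000
set protocols bgp group overlay type internal
set protocols bgp group overlay local-address 10.1.255.11
set protocols bgp group overlay family evpn signaling
set protocols bgp group overlay neighbor 10.1.255.1
set protocols bgp group overlay neighbor 10.1.255.2
set switch-options vtep-source-interface lo0.0
set switch-options route-distinguisher 10.1.255.11:1
set switch-options vrf-target target:65000:1
set protocols evpn encapsulation vxlan
set protocols evpn extended-vni-list all
set vlans vlan101 vlan-id 101
set vlans vlan101 vxlan vni 1101
set vlans vlan102 vlan-id 102
set vlans vlan102 vxlan vni 1102
set vlans vlan103 vlan-id 103
set vlans vlan103 vxlan vni 1103
```

Because the core devices are route reflectors, the distribution switch peers only with the two core loopbacks.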

Configure the SRX Series Device

This section shows how to configure the SRX Series device and the core devices to force Servers A and B to communicate through the SRX Series device. In this example, we allow all traffic to pass through the SRX Series device. In a real deployment, you would use the SRX Series device to enforce security policies and possibly other advanced security features like deep packet inspection or unified threat management (UTM).

SRX Topology

The SRX Series device advertises the routes associated with Servers A and B. The SRX Series device learns these routes from Core 1 and Core 2 using OSPF. This route exchange allows traffic between Servers A and B, but only when the traffic transits the SRX Series device.

Figure 4 shows the physical topology.

Figure 4: SRX Series Device Topology

SRX Configuration

Use this section to configure the SRX Series device, Core 1, and Core 2 to allow Servers A and B to communicate. Only the SRX Series device and Core 1 configurations are shown in the step-by-step procedure. Refer to Appendix: Full Device Configurations for the full configuration of all devices used in the example topology.

SRX Series Device Configuration

  1. Configure the interfaces connected to Core 1 and Core 2. Also configure the loopback interface.

  2. Configure OSPF to advertise routes between the VRF instances on Core 1 and Core 2. We put the configuration in a routing instance to keep the traffic separate from the main instance.

  3. Configure the security zones.

  4. Configure the security policies and address books. We configure an "accept all" policy that allows all traffic between the Servers A and B. This is typical for initial testing. After the expected connectivity is verified, you can put detailed security policies into effect.
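The SRX Series device steps above might look like the following. The interface names, link addresses, and zone and policy names are illustrative assumptions; only the Core 1-facing interface and the "accept all" policy between the two server zones are shown:

```
set interfaces ge-0/0/0 vlan-tagging
set interfaces ge-0/0/0 unit 101 vlan-id 101 family inet address 10.1.211.2/30
set interfaces ge-0/0/0 unit 102 vlan-id 102 family inet address 10.1.212.2/30
set interfaces lo0 unit 0 family inet address 10.1.255.21/32
set routing-instances srx-vr instance-type virtual-router
set routing-instances srx-vr interface ge-0/0/0.101
set routing-instances srx-vr interface ge-0/0/0.102
set routing-instances srx-vr protocols ospf area 0.0.0.0 interface ge-0/0/0.101
set routing-instances srx-vr protocols ospf area 0.0.0.0 interface ge-0/0/0.102
set security zones security-zone vlan101 interfaces ge-0/0/0.101
set security zones security-zone vlan101 host-inbound-traffic protocols ospf
set security zones security-zone vlan101 host-inbound-traffic system-services ping
set security zones security-zone vlan102 interfaces ge-0/0/0.102
set security zones security-zone vlan102 host-inbound-traffic protocols ospf
set security zones security-zone vlan102 host-inbound-traffic system-services ping
set security policies from-zone vlan101 to-zone vlan102 policy allow-all match source-address any
set security policies from-zone vlan101 to-zone vlan102 policy allow-all match destination-address any
set security policies from-zone vlan101 to-zone vlan102 policy allow-all match application any
set security policies from-zone vlan101 to-zone vlan102 policy allow-all then permit
set security policies from-zone vlan102 to-zone vlan101 policy allow-all match source-address any
set security policies from-zone vlan102 to-zone vlan101 policy allow-all match destination-address any
set security policies from-zone vlan102 to-zone vlan101 policy allow-all match application any
set security policies from-zone vlan102 to-zone vlan101 policy allow-all then permit
```

Allowing OSPF in host-inbound-traffic is required for the SRX Series device to form adjacencies with the core VRFs.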

Core 1 Configuration

  1. Configure the interface connected to the SRX Series device. We configure two units, one for VLAN 101 traffic and one for VLAN 102 traffic.

  2. Under the routing instance vrf_101, configure the router ID, the OSPF protocol, and the interface associated with VLAN 101.

  3. Under the routing instance vrf_102, configure the router ID, the OSPF protocol, and the interface associated with VLAN 102.
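On Core 1, the matching SRX-facing configuration could be sketched as follows. The interface name and link addresses are assumptions that pair with the SRX sketch above each VLAN-tagged unit:

```
set interfaces xe-0/0/2 vlan-tagging
set interfaces xe-0/0/2 unit 101 vlan-id 101 family inet address 10.1.211.1/30
set interfaces xe-0/0/2 unit 102 vlan-id 102 family inet address 10.1.212.1/30
set routing-instances vrf_101 interface xe-0/0/2.101
set routing-instances vrf_101 routing-options router-id 10.1.255.1
set routing-instances vrf_101 protocols ospf area 0.0.0.0 interface xe-0/0/2.101
set routing-instances vrf_102 interface xe-0/0/2.102
set routing-instances vrf_102 routing-options router-id 10.1.255.1
set routing-instances vrf_102 protocols ospf area 0.0.0.0 interface xe-0/0/2.102
```

Because each unit is placed in a different VRF, traffic between the Server A and Server B subnets has no direct path inside the core and must hairpin through the SRX Series device.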

Configure Internet Access

This section shows how to configure the MX Series router and the core devices to allow Server A access to the Internet.

Internet Access Topology

The MX Series router advertises a default route to Core 1 and Core 2 for Internet access. Core 1 and Core 2 copy the default route to the vrf_101 routing instance associated with Server A. The core devices also advertise the route to reach Server A to the MX Series router.

Figure 5 shows the physical topology.

Figure 5: Internet Access Topology

Internet Access Configuration

Use this section to configure Internet access for Server A on Core 1, Core 2, and the MX Series router. Only the MX Series router and Core 1 configurations are shown in the step-by-step procedure. Refer to Appendix: Full Device Configurations for the full configuration of all devices used in the example topology.

MX Series Router Configuration

  1. Configure the interfaces connected to the Core 1 and Core 2 devices. Also configure the loopback interface.

  2. Configure the router ID and OSPF on the interfaces connected to the core devices to learn and advertise routes.

  3. Configure a static default route and related policy to export the static route into OSPF. We use a static default route in this example, whereas a production network is likely to learn a default route via BGP from the Internet service provider.
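The MX Series router steps above might look like the following. The interface names, link addresses, loopback, and upstream next hop 203.0.113.1 are illustrative assumptions; 192.168.1.1, the Internet address pinged in the verification section, is shown here as the router's Internet-facing interface address:

```
set interfaces xe-0/0/0 unit 0 family inet address 10.1.31.2/30
set interfaces xe-0/0/1 unit 0 family inet address 10.1.32.2/30
set interfaces ge-0/0/2 unit 0 family inet address 192.168.1.1/24
set interfaces lo0 unit 0 family inet address 10.1.255.31/32
set routing-options router-id 10.1.255.31
set routing-options static route 0.0.0.0/0 next-hop 203.0.113.1
set policy-options policy-statement export-default from protocol static
set policy-options policy-statement export-default from route-filter 0.0.0.0/0 exact
set policy-options policy-statement export-default then accept
set protocols ospf export export-default
set protocols ospf area 0.0.0.0 interface xe-0/0/0.0
set protocols ospf area 0.0.0.0 interface xe-0/0/1.0
```

The export policy injects the static default into OSPF so the core devices learn it as an external route.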

Core 1 Configuration

  1. Configure a policy to accept the default route from the MX Series router. This route is then leaked into the routing instance for VLAN 101.

  2. Configure a policy to advertise the Server A subnet route to the MX Series router. The route is created in the next step.

  3. Configure a static route for the Server A subnet with a next-table of vrf_101.inet.0.

    The static route is advertised to the MX Series router. Traffic sent by the MX Series router to Server A uses the vrf_101 routing instance. We also configure rib-groups to import the default route learned from the MX Series router into the vrf_101 routing instance.

  4. Apply the rib-groups and policy configuration to OSPF.
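The Core 1 steps above could be sketched as follows. The policy and rib-group names are assumptions, as is the Server A subnet 10.1.101.0/24 (chosen to match the IRB sketch earlier):

```
set policy-options policy-statement accept-default term 1 from route-filter 0.0.0.0/0 exact
set policy-options policy-statement accept-default term 1 then accept
set policy-options policy-statement accept-default term 2 then reject
set policy-options policy-statement export-server-a term 1 from protocol static
set policy-options policy-statement export-server-a term 1 from route-filter 10.1.101.0/24 exact
set policy-options policy-statement export-server-a term 1 then accept
set routing-options static route 10.1.101.0/24 next-table vrf_101.inet.0
set routing-options rib-groups leak-to-vrf101 import-rib [ inet.0 vrf_101.inet.0 ]
set routing-options rib-groups leak-to-vrf101 import-policy accept-default
set protocols ospf rib-group leak-to-vrf101
set protocols ospf export export-server-a
```

The rib-group copies OSPF routes from inet.0 into vrf_101.inet.0, with the import policy restricting the copy to the default route; the static route gives the MX-facing main instance a path back to Server A via the VRF table.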

Configure the Access Layer

This section shows how to configure multihomed uplink interfaces from an access layer switch to distribution layer devices. The result is an aggregated Ethernet interface that has members connected to multiple distribution layer devices.

Access Layer Topology

The access layer supports Layer 2 for VLANs. The uplink from the access layer is an aggregated Ethernet link bundle, or link aggregation group (LAG). The LAG ae0 is configured as a trunk port to carry traffic from all access layer VLANs to the distribution layer switches.

Figure 6 shows the physical topology.

Figure 6: Access Layer Topology

Access Layer Configuration

Use this example to configure the distribution layer for EVPN multihoming. You also configure a conventional LAG interface on the access layer switch.

Only the Distribution 1, Access 1, and Access 2 configurations are shown in the step-by-step procedure. Refer to Appendix: Full Device Configurations for the Distribution 2 device configuration.

Distribution 1 Configuration

  1. Specify the members of the aggregated Ethernet bundle.

  2. Configure the aggregated Ethernet interface. This includes the Ethernet segment identifier (ESI), which assigns multihomed interfaces into an Ethernet segment. The ESI must match on all multihomed interfaces.
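A sketch of the Distribution 1 steps above. The member interface name and LACP system ID are assumptions; the ESI value is the one referenced in the verification section and must be identical on Distribution 2:

```
set chassis aggregated-devices ethernet device-count 1
set interfaces xe-0/0/10 ether-options 802.3ad ae0
set interfaces ae0 esi 00:00:01:01:01:01:01:01:01:01
set interfaces ae0 esi all-active
set interfaces ae0 aggregated-ether-options lacp active
set interfaces ae0 aggregated-ether-options lacp system-id 00:00:00:00:00:01
set interfaces ae0 unit 0 family ethernet-switching interface-mode trunk
set interfaces ae0 unit 0 family ethernet-switching vlan members [ vlan101 vlan102 vlan103 ]
```

The shared LACP system ID makes the two distribution switches appear as a single LACP partner to the access switch.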

Access Switch 1 Configuration

  1. Specify the link members for the aggregated Ethernet bundle.

  2. Configure the aggregated Ethernet interface.

  3. Configure the VLANs.

  4. Configure the interfaces connected to Servers A and C as trunk ports. Servers A and C are tagged in this example, and therefore the interface-mode is configured as a trunk.
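The Access Switch 1 steps above might look like the following; all interface names are assumptions, and the VLAN assignments follow the Server A (VLAN 101) and Server C (VLAN 103) placement in this example:

```
set chassis aggregated-devices ethernet device-count 1
set interfaces ge-0/0/0 ether-options 802.3ad ae0
set interfaces ge-0/0/1 ether-options 802.3ad ae0
set interfaces ae0 aggregated-ether-options lacp active
set interfaces ae0 unit 0 family ethernet-switching interface-mode trunk
set interfaces ae0 unit 0 family ethernet-switching vlan members [ vlan101 vlan102 vlan103 ]
set vlans vlan101 vlan-id 101
set vlans vlan102 vlan-id 102
set vlans vlan103 vlan-id 103
set interfaces ge-0/0/10 unit 0 family ethernet-switching interface-mode trunk
set interfaces ge-0/0/10 unit 0 family ethernet-switching vlan members vlan101
set interfaces ge-0/0/11 unit 0 family ethernet-switching interface-mode trunk
set interfaces ge-0/0/11 unit 0 family ethernet-switching vlan members vlan103
```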

Access Switch 2 Configuration

  1. On Access Switch 2, configure the interface connected to Distribution 2.

  2. Configure the VLANs.

  3. Configure the interface connected to Server B as an access port. Server B is untagged in this example, and therefore the interface-mode is set to access.
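Similarly for Access Switch 2, with assumed interface names and Server B on VLAN 102 as an untagged access port:

```
set interfaces ge-0/0/0 unit 0 family ethernet-switching interface-mode trunk
set interfaces ge-0/0/0 unit 0 family ethernet-switching vlan members vlan102
set vlans vlan102 vlan-id 102
set interfaces ge-0/0/10 unit 0 family ethernet-switching interface-mode access
set interfaces ge-0/0/10 unit 0 family ethernet-switching vlan members vlan102
```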

Verification

Log in to each device and verify that the EVPN-VXLAN fabric is functional.

Distribution 1

  1. On Distribution 1, verify the state of the BGP sessions with the core devices.

    Verify the Distribution 1 device IBGP sessions are established to the loopback addresses of the core devices. Recall the loopbacks are assigned the 10.1.255.1 and 10.1.255.2 IP addresses.

    The IBGP sessions are established with the loopback interfaces of the core devices using MP-IBGP. EVPN signaling is correctly enabled to exchange EVPN routes in the overlay.

  2. Verify that the EVPN database is correctly populated.

    Verify that the EVPN database is installing MAC address information for locally attached hosts. Confirm the local device is receiving advertisements from the other leaf devices with information about remote hosts.

    The output confirms that the EVPN database is properly learning and installing MAC routes for all endpoints. It also shows the relationship between MAC addresses and their associated VNIs: 1101, 1102 and 1103.

    The EVPN database learns MAC addresses from the access layer, which is multihomed to the distribution layer. This learning behavior is evidenced by the ESI, previously configured as 00:00:01:01:01:01:01:01:01:01, appearing as the active source for these entries.

  3. Verify that the local switching table is correctly populated.

    Verify that the local switching table is installing MAC address information for locally attached hosts. Also check that it is receiving MAC advertisements from remote leaf devices to learn about remote hosts.

    The output confirms the local switching table is correctly learning and installing MAC addresses for all endpoints. It also shows the relationship between MAC addresses and the VLANs they are associated with (in this case, VLANs 101, 102 and 103). A next-hop interface is listed for each MAC. Remote MACs are associated with a VTEP for VXLAN tunneling.

  4. Check the multihome connection from Access Switch 1 to the distribution devices. Verify:

    • The local interfaces that are part of the Ethernet segment

    • The remote distribution devices that are part of the same Ethernet segment

    • The bridge domains that are part of the Ethernet segment

    • The designated forwarder for the Ethernet segment

    Interface ae0.0 is part of this Ethernet segment. The virtual networks 101, 102 and 103 are part of this Ethernet segment. The remote provider edge (PE) or distribution device participating in this Ethernet segment is 10.1.255.12.

    In this multihomed Ethernet segment, the local distribution device, Distribution 1, is the designated forwarder for broadcast, unknown unicast, and multicast (BUM) traffic. This means only Distribution 1 forwards BUM traffic into this Ethernet segment.
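The checks in this section map to standard Junos operational commands. A sketch of what you might run on Distribution 1 (outputs omitted): show bgp summary for the IBGP session state, show evpn database for MAC-to-VNI bindings and ESI sources, show ethernet-switching table for the local switching table and remote VTEP next hops, and show evpn instance extensive for Ethernet segment membership and the designated forwarder election.

```
show bgp summary
show evpn database
show ethernet-switching table
show evpn instance extensive
```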

Core 1

  1. On Core 1, verify the BGP sessions with the core and distribution devices.

    Verify that IBGP sessions are established to the loopbacks of the distribution devices.

    The IBGP sessions are established to the loopback interfaces of the distribution devices using MP-IBGP. EVPN signaling is configured for EVPN route exchange in the overlay.

  2. Verify that the EVPN database is correctly populated.

    Verify that the EVPN database is receiving advertisements from the other distribution devices and is installing MAC address information for devices attached to the access layer. The core devices learn these MAC addresses through EVPN.

    The output confirms the EVPN database is properly learning and installing MAC routes for all endpoints. It also shows the relationship between MAC addresses and the VNIs they are associated with (1101, 1102 and 1103).

  3. Verify that the local switching table is correctly populated.

    Verify that the local switching table is receiving advertisements from the other distribution devices and installing MAC address information for devices attached to the access layer.

    The output confirms the local switching table is correctly learning and installing MAC addresses for all endpoints. It also shows the relationship between MAC addresses, the VLANs they are associated with (in this case, VLANs 101, 102 and 103), and their next-hop interface.

Reachability

  1. Verify Server A to Server B reachability. Confirm Server A can ping Server B by transiting the SRX Series device.

    Verify the route for Server B is in the vrf_101 routing instance, and that the route for Server A is in the vrf_102 routing instance on both core devices.

    Ping Server B from Server A.

    Confirm the flow session on the SRX Series device. You might need to leave the pings going to see the flow session on the SRX Series device. By default, flow sessions time out after a few seconds.

    Trace the route from Server A to Server B to confirm the Layer 3 forwarding hops.

    The outputs above confirm that Server A can ping Server B and that the traffic is transiting the SRX Series device.

    Note:

    Server A should not be able to reach Server C with the configuration in this example.

  2. Verify Server A can reach the Internet through the MX Series router.

    First, verify the default route is in the vrf_101 routing instance, and that the route for Server A is in the inet.0 table with a next hop of the vrf_101.inet.0 table on both core devices.

    Then generate pings and traceroutes to the Internet (192.168.1.1 in our example) from Server A.

    The output confirms that Server A can ping the Internet.

  3. Verify Server C to Server B reachability. Confirm Server C can ping Server B.

    First, verify the routes for each server are present in the vrf_102 and vrf_103 routing instances.

    Then generate pings and trace the route to Server B from Server C.

    The output above confirms that Server C can ping Server B.

    You have successfully configured your EVPN-VXLAN fabric for a campus network with CRB.