Technical Overview

Underlay Network

An EVPN-VXLAN fabric architecture makes the network infrastructure simple and consistent across campuses and data centers. All the core, distribution, and access devices must be connected using an L3 infrastructure. We recommend deploying a Clos-based IP fabric to ensure predictable performance and to enable a consistent, scalable architecture.

You can use any L3 routing protocol to exchange loopback addresses between the access, core, and distribution devices. BGP provides benefits such as better prefix filtering, traffic engineering, and route tagging, so we use eBGP as the underlay routing protocol in this example. Mist automatically provisions private autonomous system (AS) numbers and all BGP configuration for the underlay and overlay, but only for the campus fabric. Options are available to add BGP speakers so that you can peer with external BGP peers.

Underlay BGP is used to learn loopback addresses from peers so that overlay BGP can establish neighbor relationships using those loopback addresses. The overlay is then used to exchange EVPN routes.
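
For illustration, the following is a minimal Junos-style sketch of the underlay eBGP configuration on one fabric device. The interface names, /31 addresses, and private AS numbers are hypothetical; Mist provisions these values automatically.

    # Hypothetical /31 point-to-point link toward an adjacent layer
    set interfaces xe-0/0/0 unit 0 family inet address 172.16.1.0/31
    set interfaces lo0 unit 0 family inet address 192.168.255.1/32
    # eBGP underlay session; each device uses its own private AS
    set routing-options autonomous-system 65001
    set protocols bgp group evpn_underlay type external
    set protocols bgp group evpn_underlay neighbor 172.16.1.1 peer-as 65002
    # Advertise the loopback so that overlay BGP can peer between loopbacks
    set policy-options policy-statement underlay-export term lo0 from interface lo0.0
    set policy-options policy-statement underlay-export term lo0 then accept
    set protocols bgp group evpn_underlay export underlay-export
    # Use all equal-cost paths between adjacent layers
    set protocols bgp group evpn_underlay multipath multiple-as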

Figure 1: Point-to-Point /31 Links Between Adjacent Layers Running eBGP

Network overlays enable connectivity and addressing independent of the physical network. Ethernet frames are encapsulated in UDP/IP datagrams for transport over the underlay. VXLAN thereby enables virtual L2 subnets or VLANs to span an underlying physical L3 network.

In a VXLAN overlay network, each L2 subnet or segment is uniquely identified by a Virtual Network Identifier (VNI). A VNI segments traffic the same way that a VLAN ID does. This mapping occurs on the access switches and border gateway, which can reside on the core or services block. As is the case with VLANs, endpoints within the same virtual network can communicate directly with each other.

Endpoints in different virtual networks require a device that supports inter-VXLAN routing, typically a router or a high-end switch known as an L3 gateway. The entity that performs VXLAN encapsulation and decapsulation is called a VXLAN tunnel endpoint (VTEP). Each VTEP acts as the L2 gateway and is typically assigned the device's loopback address. This is also where VNI-to-VLAN mapping takes place.
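
A minimal Junos-style sketch of the VLAN-to-VNI mapping and VTEP definition on an L2 gateway follows; the VLAN ID, VNI, route distinguisher, and route target are hypothetical, and Mist provisions these values automatically.

    # Map VLAN 100 to VXLAN VNI 10100 on the L2 gateway
    set vlans vlan-100 vlan-id 100
    set vlans vlan-100 vxlan vni 10100
    # Source VXLAN tunnels from the device loopback (the VTEP address)
    set switch-options vtep-source-interface lo0.0
    set switch-options route-distinguisher 192.168.255.1:1
    set switch-options vrf-target target:65000:1
    # Signal all configured VNIs through EVPN
    set protocols evpn encapsulation vxlan
    set protocols evpn extended-vni-list all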

Figure 2: VXLAN VTEP Tunnels

VXLAN can be deployed as a tunneling protocol across an L3 IP campus fabric without a control plane protocol. However, the use of VXLAN tunnels alone does not change the flood-and-learn behavior of the Ethernet protocol.

The two primary methods for using VXLAN without a control plane protocol are static unicast VXLAN tunnels and VXLAN tunnels signaled with a multicast underlay. Neither method solves the inherent flood-and-learn problem, and both are difficult to scale in large, multitenant environments. These methods are not in the scope of this JVD.

Understanding EVPN

Ethernet VPN (EVPN) is a BGP extension to distribute endpoint reachability information such as MAC and IP addresses to other BGP peers. This control plane technology uses Multiprotocol BGP (MP-BGP) for MAC and IP address endpoint distribution, where MAC addresses are treated as Type 2 EVPN routes. EVPN enables devices acting as VTEPs to exchange reachability information with each other about their endpoints.

Juniper-supported EVPN standards: https://www.juniper.net/documentation/us/en/software/junos/evpn-vxlan/topics/concept/evpn.html

What is EVPN-VXLAN: https://www.juniper.net/us/en/research-topics/what-is-evpn-vxlan.html
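
On Junos devices, you can inspect the EVPN control plane state with operational commands such as the following (shown here without output):

    show route table bgp.evpn.0       # EVPN routes, including Type 2 MAC/IP routes
    show evpn database                # MAC addresses learned locally and through EVPN
    show ethernet-switching vxlan-tunnel-end-point remote    # Remote VTEPs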

The benefits of using EVPNs include:

  • MAC address mobility
  • Multitenancy
  • Load balancing across multiple links
  • Fast convergence
  • High availability
  • Scale
  • Standards-based interoperability

EVPN provides multipath forwarding and redundancy through an all-active model. The access layer can connect to two or more distribution devices and forward traffic using all the links. If an access link or distribution device fails, traffic flows from the access layer toward the distribution layer using the remaining active links. For traffic in the other direction, remote distribution devices update their forwarding tables to send traffic to the remaining active distribution devices connected to the multihomed Ethernet segment.

The technical capabilities of EVPN include:

  • Minimal flooding—EVPN creates a control plane that shares end-host MAC addresses between VTEPs.
  • Multihoming—EVPN supports multihoming for client devices. Because traffic traveling across the topology must be intelligently balanced across multiple paths, a control protocol such as EVPN is needed to synchronize endpoint addresses between the access switches.
  • Aliasing—EVPN leverages all-active multihoming when connecting devices to the access layer of a campus fabric. The multihomed connection to the access layer switches is called an ESI-LAG, and the attached devices connect to each access switch using standard LACP.
  • Split horizon—Split horizon prevents the looping of broadcast, unknown unicast, and multicast (BUM) traffic in a network. With split horizon, a packet is never sent back over the same interface it was received on, which prevents loops.

Overlay Network (Data Plane)

VXLAN is the overlay data plane encapsulation protocol that tunnels Ethernet frames between network endpoints over the underlay network. Devices that perform VXLAN encapsulation and decapsulation for the network are referred to as VTEPs. Before a VTEP sends a frame into a VXLAN tunnel, it wraps the original frame in a VXLAN header that includes a Virtual Network Identifier (VNI); the ingress switch maps the original VLAN to this VNI. The frame is then encapsulated in a UDP/IP datagram for transmission to the remote VTEP over the IP fabric, where the VXLAN header is removed and the VNI-to-VLAN translation happens at the egress switch.
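
Conceptually, a tunneled frame carries the following encapsulation stack. VXLAN uses UDP destination port 4789, and the 8-byte VXLAN header carries the 24-bit VNI.

    Outer Ethernet | Outer IP (VTEP to VTEP) | UDP (dst port 4789) | VXLAN header (VNI) | Original Ethernet frame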

Figure 3: VXLAN Header

VTEPs are software entities, tied to the device's loopback address, that source and terminate VXLAN tunnels. VXLAN tunnels in an IP Clos fabric are provisioned on the following:

  • Access switches, to extend services across the campus fabric IP Clos.
  • Core switches, when acting as border routers, to interconnect the campus fabric with the outside network.
  • Services block devices, to interconnect the campus fabric with the outside network.

Overlay Network (Control Plane)

MP-BGP with EVPN signaling acts as the overlay control plane protocol. Switches in adjacent layers set up eBGP peerings using their loopback addresses, with next hops announced by the underlay BGP sessions. For example, core and distribution devices establish eBGP sessions with each other, as do access and distribution devices. When there is an L2 forwarding table update on any switch participating in the campus fabric, the switch sends a BGP update message with the new MAC route to the other devices in the fabric. Those devices then update their local EVPN databases and routing tables.
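
A minimal Junos-style sketch of the overlay eBGP session with EVPN signaling follows; the loopback addresses and AS numbers are hypothetical, and Mist provisions these values automatically.

    # Overlay eBGP between loopbacks learned through the underlay
    set protocols bgp group evpn_overlay type external
    set protocols bgp group evpn_overlay multihop ttl 2
    set protocols bgp group evpn_overlay local-address 192.168.255.1
    set protocols bgp group evpn_overlay family evpn signaling
    set protocols bgp group evpn_overlay neighbor 192.168.255.11 peer-as 65101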

Figure 4: EVPN-VXLAN Overlay Network with a Services Block

Resiliency and Load Balancing

We support Bidirectional Forwarding Detection (BFD) as part of the BGP protocol implementation. BFD provides fast convergence in the event of a device or link failure without relying on the routing protocol's timers. Mist configures BFD minimum intervals of 350 ms in the underlay and 1000 ms in the overlay. Load balancing, per packet by default, is supported across all links within the campus fabric using equal-cost multipath (ECMP) routing enabled at the forwarding plane.
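
The BFD timers and forwarding-plane ECMP described above map to Junos statements such as the following; the BGP group and policy names are hypothetical.

    # BFD on the underlay (350 ms) and overlay (1000 ms) BGP sessions
    set protocols bgp group evpn_underlay bfd-liveness-detection minimum-interval 350
    set protocols bgp group evpn_overlay bfd-liveness-detection minimum-interval 1000
    # Enable ECMP in the forwarding table
    set policy-options policy-statement ecmp-policy then load-balance per-packet
    set routing-options forwarding-table export ecmp-policy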

Ethernet Segment Identifier (ESI)

When devices such as servers and access points are multihomed to two or more switches at the access layer in a campus fabric, an ESI-LAG is formed on the access layer devices. The ESI is a 10-octet value that identifies the Ethernet segment among all the access layer switches participating in the segment, and MP-BGP is the control plane protocol used to coordinate this information. An ESI-LAG enables link failover in the event of a bad link, supports active-active load balancing, and is assigned automatically by Mist.
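
A minimal sketch of an ESI-LAG in Junos style follows; the ESI value, LACP system ID, and interface names are hypothetical, and in practice Mist assigns them automatically. The same ESI and LACP system ID must be configured on each access switch in the segment.

    # Same ESI and LACP system ID on each multihoming access switch
    set interfaces ae0 esi 00:11:00:00:00:00:00:00:00:01
    set interfaces ae0 esi all-active
    set interfaces ae0 aggregated-ether-options lacp active
    set interfaces ae0 aggregated-ether-options lacp system-id 00:00:00:00:00:01
    # Member link toward the multihomed endpoint
    set interfaces ge-0/0/10 ether-options 802.3ad ae0
    set interfaces ae0 unit 0 family ethernet-switching vlan members vlan-100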

Figure 5: Device Resiliency and Load Balancing

Services Block

You may need to position critical infrastructure services off a dedicated pair of Juniper switches; examples include WAN and firewall connectivity and RADIUS and DHCP servers. If you need to deploy a lean core, a dedicated services block removes the need for the core to support encapsulation and de-encapsulation of VXLAN tunnels, multiple routing instances, and additional L3 routing protocols. The border capability is supported directly off the core layer or on a dedicated pair of services block switches.

Figure 6: Services Block

Access Layer

The access layer provides network connectivity to end-user devices such as personal computers, VoIP phones, printers, and IoT devices, as well as connectivity to wireless access points. The EVPN-VXLAN network extends to all the access layer switches.

Figure 7: Endpoint Access

In this example, each access switch or Virtual Chassis is multihomed to two or more distribution switches. Juniper’s Virtual Chassis reduces the number of ports required on distribution switches and optimizes the availability of fiber throughout the campus. The Virtual Chassis supports up to 10 member switches (depending on the switch model) and is managed as a single device. See https://www.juniper.net/documentation/us/en/software/junos/vcf-best-practices-guide/vcf-best-practices-guide.pdf.
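
For illustration, a preprovisioned Virtual Chassis can be defined with Junos statements such as the following; the member serial numbers are hypothetical.

    # Preprovision members by serial number and assign roles
    set virtual-chassis preprovisioned
    set virtual-chassis member 0 serial-number ABC0123456789 role routing-engine
    set virtual-chassis member 1 serial-number DEF0123456789 role routing-engine
    set virtual-chassis member 2 serial-number GHI0123456789 role line-card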

With EVPN running as the control plane protocol, any access switch or Virtual Chassis device can enable active-active multihoming to the distribution layer. EVPN provides a standards-based multihoming solution that scales horizontally across any number of access layer switches.

Campus Fabric Organizational Deployment

Mist campus fabric supports deployments at the Site and Organization levels. The Organization-based deployment, shown in Figure 8, targets enterprises that need to align with a POD structure.

Figure 8: Campus Fabric Node Configuration
Note:

Site-level deployment is the focus of this JVD.

Juniper Access Points

In our network, we choose Juniper Mist access points (APs). They are designed from the ground up to meet the stringent networking needs of the modern cloud and smart-device era. Mist delivers unique capabilities for both wired and wireless LAN:

  • Wired and wireless assurance—Mist is enabled with wired and wireless assurance. Once configured, Service Level Expectations (SLEs) for key wired and wireless performance metrics, such as throughput, capacity, roaming, and uptime, are monitored in the Mist platform. This JVD uses Mist wired assurance services.
  • Marvis—An integrated AI engine that provides rapid wired and wireless troubleshooting, trending analysis, anomaly detection, and proactive problem remediation.

Mist Edge

For large campus networks, Mist Edge provides seamless roaming through on-premises tunnel termination of traffic to and from the Juniper APs. Juniper Mist Edge extends select microservices to the customer premises while using the Juniper Mist cloud and its distributed software architecture for scalable and resilient operations, management, troubleshooting, and analytics. Juniper Mist Edge is deployed as a standalone appliance with multiple variants for different-sized deployments.

Evolving IT departments look for a cohesive approach to managing wired, wireless, and WAN networks. This full-stack approach simplifies and automates operations, provides end-to-end troubleshooting, and ultimately evolves into the Self-Driving Network. The integration of the Mist platform in this JVD addresses both full-stack deployments and automation. For more details on Mist integration with EX switches, see: How to Connect Mist Access Points and Juniper EX Series Switches.