Technical Overview

 

Virtual Chassis Fabric Overview

The Juniper Networks Virtual Chassis Fabric (VCF) provides a low-latency, high-performance fabric architecture that can be managed as a single device. VCF evolves the Virtual Chassis feature, which enables you to interconnect multiple devices and manage them as a single logical device, into a fabric architecture. The VCF architecture is optimized to support small- and medium-sized data centers that contain a mix of 1-Gbps, 10-Gbps, and 40-Gbps Ethernet interfaces.

A VCF is constructed using a spine-and-leaf architecture, where each spine device is connected to each leaf device. A VCF supports up to twenty total devices, with up to four configured as spine devices.
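
The hop-count property this topology guarantees can be illustrated with a short sketch (hypothetical Python, not a Juniper tool): because every spine connects to every leaf, any leaf reaches any spine in one hop and any other leaf in two. The device counts follow the limits described above.

```python
# Illustrative sketch of a VCF-style spine-and-leaf topology (not a Juniper tool).
from itertools import product

MAX_DEVICES = 20  # a VCF supports up to twenty total devices
MAX_SPINES = 4    # up to four of which are spine devices

def build_fabric(num_spines: int, num_leaves: int) -> dict[str, set[str]]:
    """Return an adjacency map in which every spine connects to every leaf."""
    assert num_spines <= MAX_SPINES, "at most four spine devices"
    assert num_spines + num_leaves <= MAX_DEVICES, "at most twenty devices total"

    spines = [f"spine{i}" for i in range(num_spines)]
    leaves = [f"leaf{i}" for i in range(num_leaves)]
    adjacency: dict[str, set[str]] = {d: set() for d in spines + leaves}
    for spine, leaf in product(spines, leaves):
        adjacency[spine].add(leaf)
        adjacency[leaf].add(spine)
    return adjacency

fabric = build_fabric(num_spines=4, num_leaves=16)
# Every leaf is one hop from every spine, so any leaf-to-leaf path is two hops
# (leaf -> spine -> leaf), which is why latency is predictable.
assert all("spine0" in fabric[leaf] for leaf in fabric if leaf.startswith("leaf"))
```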

Figure 1 illustrates a typical VCF spine-and-leaf architecture.

Figure 1: VCF Spine-and-Leaf Architecture

Note that each spine device must be a QFX5100 device, and in an optimal VCF configuration the leaf devices are also QFX5100s. However, you can also create a mixed VCF using QFX3600, QFX3500, and EX4300 switches as leaf devices.

Note

For EVPN-VXLAN integration, the VCF must consist of QFX5100 devices only.

A VCF provides the following benefits:

  • Latency—VCF provides predictable low latency because it uses a fabric architecture that ensures each device is one or two hops away from every other device in the fabric. The weighted algorithm that makes traffic-forwarding decisions in a VCF is designed to avoid congestion and ensures low latency by intelligently forwarding traffic over all paths within the VCF.

  • Resiliency—The VCF architecture provides a resilient framework because traffic has multiple paths across the fabric. Traffic is, therefore, easily diverted within the fabric when a device or link fails.

  • Flexibility—You can easily expand the size of your VCF by adding devices to the fabric as your networking needs grow.

  • Investment protection—In environments that need to expand beyond the capabilities of a traditional QFX5100, QFX3600, QFX3500, or EX4300 Virtual Chassis, a VCF is often a logical upgrade option because it enables the system to evolve by leveraging existing devices.

  • Manageability—VCF provides multiple features that simplify configuration and management, such as autoprovisioning to add new devices into the fabric after minimal initial configuration. VCF also leverages many of the existing configuration procedures from a Virtual Chassis.

Note

For more information on Virtual Chassis Fabric, see the Virtual Chassis Fabric Feature Guide.

Understanding VXLAN

Network overlays are created by encapsulating traffic and tunneling it over a physical network. A number of tunneling protocols can be used in the data center to create network overlays—the most common protocol is Virtual Extensible LAN (VXLAN). The VXLAN tunneling protocol encapsulates Layer 2 Ethernet frames in Layer 3 UDP packets to enable virtual Layer 2 subnets or segments that can span the underlying (physical) Layer 3 network.
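
The VXLAN header itself is small and fixed (RFC 7348): an 8-byte header containing a flags field and the 24-bit VNI, carried inside a UDP packet (destination port 4789 by default). The minimal Python sketch below packs that header around an opaque inner Ethernet frame purely to illustrate the encapsulation; it is not how a switch implements VXLAN, and the outer UDP/IP headers are omitted.

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned default destination port for VXLAN

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the 8-byte VXLAN header (RFC 7348) to an inner Ethernet frame.

    A sending VTEP would then wrap the result in UDP/IP toward the remote VTEP;
    that outer header is omitted here for brevity.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit value")
    flags = 0x08  # "I" bit set: a valid VNI is present
    # Header layout: flags (1 byte), 3 reserved bytes, VNI (3 bytes), 1 reserved byte.
    header = struct.pack("!B3s3sB", flags, b"\x00" * 3, vni.to_bytes(3, "big"), 0)
    return header + inner_frame

packet = vxlan_encapsulate(inner_frame=b"\x00" * 60, vni=5010)
assert len(packet) == 8 + 60
```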

In a VXLAN overlay network, each Layer 2 subnet or segment is uniquely identified by a virtual network identifier (VNI). A VNI segments traffic the same way that a VLAN ID does. As is the case with VLANs, endpoints with the same VNI can communicate directly with each other, whereas endpoints on different VNIs require a router, or gateway.

The entity that performs VXLAN encapsulation and decapsulation is called a VXLAN tunnel endpoint (VTEP). VTEPs typically reside in hypervisor hosts, such as ESXi or KVM hosts, but can also reside in network devices to support bare-metal server (BMS) endpoints. Each VTEP is typically assigned a unique IP address.

VXLAN Control Plane Limitations

The VXLAN abstraction does not change the flood-and-learn behavior of the Ethernet protocol, which has inherent limitations in terms of scalability, efficiency, and utilization.

VXLAN can be deployed as a tunneling protocol across a Layer 3 IP fabric data center without a control plane protocol. Two primary methods exist for doing this: VXLAN with a multicast-enabled underlay, and static unicast VXLAN tunnels. While both are viable options for eliminating Layer 2 in the underlay, neither solves the inherent flood-and-learn problem, and both are difficult to scale to large multitenant environments.

Instead, the solution is to introduce a control plane that minimizes flooding and facilitates learning. To facilitate learning, the control plane distributes end-host information to the VTEPs in the same segment.
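
The sketch below contrasts the two behaviors. Without a control plane, a VTEP that has no entry for a destination MAC address floods the frame to every other VTEP in the segment; with a control plane, the MAC-to-VTEP binding has already been distributed, so the frame is tunneled to a single remote VTEP. The class and method names are invented for this example and are not part of any Juniper API.

```python
# Illustrative contrast between flood-and-learn and control-plane-driven
# forwarding on a VTEP. Names and addresses are invented for the example.

class Vtep:
    def __init__(self, ip: str, peers: list[str]):
        self.ip = ip
        self.peers = peers                      # other VTEPs in the same segment
        self.mac_table: dict[str, str] = {}     # MAC address -> remote VTEP IP

    def learn_from_control_plane(self, mac: str, remote_vtep: str) -> None:
        """With a control plane, bindings arrive before traffic does."""
        self.mac_table[mac] = remote_vtep

    def forward(self, dst_mac: str, frame: bytes) -> list[str]:
        """Return the VTEP IPs the encapsulated frame is sent to."""
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]    # known unicast: one tunnel
        return list(self.peers)                 # unknown: flood to every peer

vtep = Vtep(ip="10.0.0.1", peers=["10.0.0.2", "10.0.0.3", "10.0.0.4"])
assert len(vtep.forward("aa:bb:cc:dd:ee:ff", b"")) == 3        # flood-and-learn
vtep.learn_from_control_plane("aa:bb:cc:dd:ee:ff", "10.0.0.3")
assert vtep.forward("aa:bb:cc:dd:ee:ff", b"") == ["10.0.0.3"]  # targeted unicast
```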

An extension to Multiprotocol BGP (MP-BGP) addresses the flood-and-learn problem. MP-BGP allows the network to carry both Layer 2 MAC and Layer 3 IP information at the same time, and having this combined set of information available for forwarding decisions allows optimized routing and switching. The extension that allows BGP to transport both MAC and IP information is called Ethernet VPN (EVPN).
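
As a rough illustration of MAC addresses being treated as routes, the sketch below models a few key fields of an EVPN MAC/IP advertisement (route type 2) as a plain data structure. The field selection is simplified and is not a complete RFC 7432 encoding.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EvpnMacIpRoute:
    """Simplified view of an EVPN type 2 (MAC/IP advertisement) route.

    In a real deployment this information is carried in MP-BGP; here it only
    shows why one advertisement can drive both switching (MAC) and routing (IP).
    """
    vni: int            # which Layer 2 segment the MAC belongs to
    mac: str            # Layer 2 reachability information
    ip: Optional[str]   # optional Layer 3 reachability for the same endpoint
    next_hop_vtep: str  # the VTEP (leaf) behind which the endpoint lives

route = EvpnMacIpRoute(
    vni=5010,
    mac="aa:bb:cc:dd:ee:ff",
    ip="192.168.10.25",
    next_hop_vtep="10.0.0.3",
)
# A receiving leaf can install a bridging entry (mac -> next_hop_vtep) and,
# when the IP is present, a host route for the same endpoint.
```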

Note

For more information on VXLAN, see the EVPN Control Plane and VXLAN Data Plane Feature Guide for QFX5100 Switches.

Understanding EVPN

Ethernet VPN (EVPN) is a standards-based protocol that provides virtual multipoint bridged connectivity between different domains over an IP or IP/MPLS backbone network. This control-plane technology uses Multiprotocol BGP (MP-BGP) for MAC and IP address (endpoint) distribution, with MAC addresses being treated as “routes.” As used in data center environments, EVPN enables devices acting as VTEPs to exchange reachability information with each other about their endpoints.

Like other VPN technologies, such as IP VPN and virtual private LAN service (VPLS), EVPN instances (EVIs) are configured on provider edge (PE) routers to maintain logical service separation between customers. PE routers connect to customer edge (CE) devices, which can be routers, switches, or hosts. The PE routers then exchange reachability information using Multiprotocol BGP (MP-BGP), after which encapsulated traffic can be forwarded between them. Because elements of the architecture are common with other VPN technologies, EVPN can be seamlessly introduced and integrated into existing service environments.

In data center environments, PE routers are referred to as leaf devices. These devices connect to upstream devices called spine devices to form a Layer 3 spine-and-leaf architecture, or IP fabric. In these environments, customer edge (CE) devices are typically endpoints such as servers or virtual machines (VMs).

A typical EVPN deployment is shown in Figure 2.

Figure 2: Typical EVPN Environment

EVPN enables multitenancy and flexible services that can be extended on demand, frequently using compute resources in different physical data centers. In addition, EVPN’s technical capabilities include:

  • Multihoming—provides multipath forwarding and redundancy through an “all-active” model, allowing an endpoint to connect to two or more leaf devices and forward traffic using all of the links. In the event that an access link or one of the leaf (PE) devices fails, traffic flows from the endpoint (CE device) toward the leaf device using the remaining active link(s). For traffic in the other direction, remote leaf devices update their forwarding tables to send traffic to the remaining active leaf device(s) connected to the multihomed Ethernet segment.

  • Aliasing—leverages all-active multihoming to allow a remote leaf device to load-balance traffic across the network toward the endpoint.

  • Split horizon—prevents the looping of broadcast, unknown unicast, and multicast (BUM) traffic in a network. With split horizon, a packet is never sent back in the direction from which it was received (see the sketch following this list).

  • VM mobility—also known as Virtual Machine Traffic Optimization (VMTO), enables live VMs to be dynamically moved from one data center to another.
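
A minimal sketch of the split-horizon rule as stated above: a BUM frame received on one interface is replicated to every other interface, but never sent back out the interface it arrived on. The interface names are purely illustrative.

```python
# Minimal sketch of split horizon for BUM traffic; interface names are illustrative.

def flood_bum(frame: bytes, ingress: str, interfaces: list[str]) -> dict[str, bytes]:
    """Replicate a BUM frame to all interfaces except the one it arrived on."""
    return {ifname: frame for ifname in interfaces if ifname != ingress}

copies = flood_bum(b"\xff" * 64, ingress="et-0/0/1",
                   interfaces=["et-0/0/1", "et-0/0/2", "et-0/0/3"])
assert "et-0/0/1" not in copies                 # never sent back toward the source
assert set(copies) == {"et-0/0/2", "et-0/0/3"}  # flooded everywhere else
```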

Benefits of using EVPNs include:

  • Ability to have an active multihomed edge device

  • Load balancing across multiple links

  • MAC address mobility

  • Multitenancy

  • Aliasing

  • Fast convergence

Note

For more information on EVPN, see Juniper Networks EVPN Implementation for Next-Generation Data Center Architectures.