Overview

Enterprise networks are undergoing massive transitions to accommodate the growing demand for cloud-ready, scalable, and efficient networks, as well as the plethora of Internet of Things (IoT) and mobile devices. As the number of devices grows, so does network complexity, with an ever-greater need for scalability, segmentation, and security. To meet these challenges, you need a network with automation and artificial intelligence (AI) for operational simplification.

Most traditional campus architectures use single-vendor, chassis-based technologies that work well in small, static campuses with few endpoints. However, they are too rigid to support the scalability and changing needs of modern large enterprises.

A Juniper Networks EVPN-VXLAN fabric is a highly scalable architecture that is simple, programmable, and built on a standards-based architecture (https://www.rfc-editor.org/rfc/rfc8365) that is common across campuses and data centers.

The Juniper campus architecture uses a Layer 3 (L3) IP-based underlay network and an EVPN-VXLAN overlay network. Broadcast, unknown unicast, and multicast (BUM) traffic is handled natively by EVPN, which eliminates the need for Spanning Tree Protocol (STP) and Rapid Spanning Tree Protocol (RSTP). A flexible overlay network based on VXLAN tunnels, combined with an EVPN control plane, efficiently provides Layer 2 (L2) or L3 connectivity. This architecture decouples the virtual topology from the physical topology, which improves network flexibility and simplifies network management.
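As a hedged sketch of how the underlay and overlay described above might look on a Junos core or distribution switch (not a complete or prescribed configuration; all interface addresses, AS numbers, and group names are illustrative assumptions):

```
# Underlay: per-link eBGP that advertises loopback addresses for VTEP reachability
set policy-options policy-statement underlay-export term loopback from interface lo0.0
set policy-options policy-statement underlay-export term loopback then accept
set protocols bgp group underlay type external
set protocols bgp group underlay export underlay-export
set protocols bgp group underlay multipath multiple-as
set protocols bgp group underlay neighbor 10.0.0.1 peer-as 65002

# Overlay: multihop eBGP between loopbacks carrying EVPN routes
set protocols bgp group overlay type external
set protocols bgp group overlay multihop ttl 2
set protocols bgp group overlay local-address 192.168.255.1
set protocols bgp group overlay family evpn signaling
set protocols bgp group overlay neighbor 192.168.255.11 peer-as 65002
```

The underlay's only job is loopback-to-loopback IP reachability; the overlay BGP sessions and VXLAN tunnels then ride on top of it, which is what lets the virtual topology stay decoupled from the physical one.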

Endpoints that require L2 adjacency, such as IoT devices, can be placed anywhere in the network and remain connected to the same logical L2 network.

With an EVPN-VXLAN campus architecture, you can easily add core, distribution, and access layer devices as your business grows, without redesigning the network. Because EVPN-VXLAN is vendor-agnostic, you can keep your existing access layer infrastructure and gradually migrate to access layer switches that support EVPN-VXLAN capabilities once the core and distribution parts of the network are deployed. Connectivity with legacy switches that do not support EVPN-VXLAN is accomplished with standards-based ESI-LAG.
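A minimal ESI-LAG sketch on one of a pair of distribution switches might look like the following; the interface name, ESI value, LACP system ID, and VLAN name are hypothetical. Configuring the same ESI and LACP system ID on both distribution switches makes the legacy access switch see a single LACP partner:

```
# Ethernet segment shared by both distribution switches (same ESI on each peer)
set interfaces ae0 esi 00:01:01:01:01:01:01:01:01:01
set interfaces ae0 esi all-active
# Same LACP system ID on both peers so the legacy switch forms one bundle
set interfaces ae0 aggregated-ether-options lacp active
set interfaces ae0 aggregated-ether-options lacp system-id 00:00:00:01:01:01
set interfaces ae0 unit 0 family ethernet-switching interface-mode trunk
set interfaces ae0 unit 0 family ethernet-switching vlan members v100
```

From the legacy switch's perspective this is an ordinary LACP link aggregation, which is why no EVPN-VXLAN support is required at the access layer.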

Benefits of Campus Fabric Core Distribution

  • With the increasing number of devices connecting to the network, you need to scale your campus network rapidly without adding complexity. Many IoT devices have limited networking capabilities and require L2 adjacency across buildings and campuses. Traditionally, this problem was solved by extending VLANs between endpoints using data plane-based flood and learning mechanisms inherent with Ethernet switching technologies. The traditional Ethernet switching approach is inefficient because it leverages broadcast and multicast technologies to announce Media Access Control (MAC) addresses. It is also difficult to manage because you need to configure and manually manage VLANs to extend them to new network ports. This problem increases multi-fold when you take into consideration the explosive growth of IoT and mobility.
  • A campus fabric based on EVPN-VXLAN is a modern and scalable network that uses BGP as the underlay for the core and distribution layer switches. The distribution and core layer switches function as VXLAN tunnel endpoints (VTEPs) that encapsulate and decapsulate VXLAN traffic. In addition, these devices route and bridge packets in and out of VXLAN tunnels.
  • The Campus Fabric Core Distribution extends the EVPN fabric to connect VLANs across multiple buildings. This is done by stretching the L2 VXLAN network, with routing occurring in the core layer (Centrally-Routed Bridging (CRB)) or the distribution layer (Edge-Routed Bridging (ERB)). This network architecture supports the core and distribution layers of the topology, with integration to access switching via standard Link Aggregation Control Protocol (LACP).
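As an illustration of the VTEP role and CRB-style routing described above, the following is a sketch of the VLAN-to-VNI mapping on a distribution switch and the shared L3 gateway on a core switch; the loopback address, route distinguisher, route target, VNI, and subnet are all hypothetical:

```
# Distribution (VTEP): stretch VLAN 100 across the fabric as VNI 10100
set switch-options vtep-source-interface lo0.0
set switch-options route-distinguisher 192.168.255.1:1
set switch-options vrf-target target:65000:1
set protocols evpn encapsulation vxlan
set protocols evpn extended-vni-list all
set vlans v100 vlan-id 100
set vlans v100 vxlan vni 10100

# Core (CRB): L3 gateway for the stretched VLAN, with a shared virtual gateway
set interfaces irb unit 100 family inet address 10.1.100.2/24
set interfaces irb unit 100 virtual-gateway-address 10.1.100.1
set vlans v100 l3-interface irb.100
```

In an ERB variant, the same IRB configuration would move to the distribution layer instead of the core.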
Figure 1: Campus Fabric Core Distribution CRB

A Campus Fabric Core Distribution CRB deployment provides the following benefits:

  • Reduced flooding and learning—Control plane-based L2 and L3 learning reduces the flood-and-learn issues associated with data plane learning. Learning MAC addresses in the forwarding plane has an adverse impact on network performance as the number of endpoints grows, because more management traffic consumes bandwidth, leaving less available for production traffic. The EVPN control plane, deployed at the core and distribution layers, handles the exchange and learning of MAC addresses through eBGP routing rather than in the L2 forwarding plane.
  • Scalability—Control plane-based L2 and L3 learning is more efficient. For example, in a Campus Fabric IP Clos, core switches learn only the addresses of the access layer switches instead of the addresses of every device endpoint. Placing L3 default gateways at the core layer provides higher scale than placing them at the distribution or access layers, because higher-performance platforms are typically deployed at the core.
  • Consistency—A universal EVPN-VXLAN-based architecture across disparate campus and data center deployments enables a seamless end-to-end network for endpoints and applications.
  • Investment protection—The only requirement to integrate at the access layer is standards-based LACP/LAG. This provides investment protection for the section of the network that has the highest cost and footprint.
  • Location-agnostic connectivity—The EVPN-VXLAN campus architecture provides a consistent endpoint experience no matter where the endpoint is located. Some endpoints, such as legacy building security systems or IoT devices, require L2 reachability. The VXLAN overlay provides L2 extension across campuses without any changes to the underlay network. Juniper uses optimized BGP timers between the adjacent layers of the campus fabric, along with Bidirectional Forwarding Detection (BFD) and equal-cost multipath (ECMP), to support fast convergence in the event of a node or link failure. For more information, see Configuring Per-Packet Load Balancing.
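The fast-convergence and load-balancing behavior described above can be sketched in Junos set commands as follows; the neighbor address, timer values, and policy name are illustrative assumptions, not prescribed values:

```
# BFD on the underlay BGP session for sub-second failure detection (values illustrative)
set protocols bgp group underlay neighbor 10.0.0.1 bfd-liveness-detection minimum-interval 350
set protocols bgp group underlay neighbor 10.0.0.1 bfd-liveness-detection multiplier 3

# ECMP: install all equal-cost BGP paths and hash traffic across them
set protocols bgp group underlay multipath multiple-as
set policy-options policy-statement pfe-lb then load-balance per-packet
set routing-options forwarding-table export pfe-lb
```

Despite the legacy name, the load-balance per-packet policy action results in per-flow hashing on modern Junos platforms, so packets within a flow are not reordered.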