Solution Benefits
Enterprise networks are undergoing massive transitions to accommodate the growing demand for cloud-ready, scalable, and efficient networks, as well as the plethora of Internet of Things (IoT) and mobile devices. As the number of devices grows, so does network complexity, with an ever-greater need for scalability, segmentation, and security. To meet these challenges, you need a network with automation and Artificial Intelligence (AI) for operational simplification. IP Clos networks provide increased scalability and segmentation using a well-understood, standards-based approach: EVPN-VXLAN with group-based policies (GBP).
Most traditional campus architectures use single-vendor, chassis-based technologies that work well in small, static campuses with few endpoints, but they are too rigid to support the scalability and changing needs of modern large enterprises. Multi-Chassis Link Aggregation Group (MC-LAG) is a good example of a single-vendor technology that addresses the collapsed core deployment model. In this model, two chassis-based platforms are typically deployed in the core of a customer's network to handle all Layer 2 (L2) and Layer 3 (L3) requirements while providing active-backup resiliency. MC-LAG does not interoperate between vendors and is limited to two devices, and this lack of interoperability creates vendor lock-in.
A Juniper Networks EVPN-VXLAN fabric is a highly scalable architecture that is simple, programmable, and built on a standards-based architecture (https://www.rfc-editor.org/rfc/rfc8365) that is common across campuses and data centers.
The Juniper Networks campus architecture uses an L3 IP-based underlay network and an EVPN-VXLAN overlay network. Broadcast, unknown unicast, and multicast (BUM) traffic is handled natively by EVPN, which eliminates the need for the Spanning Tree Protocol (STP) or Rapid Spanning Tree Protocol (RSTP). A flexible overlay network based on VXLAN tunnels, combined with an EVPN control plane, efficiently provides L2 or L3 connectivity. This architecture decouples the virtual topology from the physical topology, which improves network flexibility and simplifies network management. Endpoints that require L2 adjacency, such as IoT devices, can be placed anywhere in the network and remain connected to the same logical L2 network.
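The underlay/overlay split described above can be sketched as a minimal Junos configuration fragment. This is illustrative only, assuming eBGP as the underlay routing protocol; the group names, loopback address, route distinguisher, VLAN, and VNI numbers are placeholders, not values from this document:

```
# Underlay: eBGP between fabric nodes for loopback reachability
set protocols bgp group underlay type external
set protocols bgp group underlay family inet unicast

# Overlay: EVPN signaling over BGP, with VXLAN encapsulation
set protocols bgp group overlay family evpn signaling
set protocols evpn encapsulation vxlan
set protocols evpn extended-vni-list all

# VTEP sourced from the loopback; VLAN 100 mapped to VNI 10100
set switch-options vtep-source-interface lo0.0
set switch-options route-distinguisher 192.0.2.11:1
set vlans vlan100 vlan-id 100
set vlans vlan100 vxlan vni 10100
```

Because the VXLAN tunnels ride on plain IP reachability, the underlay only needs to advertise loopbacks; all L2 stretch happens in the overlay.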
With an EVPN-VXLAN campus architecture, you can easily add core, distribution, and access layer devices as your business grows without redesigning your network. Because EVPN-VXLAN is vendor-agnostic, you can keep your existing access layer infrastructure and gradually migrate to access layer switches that support EVPN-VXLAN once the core of the network is deployed. A distribution switch layer between access and core is optional and recommended for large-scale designs with multiple PoDs.
Benefits of Campus Fabric IP Clos
- With the increasing number of devices connecting to the network, you need to scale your campus network rapidly without adding complexity. Many IoT devices have limited networking capabilities and require L2 adjacency across buildings and campuses. Traditionally, this problem was solved by extending virtual LANs (VLANs) between endpoints using the data plane-based flood-and-learn mechanisms inherent in Ethernet switching. The traditional Ethernet switching approach is inefficient because it leverages broadcast and multicast technologies to announce Media Access Control (MAC) addresses. It is also difficult to manage because you must manually configure VLANs to extend them to new network ports, a problem that is multiplied by the explosive growth of mobile and IoT devices.
- Campus fabrics have an underlay topology with a routing protocol that ensures loopback interface reachability between nodes. Devices participating in EVPN-VXLAN function as VXLAN tunnel endpoints (VTEPs) that encapsulate and decapsulate the VXLAN traffic. A VTEP represents a construct within the switching platform that originates and terminates VXLAN tunnels. In addition, these devices route and bridge packets in and out of VXLAN tunnels as required.
- The campus fabric IP Clos extends the EVPN fabric to connect VLANs across multiple buildings or floors of a single building. This is done by stretching the L2 VXLAN network, with routing occurring at the access layer rather than at the core (Centrally-Routed Bridging, or CRB) or distribution (Edge-Routed Bridging, or ERB) layer.
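The VTEP encapsulation described above can be illustrated with a short sketch of the VXLAN packet format from RFC 7348. This is a simplified illustration, not Juniper code; a real VTEP also adds the outer Ethernet, IP, and UDP headers (UDP destination port 4789), which are omitted here:

```python
import struct

VXLAN_FLAG_I = 0x08  # "VNI valid" flag defined in RFC 7348


def vxlan_encap(vni: int, inner_frame: bytes) -> bytes:
    """Prepend the 8-byte VXLAN header to an inner Ethernet frame."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit value")
    # Word 0: flags (8 bits) + 24 reserved bits; word 1: VNI (24 bits) + 8 reserved bits
    header = struct.pack("!II", VXLAN_FLAG_I << 24, vni << 8)
    return header + inner_frame


def vxlan_decap(packet: bytes) -> tuple[int, bytes]:
    """Strip the VXLAN header, returning (vni, inner_frame)."""
    word0, word1 = struct.unpack("!II", packet[:8])
    if not (word0 >> 24) & VXLAN_FLAG_I:
        raise ValueError("I flag not set: no valid VNI")
    return word1 >> 8, packet[8:]
```

The fixed 8-byte header is why VTEPs can tunnel any L2 frame over any IP underlay: the fabric core forwards on the outer IP header alone and never needs to learn endpoint MAC addresses.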
An IP Clos network encompasses the distribution, core, and access layers of your topology.
An EVPN-VXLAN fabric solves the problems of previous architectures and provides the following benefits:
- Reduced flooding and learning—Control plane-based L2 and L3 learning reduces the flood-and-learn issues associated with data plane learning. Learning MAC addresses in the forwarding plane has an adverse impact on network performance as the number of endpoints grows, because the additional management traffic consumes bandwidth that would otherwise be available for production traffic. The EVPN control plane handles the exchange and learning of MAC addresses through eBGP routing rather than through an L2 forwarding plane.
- Scalability—Control plane-based L2 and L3 learning is more efficient. For example, in a campus fabric IP Clos, core switches learn only the addresses of the access layer switches rather than the addresses of every endpoint device.
- Consistency—A universal EVPN-VXLAN-based architecture across disparate campus and data center deployments enables a seamless end-to-end network for endpoints and applications.
- Group-based policies—With GBP, you can enable microsegmentation with EVPN-VXLAN to provide traffic isolation within and between broadcast domains as well as simplify security policies across a campus fabric.
- Location-agnostic connectivity—The EVPN-VXLAN campus architecture provides a consistent endpoint experience no matter where the endpoint is located. Some endpoints, such as legacy building security systems or IoT devices, require L2 reachability. The VXLAN overlay provides an L2 extension across campuses without any changes to the underlay network. Juniper Networks uses optimized BGP timers between the adjacent layers of the campus fabric, along with Bidirectional Forwarding Detection (BFD) and equal-cost multipath (ECMP), to support fast convergence in the event of a node or link failure. For more information, see Configuring Per-Packet Load Balancing.
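As a rough illustration of how GBP carries segmentation context, the IETF VXLAN-GBP draft (draft-smith-vxlan-group-policy) places a 16-bit group identifier in otherwise reserved bits of the VXLAN header. The sketch below assumes that draft layout; it is not Juniper source code, and the group IDs used are arbitrary examples:

```python
import struct

FLAG_I = 0x08  # VNI valid (RFC 7348)
FLAG_G = 0x80  # Group Policy extension present (draft-smith-vxlan-group-policy)


def vxlan_gbp_header(vni: int, group_id: int) -> bytes:
    """Build an 8-byte VXLAN header carrying a 16-bit GBP group tag."""
    if not (0 <= vni < 2**24 and 0 <= group_id < 2**16):
        raise ValueError("VNI is 24 bits, group ID is 16 bits")
    # Word 0: flags (8 bits), reserved (8 bits), Group Policy ID (16 bits)
    word0 = ((FLAG_G | FLAG_I) << 24) | group_id
    word1 = vni << 8  # VNI (24 bits) + reserved (8 bits)
    return struct.pack("!II", word0, word1)


def parse_group_id(header: bytes) -> int:
    """Extract the group tag an egress device would match policy against."""
    word0, _ = struct.unpack("!II", header[:8])
    if not (word0 >> 24) & FLAG_G:
        raise ValueError("no GBP extension present")
    return word0 & 0xFFFF
```

Because the tag travels with every packet, an egress device can enforce group-to-group policy without knowing the source endpoint's IP or MAC address, which is what makes microsegmentation within a single broadcast domain possible.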