This section provides a summary of commonly used terms, protocols, and building block technologies used in creating and maintaining data center networks.

Glossary Terms

  • ARP—Address Resolution Protocol. A protocol defined in RFC 826 for mapping a logical IP address to a physical MAC address.
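
    The IP-to-MAC mapping that ARP maintains can be pictured as a simple lookup table. The sketch below is illustrative only (the addresses are documentation examples, not from this guide); a real host populates its ARP cache from ARP replies on the wire.

    ```python
    # Minimal sketch of an ARP cache: a logical IPv4 address mapped to a
    # physical MAC address, as described in RFC 826. Entries are examples.
    arp_cache = {
        "10.1.1.1": "00:00:5e:00:53:01",
        "10.1.1.2": "00:00:5e:00:53:02",
    }

    def resolve(ip: str):
        """Return the cached MAC for an IP, or None if an ARP request is needed."""
        return arp_cache.get(ip)

    print(resolve("10.1.1.1"))  # 00:00:5e:00:53:01
    print(resolve("10.9.9.9"))  # None -> host would broadcast an ARP request
    ```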

  • Backbone Device—A device in the WAN cloud that is directly connected to a spine device or devices in a data center. Backbone devices are required in this reference topology to provide physical connectivity between data centers that are interconnected using a data center interconnect (DCI).

  • Border Leaf—A device that typically has the sole purpose of providing a connection to one or more external devices. The external devices, for example, multicast gateways or data center gateways, provide additional functionality to the IP fabric.

  • Border Spine—A device that typically has two roles—a network underlay device and a border device that provides a connection to one or more external devices. The external devices, for example, multicast gateways or data center gateways, provide additional functionality to the IP fabric.

  • Bridged Overlay—An Ethernet-based overlay service designed for data center environments that do not require routing within an EVPN/VXLAN fabric. IP routing can be provided externally to the fabric as needed.


  • BUM—Broadcast, Unknown Unicast, and Multicast. The BUM acronym collectively identifies the three traffic types.

  • Centrally-Routed Bridging Overlay—A form of IRB overlay that provides routing at a central gateway and bridging at the edge of the overlay network. In an IRB overlay, a routed overlay and one or more bridged overlays connect at one or more locations through the use of IRB interfaces.

  • Clos Network—A multistage network topology first developed by Charles Clos for telephone networks that provides multiple paths to a destination at each stage of the topology. Non-blocking networks are possible in a Clos-based topology.
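
    The multipath property of a Clos topology can be checked with simple arithmetic. This is a sketch under the usual 3-stage (leaf-spine-leaf) assumption, not a statement about any specific fabric in this guide: each spine contributes one distinct equal-cost path between any pair of leaf devices, and every leaf connects to every spine.

    ```python
    def equal_cost_paths(num_spines: int) -> int:
        # In a 3-stage Clos (leaf-spine-leaf), each spine device offers one
        # distinct path between any two leaves, so path count = spine count.
        return num_spines

    def fabric_links(num_leaves: int, num_spines: int) -> int:
        # Every leaf connects to every spine, giving a full mesh between layers.
        return num_leaves * num_spines

    print(equal_cost_paths(4))   # 4 equal-cost paths between any two leaves
    print(fabric_links(8, 4))    # 32 leaf-to-spine fabric links
    ```

    Adding a spine device therefore adds one more equal-cost path between every leaf pair, which is why Clos fabrics scale bandwidth horizontally.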

  • Collapsed Spine fabric—An EVPN fabric design in which the overlay functions are collapsed onto the spine layer rather than distributed between spine and leaf devices. This type of fabric has no leaf layer in the EVPN core; the spine devices connect directly to the access layer.

  • Contrail Command—The Contrail Enterprise Multicloud user interface. Provides consolidated, easy-to-use software designed to automate the creation and management of data center networks.

  • Contrail Enterprise Multicloud—A suite of products and software that combines Contrail Command as a single point of management for private and public clouds, QFX Series switches running Junos OS as an infrastructure for data center networking, and Contrail Insights (formerly known as AppFormix) for telemetry and network visualization.

  • DCI—Data Center Interconnect. The technology used to interconnect separate data centers.

  • Default instance—A global instance in a Juniper Networks device that hosts the primary routing table such as inet.0 (default routing instance) and the primary MAC address table (default switching instance).

  • DHCP relay—A function that allows a DHCP server and client to exchange DHCP messages over the network when they are not in the same Ethernet broadcast domain. DHCP relay is typically implemented at a default gateway.

  • EBGP—External BGP. A routing protocol used to exchange routing information between autonomous networks. It has also been used more recently in place of traditional interior gateway protocols, such as IS-IS and OSPF, for routing within an IP fabric.

  • Edge-Routed Bridging Overlay—A form of IRB overlay that provides routing and bridging at the edge of the overlay network.

  • End System—An endpoint device that connects into the data center. An end system can be a wide range of equipment but is often a server, a router, or another networking device in the data center.

  • ESI—Ethernet segment identifier. An ESI is a 10-octet integer that identifies a unique Ethernet segment in EVPN. In this blueprint architecture, LAGs with member links on different access devices are assigned a unique ESI to enable Ethernet multihoming.
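
    Because an ESI is a fixed 10-octet value, its conventional display form is ten colon-separated hex octets. The helper below is a hypothetical illustration of that formatting, not a Juniper API; the example ESI value is arbitrary.

    ```python
    def format_esi(esi: bytes) -> str:
        # An EVPN Ethernet segment identifier is exactly 10 octets (RFC 7432).
        # Render it in the conventional colon-separated hex notation.
        if len(esi) != 10:
            raise ValueError("ESI must be exactly 10 octets")
        return ":".join(f"{octet:02x}" for octet in esi)

    esi = bytes.fromhex("00112233445566778899")  # arbitrary example value
    print(format_esi(esi))  # 00:11:22:33:44:55:66:77:88:99
    ```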

  • Ethernet-connected Multihoming—An Ethernet-connected end system that connects to the network using Ethernet access interfaces on two or more devices.

  • EVPN—Ethernet Virtual Private Network. A VPN technology that supports bridged, routed, and hybrid network overlay services. EVPN is defined in RFC 7432 with extensions defined in a number of IETF draft standards.

  • EVPN Type 2 Route—Advertises MAC addresses and the associated IP addresses from end systems to devices participating in EVPN.

  • IBGP—Internal BGP. In this blueprint architecture, IBGP with Multiprotocol BGP (MP-IBGP) is used for EVPN signaling between the devices in the overlay.

  • IP Fabric—An all-IP fabric network infrastructure that provides multiple symmetric paths between all devices in the fabric. We support an IP Fabric with IPv4 (an IPv4 Fabric) with all architectures described in this guide, and an IP Fabric with IPv6 (an IPv6 Fabric) with some architectures and on some platforms.

  • IP-connected Multihoming—An IP-connected end system that connects to the network using IP access interfaces on two or more devices.

  • IRB—Integrated Routing and Bridging. A technique that enables routing between VLANs and allows traffic to be routed or bridged based on whether the destination is outside or inside of a bridging domain. To activate IRB, you associate a logical interface (IRB interface) with a VLAN and configure the IRB interface with an IP address for the VLAN subnet.

  • Leaf Device—An access level network device in an IP fabric topology. End systems connect to the leaf devices in this blueprint architecture.

  • MAC-VRF instance—A routing instance type that enables you to configure multiple customer-specific EVPN instances instead of only one instance (the default instance). MAC-VRF instances use a consistent configuration style across platforms. Different MAC-VRF instances can support different Ethernet service types (VLAN-aware and VLAN-based services) on the same device in a data center.

  • Multiservice Cloud Data Center Network—A data center network that optimizes the use of available compute, storage, and network access interfaces by allowing them to be shared flexibly across diverse applications, tenants, and use cases.

  • NDP—Neighbor Discovery Protocol. An IPv6 protocol defined in RFC 4861 that combines the functionality of ARP and ICMP, and adds other enhanced capabilities.

  • NVE—As defined in RFC 8365, A Network Virtualization Overlay Solution Using Ethernet VPN (EVPN), a network virtualization edge is a device that terminates a VXLAN tunnel in a network virtualization overlay. For example, in a centrally routed bridging overlay, we consider spine and leaf devices to be NVE devices.

  • NVO—A network virtualization overlay is a fabric in which we use EVPN as a control plane and VXLAN as a data plane.

  • Routed Overlay—An IP-based overlay service where no Ethernet bridging is required. Also referred to as an IP VPN. In this blueprint architecture, the routed overlay is based on EVPN Type 5 routes and their associated procedures, and supported by VXLAN tunneling.

  • Spine Device—A centrally-located device in an IP fabric topology that has a connection to each leaf device.

  • Storm Control—A feature that prevents BUM traffic storms by monitoring BUM traffic levels and taking a specified action to limit BUM traffic forwarding when a specified traffic level is exceeded.

  • Underlay Network—A network that provides basic network connectivity between devices. In this blueprint architecture, the underlay network is an IP Fabric that provides the basic IP connectivity, usually with IPv4. We also support an IP Fabric with an IPv6 underlay with some architecture designs on certain platforms.

  • VLAN trunking—The ability for one interface to support multiple VLANs.

  • VNI—VXLAN Network Identifier. Uniquely identifies a VXLAN virtual network. A VNI encoded in a VXLAN header can support 16 million virtual networks.

  • VTEP—VXLAN Tunnel Endpoint. A loopback or virtual interface where traffic enters and exits a VXLAN tunnel. Tenant traffic is encapsulated into VXLAN packets at a source VTEP, and de-encapsulated when the traffic leaves the VXLAN tunnel at a remote VTEP.

  • VXLAN—Virtual Extensible LAN. Network virtualization tunneling protocol defined in RFC 7348 used to build virtual networks over an IP-routed infrastructure. VXLAN is used to tunnel tenant traffic over the IP fabric underlay from a source endpoint at an ingress device to a destination endpoint at the egress device. These tunnels are established dynamically by EVPN. Each VTEP device advertises its loopback address in the underlay network for VXLAN tunnel reachability between VTEP devices.
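
    The VXLAN header layout from RFC 7348 also shows why a VNI supports roughly 16 million virtual networks: the VNI field is 24 bits wide (2^24 = 16,777,216 values). The sketch below packs and parses the 8-byte VXLAN header; the function names are illustrative, and the VNI value 5010 is an arbitrary example.

    ```python
    import struct

    VNI_VALID_FLAG = 0x08  # "I" flag: VNI field is valid (RFC 7348)

    def vxlan_header(vni: int) -> bytes:
        # 8-byte VXLAN header: flags(8) | reserved(24) | VNI(24) | reserved(8).
        if not 0 <= vni < 2**24:  # 2**24 = 16,777,216 possible virtual networks
            raise ValueError("VNI must fit in 24 bits")
        return struct.pack("!II", VNI_VALID_FLAG << 24, vni << 8)

    def vxlan_vni(header: bytes) -> int:
        # The VNI occupies octets 4-6 of the header.
        return int.from_bytes(header[4:7], "big")

    hdr = vxlan_header(5010)       # arbitrary example VNI
    print(hdr.hex())               # 0800000000139200
    print(vxlan_vni(hdr))          # 5010
    ```

    In a real fabric this header sits between the outer UDP header (destination port 4789) and the encapsulated tenant Ethernet frame.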

  • VXLAN Stitching—A Data Center Interconnect (DCI) feature that supports Layer 2 interconnection of EVPN-VXLAN fabrics on a per-VXLAN VNI basis.