Understanding QFabric System Terminology


To understand the QFabric system environment and its components, you should become familiar with the terms defined in Table 1.

Table 1: QFabric System Terms

Clos network fabric

Three-stage switching network in which switch elements in the middle stages are connected to all switch elements in the ingress and egress stages. In the case of QFabric system components, the three stages are represented by an ingress chipset, a midplane chipset, and an egress chipset in an Interconnect device (such as a QFX3008-I Interconnect device). In Clos networks, which are well known for their nonblocking properties, a connection can be made from any idle input port to any idle output port, regardless of the traffic load in the rest of the system.
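
The nonblocking property mentioned above has a standard quantitative form (a classic Clos-network result, stated here for context rather than taken from this glossary): if each ingress-stage switch has n input ports and the fabric has m middle-stage switches, the network is strictly nonblocking, so any idle input can reach any idle output without rearranging existing connections, whenever

```latex
m \ge 2n - 1
```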

Director device

Hardware component that processes fundamental QFabric system applications and services, such as startup, maintenance, and inter-QFabric system device communication. A set of Director devices with hard drives can be joined to form a Director group, which provides redundancy and high availability by way of additional memory and processing power. (See also Director group.)

Director group

Set of Director devices that host and load-balance internal processes for the QFabric system. The Director group handles tasks such as QFabric system network topology discovery, Node and Interconnect device configuration, startup, and DNS, DHCP, and NFS services. Operating a Director group is a minimum requirement to manage a QFabric system.

The Director group runs the Director software for management applications and runs dual processes in active/standby mode for maximum redundancy and high availability. (See also Director software and Director device.)

Director software

Software that handles QFabric system administration tasks, such as fabric management and configuration. The Junos OS-based Director software runs on the Director group, provides a single, consolidated view of the QFabric system, and enables the main QFabric system administrator to configure, manage, monitor, and troubleshoot QFabric system components from a centralized location. To access the Director software, log in to the default partition. (See also Director device and Director group.)

fabric control Routing Engine

Virtual Junos OS Routing Engine instance used to control the exchange of routes and flow of data between QFabric system hardware components within a partition. The fabric control Routing Engine runs on the Director group.

fabric manager Routing Engine

Virtual Junos OS Routing Engine instance used to control the initialization and maintenance of QFabric system hardware components belonging to the default partition. The fabric manager Routing Engine runs on the Director group.

infrastructure

QFabric system services processed by the virtual Junos OS Routing Engines operating within the Director group. These services, such as fabric management and fabric control, support QFabric system functionality and high availability.

Interconnect device

QFabric system component that acts as the primary fabric for data plane traffic traversing the QFabric system between Node devices. Examples of Interconnect devices include the QFX3008-I Interconnect device in a QFX3000-G QFabric system, the QFX5100-24Q configured as an Interconnect device, and the QFX3600-I Interconnect device in a QFX3000-M QFabric system. (See also Node device.)

Junos Space

Carrier-class network management system for provisioning, monitoring, and diagnosing Juniper Networks routing, switching, security, and data center platforms.

network Node group

Set of one to eight Node devices that connects to an external network.
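
As a hedged illustration only (the fabric resources hierarchy and the device aliases below are assumptions based on typical QFabric configuration, not taken from this glossary), a network Node group might be defined with set commands along these lines:

```
# Sketch only: group two Node devices (placeholder aliases node-A, node-B)
# into the network Node group and mark it as connecting to the external network.
set fabric resources node-group NW-NG-0 network-domain
set fabric resources node-group NW-NG-0 node-device node-A
set fabric resources node-group NW-NG-0 node-device node-B
```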

network Node group Routing Engine

Virtual Junos OS Routing Engine instance that handles routing processes for a network Node group. The network Node group Routing Engine runs on the Director group.

Node device

Routing and switching device that connects to endpoints (such as servers or storage devices) or external network peers, and is connected to the QFabric system through an Interconnect device. You can deploy Node devices similarly to the way a top-of-rack switch is implemented. Examples of Node devices include the QFX3500 Node device, QFX3600 Node device, and QFX5100 Node device. (See also Interconnect device and network Node group.)

partition

Collection of physical or logical QFabric system hardware components (such as Node devices) that provides fault isolation, separation, and security.

In their initial state, all QFabric system components belong to a default partition.

QFabric system

Highly scalable, distributed, Layer 2 and Layer 3 networking architecture that provides a high-performance, low-latency, and unified interconnect solution for next-generation data centers. A QFabric system collapses the traditional multi-tier data center model, enables the consolidation of data center endpoints (such as servers, storage devices, memory, appliances, and routers), and provides better scaling and network virtualization capabilities than traditional data centers.

Essentially, a QFabric system can be viewed as a single, nonblocking, low-latency switch that supports thousands of 10-Gigabit Ethernet ports or 2-Gbps, 4-Gbps, or 8-Gbps Fibre Channel ports to interconnect servers, storage, and the Internet across a high-speed, high-performance fabric. The QFabric system must have sufficient resources and devices allocated to handle the Director group, Node device, and Interconnect device functions and capabilities.

QFabric system control plane

Internal network connection that carries control traffic between QFabric system components. The QFabric system control plane includes management connections between the following QFabric system hardware and software components:

  • Node devices, such as the QFX3500 Node device.

  • Interconnect devices, such as the QFX3008-I Interconnect device.

  • Director group processes, such as management applications, provisioning, and topology discovery.

  • Control plane Ethernet switches that provide interconnections to all QFabric system devices and processes. For example, you can use EX Series EX4200 switches running in Virtual Chassis mode for this purpose.

To maintain high availability, the QFabric system control plane uses a different network than the QFabric system data plane, and uses a fabric provisioning protocol and a fabric management protocol to establish and maintain the QFabric system.

QFabric system data plane

Redundant, high-performance, and scalable data plane that carries QFabric system data traffic. The QFabric system data plane includes the following high-speed data connections:

  • 10-Gigabit Ethernet connections between QFabric system endpoints (such as servers or storage devices) and Node devices.

  • 40-Gbps quad small form-factor pluggable plus (QSFP+) connections between Node devices and Interconnect devices.

  • 10-Gigabit Ethernet connections between external networks and a Node device acting as a network Node group.

To maintain high availability, the QFabric system data plane is separate from the QFabric system control plane.

QFabric system endpoint

Device connected to a Node device port, such as a server, a storage device, memory, an appliance, a switch, or a router.

QFabric system fabric

Distributed, multistage network that consists of a queuing and scheduling system that is implemented in the Node device, and a distributed cross-connect system that is implemented in Interconnect devices. The QFabric system fabric is part of the QFabric system data plane.

QFX3500 Node device

Node device that connects to either endpoint systems (such as servers and storage devices) or external networks in a QFabric system. It is packaged in an industry-standard 1U, 19-inch rack-mounted enclosure.

The QFX3500 Node device provides up to 48 10-Gigabit Ethernet interfaces to connect to the endpoints. Twelve of these 48 interfaces can be configured to support 2-Gbps, 4-Gbps, or 8-Gbps Fibre Channel, and 36 of the interfaces can be configured to support Gigabit Ethernet. In addition, four uplink connections connect to Interconnect devices in a QFabric system. These uplinks use 40-Gbps quad small form-factor pluggable plus (QSFP+) interfaces. (See also QFX3500 switch.)

QFX3500 switch

Standalone data center switch with 10-Gigabit Ethernet access ports and 40-Gbps quad small form-factor pluggable plus (QSFP+) uplink interfaces. You can (optionally) configure some of the access ports as 2-Gbps, 4-Gbps, or 8-Gbps Fibre Channel ports or Gigabit Ethernet ports.

The QFX3500 switch can be converted to a QFabric system Node device as part of a complete QFabric system. The switch is packaged in an industry-standard 1U, 19-inch rack-mounted enclosure. (See also QFX3500 Node device.)

QFX3600 Node device

Node device that connects to either endpoint systems (such as servers and storage devices) or external networks in a QFabric system. It is packaged in an industry-standard 1U, 19-inch rack-mounted enclosure.

The QFX3600 Node device provides 16 40-Gbps QSFP+ ports. By default, 4 ports (labeled Q0 through Q3) are configured for 40-Gbps uplink connections between your Node device and your Interconnect device, and 12 ports (labeled Q4 through Q15) use QSFP+ direct-attach copper (DAC) breakout cables or QSFP+ transceivers with fiber breakout cables to support 48 10-Gigabit Ethernet interfaces for connections to either endpoint systems (such as servers and storage devices) or external networks. Optionally, you can choose to configure the first eight ports (Q0 through Q7) for uplink connections between your Node device and your Interconnect device, and ports Q2 through Q15 for 10-Gigabit Ethernet connections to either endpoint systems or external networks. (See also QFX3600 switch.)

QFX3600 switch

Standalone data center switch with 16 40-Gbps quad small form-factor pluggable plus (QSFP+) interfaces. By default, all 16 ports operate as 40-Gigabit Ethernet ports. Optionally, you can configure each 40-Gbps port to operate as four 10-Gigabit Ethernet ports. You can use QSFP+ to four SFP+ breakout cables to connect the 10-Gigabit Ethernet ports to other servers, storage, and switches.

The QFX3600 switch can be converted to a QFabric system Node device as part of a complete QFabric system. The switch is packaged in an industry-standard 1U, 19-inch rack-mounted enclosure. (See also QFX3600 Node device.)

QFX5100 Node device

QFabric system Node device that connects to either endpoint systems (such as servers and storage devices) or external networks. All three supported models are packaged in an industry-standard 1U, 19-inch rack-mounted enclosure. A QFX5100 Node device can be any of these models:

  • QFX5100-48S

    By default, the QFX5100-48S Node device provides 48 10-Gigabit Ethernet interfaces to connect to the endpoints. There are also six 40-Gbps quad small form-factor pluggable plus (QSFP+) interfaces, of which four are uplinks (FTE).

  • QFX5100-48T

    By default, the QFX5100-48T Node device provides 48 10GBASE-T interfaces to connect to endpoints. There are also six 40-Gbps QSFP+ interfaces, of which four are uplinks (FTE).

  • QFX5100-24Q

    By default, the QFX5100-24Q Node device provides 24 40-Gigabit Ethernet QSFP+ interfaces to connect to the endpoints. The QFX5100-24Q has two expansion bays. The number of additional interfaces available depends on the expansion module and the System mode configured for the Node device.

By default, on the QFX5100-48S Node device and QFX5100-48T Node device, the first 4 ports (labeled fte-0/1/0 through fte-0/1/3) are configured for 40-Gbps uplink connections between your Node device and your Interconnect devices, and 2 ports (labeled xle-0/1/4 and xle-0/1/5) use QSFP+ direct-attach copper (DAC) breakout cables or QSFP+ transceivers with fiber breakout cables to support 8 10-Gigabit Ethernet interfaces for connections to either endpoint systems (such as servers and storage devices) or external networks. Optionally, you can reconfigure the middle 2 ports (so that they operate as xle-0/1/2 and xle-0/1/3) for additional connections to either endpoint systems or external networks.

(See also QFX3500 Node device and QFX3600 Node device.)

redundant server Node group

Set of two Node devices that connect to servers or storage devices. Link aggregation group (LAG) interfaces can span the Node devices within a redundant server Node group.
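
For illustration, a LAG that spans the two members of a redundant server Node group could be sketched roughly as follows. This uses generic EX/QFX-style Junos syntax, and the device and interface names are placeholders; in a real QFabric configuration, interface names are prefixed with the Node group or Node device name, so consult the platform documentation for the exact forms:

```
# Sketch only: one aggregated Ethernet bundle with a member link on each
# of the two Node devices in the redundant server Node group.
set interfaces node-a:xe-0/0/10 ether-options 802.3ad ae0
set interfaces node-b:xe-0/0/10 ether-options 802.3ad ae0
set interfaces ae0 aggregated-ether-options lacp active
```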

rolling upgrade

Method used in the QFabric system to upgrade the software for components in a systematic, low-impact way. A rolling upgrade begins with the Director group, proceeds to the fabric (Interconnect devices), and finishes with the Node groups.

Routing Engine

Juniper Networks-proprietary processing entity that implements QFabric system control plane functions, routing protocols, system management, and user access. Routing Engines can be either physical or virtual entities.

The Routing Engine functions in a QFabric system are sometimes handled by Node devices (when connected to endpoints), but mostly implemented by the Director group (to provide support for QFabric system establishment, maintenance, and other tasks).

routing instance

Private collection of routing tables, interfaces, and routing protocol parameters unique to a specific customer. The set of interfaces is contained in the routing tables, and the routing protocol parameters control the information in the routing tables.

(See also virtual private network.)

server Node group

Set of one or more Node devices that connect to servers or storage devices.

virtual LAN (VLAN)

Unique Layer 2 broadcast domain for a set of ports selected from the components available in a partition. VLANs allow manual segmentation of larger Layer 2 networks and help to restrict access to network resources. To interconnect VLANs, Layer 3 routing is required.
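
As a minimal sketch (the VLAN name, ID, and address are invented for illustration), segmenting ports into a VLAN and providing the Layer 3 hop between VLANs with a routed VLAN interface looks roughly like this in EX/QFX-style Junos syntax:

```
# Sketch only: define a VLAN and attach a routed VLAN interface (RVI)
# so other VLANs can reach it through Layer 3 routing.
set vlans engineering vlan-id 100
set vlans engineering l3-interface vlan.100
set interfaces vlan unit 100 family inet address 10.0.100.1/24
```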

virtual private network (VPN)

Layer 3 routing domain within a partition. VPNs maintain privacy with a tunneling protocol, encryption, and security procedures. In a QFabric system, a Layer 3 VPN is configured as a routing instance.
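
Because a Layer 3 VPN is configured as a routing instance, a hedged sketch of that configuration (the instance name, interface, route distinguisher, and target community are all invented for illustration) might look like:

```
# Sketch only: a vrf routing instance giving one customer its own
# private routing tables and interfaces.
set routing-instances CUST-A instance-type vrf
set routing-instances CUST-A interface vlan.100
set routing-instances CUST-A route-distinguisher 10.255.0.1:100
set routing-instances CUST-A vrf-target target:65000:100
```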

flow group

Feature that forces redundant multicast streams to flow through different Interconnect devices, preventing a single Interconnect device failure from dropping both streams of multicast traffic.