Understanding Junos Node Slicing

Before setting up Junos Node Slicing, it is important to understand its underlying concepts and the components involved.

Junos Node Slicing Overview

Junos Node Slicing enables service providers and large enterprises to create a network infrastructure that consolidates multiple routing functions into a single physical device, gaining the benefits of virtualization without compromising performance. In particular, Junos Node Slicing converges multiple services on a single physical infrastructure while avoiding the operational complexity that such convergence normally involves: functions remain operationally, functionally, and administratively separate on the shared infrastructure, enabling the network to apply the same virtualization principles the compute industry has used for years.

Using Junos Node Slicing, you can create multiple partitions in a single physical MX Series router. These partitions are referred to as guest network functions (GNFs). Each GNF behaves as an independent router, with its own dedicated control plane, data plane, and management plane. This enables you to run multiple services on a single converged MX Series router, while still maintaining operational isolation between them. You can leverage the same physical device to create parallel partitions that do not share the control plane or the forwarding plane, but only share the same chassis, space, and power.

You can also send traffic between GNFs through the switch fabric by using an Abstracted Fabric (AF) interface, a pseudo interface that behaves like a first-class Ethernet interface. An AF interface carries control, data, and management traffic between GNFs.

Junos Node Slicing supports multi-version software compatibility, thereby allowing the GNFs to be independently upgraded.

Benefits of Junos Node Slicing

  • Converged network—With Junos Node Slicing, service providers can consolidate multiple network services, such as video edge and voice edge, into a single physical router, while still maintaining operational separation between them. You can achieve both horizontal and vertical convergence: horizontal convergence consolidates router functions of the same network layer into a single router, while vertical convergence collapses router functions of different layers into a single router.

  • Improved scalability—Focusing on virtual routing partitions, instead of physical devices, improves the programmability and scalability of the network, enabling service providers and enterprises to respond to infrastructure requirements without having to buy additional hardware.

  • Easy risk management—Though multiple network functions converge on a single chassis, all the functions run independently, benefiting from operational, functional, and administrative separation. Partitioning a physical system, such as Broadband Network Gateway (BNG), into multiple independent logical instances ensures that failures are isolated. The partitions do not share the control plane or the forwarding plane, but only share the same chassis, space, and power. This means failure in one partition does not cause any widespread service outage.

  • Reduced network costs—Junos Node Slicing interconnects GNFs through the internal switching fabric by using the Abstracted Fabric (AF) interface, a pseudo interface that behaves like a first-class Ethernet interface. With AF interfaces in place, companies no longer need to use physical interfaces to connect GNFs, resulting in significant savings.

  • Reduced time-to-market for new services and capabilities—Each GNF can run a different Junos OS version, enabling companies to evolve each GNF at its own pace. If a new service or feature to be deployed on a GNF requires a new software release, only that GNF needs to be upgraded. This added agility also enables service providers and enterprises to introduce a highly flexible Everything-as-a-Service business model and respond rapidly to ever-changing market conditions.

Components of Junos Node Slicing

Junos Node Slicing allows a single MX Series router to be partitioned to appear as multiple, independent routers. Each partition has its own Junos OS control plane, which runs as a virtual machine (VM), and a dedicated set of line cards. Each partition is called a guest network function (GNF).

The MX Series router functions as the base system (BSYS). The BSYS owns all the physical components of the router, including the line cards and the switching fabric. The BSYS assigns line cards to GNFs.

The Juniper Device Manager (JDM) software orchestrates the GNF VMs. In JDM, a GNF VM is referred to as a virtual network function (VNF). A GNF thus comprises a VNF and a set of line cards.

JDM and the VNFs are hosted on a pair of external, industry-standard x86 servers.

Through configuration at the BSYS, you can assign line cards of the chassis to different GNFs. Figure 1 shows three GNFs with their dedicated line cards running on an external server.

Figure 1: GNFs on External Server

See Connecting the Servers and the Router for information about how to connect an MX Series router to a pair of external x86 servers.

Base System (BSYS)

In Junos Node Slicing, the MX Series router functions as the base system (BSYS). The BSYS owns all the physical components of the router, including all line cards and fabric. Through Junos OS configuration at the BSYS, you can assign line cards to GNFs and define Abstracted Fabric (AF) interfaces between GNFs. The BSYS software runs on a pair of redundant Routing Engines of the MX Series router.
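
For illustration, a line-card assignment at the BSYS might look like the following sketch, using the [edit chassis network-slices] hierarchy. The GNF IDs and FPC slot numbers are hypothetical, and the exact statements can vary by Junos OS release:

[edit]
user@bsys# set chassis network-slices guest-network-functions gnf 1 fpcs 0
user@bsys# set chassis network-slices guest-network-functions gnf 2 fpcs 1
user@bsys# set chassis network-slices guest-network-functions gnf 2 fpcs 2
user@bsys# commit

Each gnf ID used here must match the GNF ID configured for the corresponding VNF in the JDM.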

Guest Network Function (GNF)

A guest network function (GNF) logically owns the line cards assigned to it by the base system (BSYS) and maintains the forwarding state of those line cards. You can configure multiple GNFs on an MX Series router (see Configuring Guest Network Functions). The Junos OS control plane of each GNF runs as a virtual machine (VM). The Juniper Device Manager (JDM) software, hosted on a pair of x86 servers, orchestrates the GNF VMs. In the JDM, the GNFs are referred to as virtual network functions (VNFs).

A GNF is equivalent to a standalone router. GNFs are configured and administered independently, and are operationally isolated from each other.

Creating a GNF requires two sets of configurations, one to be performed at the BSYS, and the other at the JDM.

A GNF is defined by an ID. This ID must be the same at the BSYS and JDM.

The BSYS part of the GNF configuration consists of assigning the GNF an ID and a set of line cards.

The JDM part of the GNF configuration comprises specifying the following attributes:

  • A VNF name.

  • A GNF ID. This ID must be the same as the GNF ID used at the BSYS.

  • The MX Series platform type.

  • A Junos OS image to be used for the VNF.

  • The VNF CPU and memory resource profile template.

The server resource template defines the number of dedicated CPU cores and the size of DRAM to be assigned to a GNF. For a list of predefined server resource templates available for GNFs, see the Server Hardware Resource Requirements (Per GNF) section in Minimum Hardware and Software Requirements for Junos Node Slicing.
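
For illustration, the JDM side of a GNF definition covering the attributes listed above might look like the following sketch. The VNF name, platform type, image path, and resource template name are hypothetical:

[edit]
root@jdm# set virtual-network-functions gnf-a id 1
root@jdm# set virtual-network-functions gnf-a chassis-type mx2020
root@jdm# set virtual-network-functions gnf-a base-image /vm-primary/junos-install-ns-mx-x86-64-17.4R1.tgz
root@jdm# set virtual-network-functions gnf-a resource-template 2core-16g
root@jdm# commit

Here, id 1 must match the GNF ID configured at the BSYS.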

After a GNF is configured, you can access it by connecting to the virtual console port of the GNF. Using the Junos OS CLI at the GNF, you can then configure the GNF system properties such as hostname and management IP address, and subsequently access it through its management port.
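
The initial setup at the GNF console is standard Junos OS configuration. A minimal sketch, assuming fxp0 is the GNF management interface and using hypothetical values:

[edit]
user@gnf# set system host-name gnf-a
user@gnf# set interfaces fxp0 unit 0 family inet address 192.0.2.10/24
user@gnf# commit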

Juniper Device Manager (JDM)

The Juniper Device Manager (JDM), a virtualized Linux container, enables provisioning and management of the GNF VMs.

The JDM supports a Junos OS-like CLI, NETCONF for configuration and management, and SNMP for monitoring.

A JDM instance is hosted on each of the x86 servers. The JDM instances are typically configured as peers that synchronize the GNF configurations: when a GNF VM is created on one server, the backup GNF VM is automatically created on the other server.

An IP address and an administrator account need to be configured on the JDM. After these are configured, you can directly log in to the JDM.
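
A minimal JDM bootstrap sketch, assuming jmgmt0 as the JDM management interface (the interface name and address shown are illustrative):

[edit]
root@jdm# set interfaces jmgmt0 unit 0 family inet address 192.0.2.5/24
root@jdm# set system root-authentication plain-text-password
root@jdm# commit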

Abstracted Fabric (AF) Interface

An Abstracted Fabric (AF) interface is a pseudo interface that behaves like a first-class Ethernet interface. An AF interface carries control, data, and management traffic between guest network functions (GNFs) through the switch fabric. An AF interface is created on a GNF to communicate with its peer GNF when the two GNFs are configured to be connected to each other. AF interfaces must be created at the BSYS. The bandwidth of an AF interface changes dynamically based on the insertion or reachability of the remote line cards (MPCs). Because the fabric is the communication medium between GNFs, AF interfaces are treated as the equivalent of WAN interfaces. See Figure 2.

Figure 2: Abstracted Fabric Interface
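
For illustration, an AF interface pair between GNF 1 and GNF 2 might be defined at the BSYS as in the following sketch; the AF interface numbers are hypothetical, and each side names its peer GNF and the peer's AF interface. The exact statement syntax can vary by Junos OS release:

[edit]
user@bsys# set chassis network-slices guest-network-functions gnf 1 af2 peer-gnf id 2 af1
user@bsys# set chassis network-slices guest-network-functions gnf 2 af1 peer-gnf id 1 af2
user@bsys# commit

After this commit, af2 appears as an interface on GNF 1 and af1 appears on GNF 2; each can then be configured at its GNF like an Ethernet interface.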

Understanding AF Interface Bandwidth

An AF interface connects two GNFs through the fabric and aggregates all the Packet Forwarding Engines (PFEs) that connect the two GNFs. An AF interface can leverage the sum of the bandwidth of each Packet Forwarding Engine belonging to the AF interface.

For example, if GNF1 has one MPC8 (which has four Packet Forwarding Engines with 240 Gbps capacity each), and GNF1 is connected with GNF2 and GNF3 using AF interfaces (af1 and af2), the maximum AF capacity of GNF1 would be 4x240 Gbps = 960 Gbps.

GNF1—af1——GNF2

GNF1—af2——GNF3

Here, af1 and af2 share the 960 Gbps capacity.

For information on the bandwidth supported on each MPC, see Table 1.

Features Supported on AF Interfaces

AF interfaces support the following features:

  • Load balancing based on the remote GNF line cards present

  • Class of service (CoS) support:

    • Inet-precedence classifier and rewrite

    • DSCP classifier and rewrite

    • MPLS EXP classifier and rewrite

    • DSCP v6 classifier and rewrite for IPv6 traffic

  • Support for OSPF, IS-IS, BGP, OSPFv3 protocols, and L3VPN

    Note

    The non-AF interfaces support all the protocols that work on Junos OS.

  • Multicast forwarding

  • Graceful Routing Engine switchover (GRES)

  • MPLS applications where the AF interface acts as a core interface (L3VPN, VPLS, L2VPN, L2CKT, EVPN, and IP over MPLS)

  • The following protocol families are supported:

    • IPv4 Forwarding

    • IPv6 Forwarding

    • MPLS

    • ISO

    • CCC

  • A GNF that has AF interfaces configured supports AF-capable MPCs. Table 1 lists the AF-capable MPCs, the number of PFEs per MPC, and the bandwidth supported per MPC.

    Table 1: Supported AF-capable MPCs

    MPC              Initial Release   Number of PFEs   Total Bandwidth
    ---------------  ----------------  ---------------  ------------------------------------------------
    MPC7E-MRATE      17.4R1            2                240G (120*2) on MX240, MX480, and MX960 routers;
                                                        200G (100*2) on MX2010 and MX2020 routers
    MPC7E-10G        17.4R1            2                240G (120*2) on MX240, MX480, and MX960 routers;
                                                        200G (100*2) on MX2010 and MX2020 routers
    MX2K-MPC8E       17.4R1            4                960G (240*4)
    MX2K-MPC9E       17.4R1            4                1.6T (400*4)
    MPC2E NG         17.4R1            1                80G
    MPC2E NG Q       17.4R1            1                80G
    MPC3E NG         17.4R1            1                130G
    MPC3E NG Q       17.4R1            1                130G
    MPC5E-40G10G     18.3R1            2                240G (120*2)
    MPC5EQ-40G10G    18.3R1            2                240G (120*2)
    MPC5E-40G100G    18.3R1            2                240G (120*2)
    MPC5EQ-40G100G   18.3R1            2                240G (120*2)
    MX2K-MPC6E       18.3R1            4                520G (130*4)

Note
  • A GNF that does not have the AF interface configuration supports all the MPCs that are supported by a standalone MX Series router. For the list of supported MPCs, see MPCs Supported by MX Series Routers.

  • We recommend that you set the MTU on the AF interface to the maximum value allowed on the XE/GE interfaces. This ensures minimal or no fragmentation of packets over the AF interface. (See the configuration sketch that follows.)
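
To tie together the protocol support listed above and the MTU recommendation, the following sketch configures an AF interface at a GNF as an OSPF core-facing link. The unit number, address, and MTU value are hypothetical:

[edit]
user@gnf# set interfaces af1 mtu 9192
user@gnf# set interfaces af1 unit 0 family inet address 198.51.100.1/30
user@gnf# set protocols ospf area 0.0.0.0 interface af1.0
user@gnf# commit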

AF Interface Restrictions

The following are the current restrictions of AF interfaces:

  • Configurations such as a single-endpoint AF interface, an AF interface-to-GNF mapping mismatch, or multiple AF interfaces mapping to the same remote GNF are not validated at commit time on the BSYS. Ensure that your configuration is correct.

  • Bandwidth allocation is static, based on the MPC type.

  • AF interfaces do not support the Hyper mode.

  • Minimal traffic drops (both transit and host) can occur while an MPC hosted on a remote GNF is taken offline or restarted.

  • Interoperability between AF-capable MPCs and non-AF-capable MPCs is not supported.

Mastership Behavior of BSYS and GNF

The following sections address the mastership behavior of BSYS and GNF in the context of Routing Engine redundancy.

Figure 3 shows the mastership behavior of GNF and BSYS with Routing Engine redundancy.

Figure 3: Mastership Behavior of GNF and BSYS

BSYS Mastership

The BSYS Routing Engine mastership arbitration behavior is identical to that of Routing Engines on MX Series routers.

GNF Mastership

The GNF VM mastership arbitration behavior is similar to that of MX Series Routing Engines. Each GNF runs as a master-backup pair of VMs. A GNF VM that runs on server0 is equivalent to Routing Engine slot 0 of an MX Series router, and the GNF VM that runs on server1 is equivalent to Routing Engine slot 1 of an MX Series router.

The GNF mastership is independent of the BSYS mastership and that of other GNFs. The GNF mastership arbitration is done through Junos OS. Under connectivity failure conditions, GNF mastership is handled conservatively.

Note

You must configure graceful Routing Engine switchover (GRES) at each GNF. This is a prerequisite for the backup GNF VM to automatically take over the mastership when the master GNF VM fails or is rebooted.
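
GRES is enabled at each GNF with the same statement used on a standalone MX Series router:

[edit]
user@gnf# set chassis redundancy graceful-switchover
user@gnf# commit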

Junos Node Slicing Administrator Roles

The following administrator roles enable you to carry out the node slicing tasks:

  • BSYS administrator—Responsible for the physical chassis, as well as for GNF provisioning (assignment of line cards to GNFs). Junos OS CLI commands are available for these tasks.

  • GNF administrator—Responsible for configuration, operation, and management of Junos OS at the GNF. All regular Junos OS CLI commands are available to the GNF administrator for these tasks.

  • JDM administrator—Responsible for the JDM server port configuration, and for the provisioning and life-cycle management of the GNF VMs (VNFs). JDM CLI commands are available for these tasks.

Multi-Version Software Interoperability Overview

Starting in Junos OS Release 17.4R1, Junos Node Slicing supports multi-version software compatibility, enabling the BSYS to interoperate with a guest network function (GNF) that runs a Junos OS version higher than that of the BSYS. The GNF software can be up to two releases higher than the BSYS software. Both the BSYS and the GNF must run Junos OS Release 17.4R1 or later.

Note

The restrictions in multi-version support are also applicable to the unified ISSU upgrade process.

While JDM software versioning does not have a similar restriction with respect to the GNF or BSYS software versions, we recommend that you regularly update the JDM software. A JDM upgrade does not affect any of the running GNFs.

Licensing for Junos Node Slicing

Operating Junos Node Slicing requires licenses to be installed at the BSYS. Running a GNF without a license installed at the BSYS results in a license-related log message on the BSYS.
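
Licenses are installed at the BSYS by using the standard Junos OS license commands, for example (the license file name here is hypothetical):

user@bsys> request system license add jns-license.txt
user@bsys> show system license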

Please contact Juniper Networks if you have queries pertaining to Junos Node Slicing licenses.