Setting Up Junos Node Slicing

Before proceeding to perform the Junos Node Slicing setup tasks, you must have completed the procedures described in the chapter Preparing for Junos Node Slicing Setup.

Configuring an MX Series Router to Operate in BSYS Mode

Note

Ensure that the MX Series router is connected to the x86 servers as described in Connecting the Servers and the Router.

Junos Node Slicing requires the MX Series router to function as the base system (BSYS).

Use the following steps to configure an MX Series router to operate in BSYS mode:

  1. Install the Junos OS package for BSYS on both the Routing Engines of the MX Series router.

    To download the package:

    1. Go to the Juniper support page for Junos Node Slicing.

    2. Click Base System > Junos OS version number > Junos version number (64-bit High-End).

    3. On the Software Download page, select the I Agree option under End User License Agreement and then click Proceed.

  2. On the MX Series router, run the show chassis hardware command and verify that the transceivers on both the Control Boards (CBs) are detected:
    root@router> show chassis hardware
  3. On the MX Series router, apply the following configuration statements:
    root@router# set chassis network-slices guest-network-functions
    root@router# set chassis redundancy graceful-switchover
    root@router# set chassis network-services enhanced-ip
    root@router# set routing-options nonstop-routing
    root@router# set system commit synchronize
    root@router# commit
    Note

    On MX960 routers, you must configure the network-services mode as enhanced-ip or enhanced-ethernet. On MX2020 routers, the enhanced-ip configuration statement is enabled by default.

    The router now operates in BSYS mode.

Note

A router in BSYS mode is not expected to run any features beyond those required for the basic management functions of Junos Node Slicing. For example, the BSYS is not expected to have interface configurations associated with the line cards installed in the system; the guest network functions (GNFs) carry the full-fledged router configurations instead.

Installing JDM RPM Package on x86 Servers Running RHEL

Before installing the JDM RPM package for x86 servers, ensure that you have installed the additional packages, as described in Installing Additional Packages for JDM.

Download and install the JDM RPM package for x86 servers running RHEL as follows:

To download the package:

  1. Go to the Juniper support page for Junos Node Slicing.

  2. Click JDM > Junos OS version number > Juniper Device Manager version number (for Redhat).

  3. On the Software Download page, select the I Agree option under the End User License Agreement and then click Proceed.

To install the package on x86 servers running RHEL, perform the following steps on each of the servers:

  1. Disable SELinux and reboot the server. You can disable SELinux by setting the value of SELINUX to disabled in the /etc/selinux/config file.
  2. Install the JDM RPM package (indicated by the .rpm extension) by using the following command. An example of the JDM RPM package used is shown below:

    root@Linux Server0# rpm -ivh jns-jdm-1.0-0-17.4R1.13.x86_64.rpm

Repeat the steps for the second server.
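The SELinux change in step 1 can be sketched with sed. This demo runs against a stand-in copy of the file (the path and initial contents below are illustrative); on a real server you would edit /etc/selinux/config itself and then reboot:

```shell
# Demo on a stand-in copy; on the server, target /etc/selinux/config and reboot afterward.
cfg=/tmp/selinux-config-demo
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$cfg"
grep '^SELINUX=' "$cfg"   # prints SELINUX=disabled
```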

Installing JDM Ubuntu Package on x86 Servers Running Ubuntu 16.04

Before installing the JDM Ubuntu package for x86 servers, ensure that you have installed the additional packages. For more details, see Installing Additional Packages for JDM.

Download and install the JDM Ubuntu package for x86 servers running Ubuntu 16.04 as follows:

To download the JDM Ubuntu package:

  1. Go to the Juniper support page for Junos Node Slicing.

  2. Click JDM > Junos OS version number > Juniper Device Manager version number (for Debian).

  3. On the Software Download page, select the I Agree option under the End User License Agreement and then click Proceed.

To install the JDM package on the x86 servers running Ubuntu 16.04, perform the following steps on each of the servers:

  1. Disable AppArmor and reboot the server.

    root@Linux Server0# systemctl stop apparmor

    root@Linux Server0# systemctl disable apparmor

    root@Linux Server0# reboot

  2. Install the JDM Ubuntu package (indicated by the .deb extension) on each server by using the dpkg -i command.
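The Ubuntu counterpart of the rpm command in the RHEL section is dpkg -i. The package filename below is hypothetical (it mirrors the RHEL example); substitute the name of the .deb file you actually downloaded:

```
root@Linux server0# dpkg -i jns-jdm-1.0-0-17.4R1.13.deb
```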

Repeat the steps for the second server.

Configuring JDM on the x86 Servers

Use the following steps to configure JDM on each of the x86 servers.

  1. On each server, start the JDM and assign the identities server0 and server1 to the two servers, respectively, as follows:

    On one server, run the following command:

    root@Linux server0# jdm start server=0

    On the other server, run the following command:

    root@Linux server1# jdm start server=1

    Note

    The identities, once assigned, cannot be changed without uninstalling and reinstalling the JDM.

  2. Enter the JDM console on each server by running the following command:

    root@Linux Server0# jdm console

  3. Log in as the root user.
  4. Enter the JDM CLI by running the following command:

    root@jdm% cli

    Note

    The JDM CLI is similar to the Junos OS CLI.

  5. Set the root password for the JDM.

    root@jdm# set system root-authentication plain-text-password

  6. Commit the changes:

    root@jdm# commit

  7. Enter Ctrl-] to exit from the JDM console.
  8. From the Linux host, run the ssh jdm command to log in to the JDM shell.

Configuring Non-Root Users in JDM (Junos Node Slicing)

Starting in Junos OS Release 18.3R1, in the external server model, you can create non-root users on the Juniper Device Manager (JDM) for Junos Node Slicing. You need a root account to create a non-root user. Non-root users can log in to the JDM through the JDM console or SSH. Each non-root user is given a username and assigned a predefined login class.

The non-root users can perform the following functions:

  • Interact with JDM.

  • Orchestrate and manage Guest Network Functions (GNFs).

  • Monitor the state of the JDM, the host server, and the GNFs by using JDM CLI commands.

Note

The non-root user accounts function only inside JDM, not on the host server.

To create non-root users in JDM:

  1. Log in to JDM as a root user.
  2. Define a username and assign the user a predefined login class.

    root@jdm# set system login user username class predefined-login-class

  3. Set the password for the user.

    root@jdm# set system login user username authentication plain-text-password

  4. Commit the changes.

    root@jdm# commit
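As an illustration of the steps above, creating an operator-class user might look like this (the username jdm-oper is hypothetical):

```
root@jdm# set system login user jdm-oper class operator
root@jdm# set system login user jdm-oper authentication plain-text-password
root@jdm# commit
```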

Table 1 contains the predefined login classes that JDM supports for non-root users:

Table 1: Predefined Login Classes

Login Class

Permissions

super-user

  • Create, delete, start and stop GNFs.

  • Start and stop daemons inside the JDM.

  • Execute all CLIs.

  • Access the shell.

operator

  • Start and stop GNFs.

  • Restart daemons inside the JDM.

  • Execute all basic CLI operational commands (except those that modify the GNF or JDM configuration).

read-only

Similar to the operator class, except that users cannot restart daemons inside the JDM.

unauthorized

Ping and traceroute operations.

Configuring x86 Server Interfaces in JDM

In the JDM, you must configure:

  • The two 10-Gbps server ports that are connected to the MX Series router.

  • The server port to be used as the JDM management port.

  • The server port to be used as the GNF management port.

Therefore, you need to identify the following on each server before starting the configuration of the ports:

  • The server interfaces (for example, p3p1 and p3p2) that are connected to CB0 and CB1 on the MX Series router.

  • The server interfaces (for example, em2 and em3) to be used for JDM management and GNF management.

For more information, see the figure Connecting the Servers and the Router.

Note
  • You need this information for both server0 and server1.

  • These interfaces are visible only on the Linux host.

To configure the x86 server interfaces in JDM, perform the following steps on both the servers:

  1. On server0, apply the following configuration statements:
    root@jdm# set groups server0 server interfaces cb0 p3p1
    root@jdm# set groups server0 server interfaces cb1 p3p2
    root@jdm# set groups server1 server interfaces cb0 p3p1
    root@jdm# set groups server1 server interfaces cb1 p3p2
    root@jdm# set apply-groups [ server0 server1 ]
    root@jdm# commit
    root@jdm# set groups server0 server interfaces jdm-management em2
    root@jdm# set groups server0 server interfaces vnf-management em3
    root@jdm# set groups server1 server interfaces jdm-management em2
    root@jdm# set groups server1 server interfaces vnf-management em3
    root@jdm# commit
  2. Repeat step 1 on server1.

    Note

    Ensure that you apply the same configuration on both server0 and server1.

  3. Share the ssh identities between the two x86 servers.

    At both server0 and server1, run the following JDM CLI command:

    root@jdm> request server authenticate-peer-server

    Note

    The request server authenticate-peer-server command displays a CLI message requesting you to log in to the peer server using ssh to verify the operation. To log in to the peer server, you need to prefix ip netns exec jdm_nv_ns to ssh root@jdm-server1.

    For example, to log in to the peer server from server0, exit the JDM CLI, and use the following command from JDM shell:

    root@jdm:~# ip netns exec jdm_nv_ns ssh root@jdm-server1

    Similarly, to log in to the peer server from server1, use the following command:

    root@jdm:~# ip netns exec jdm_nv_ns ssh root@jdm-server0
  4. Apply the configuration statements in the JDM CLI configuration mode to set the JDM management IP address, default route, and hostname for each JDM instance, as shown in the following example.

    Note

    The management IP address and default route must be specific to your network.

    root@jdm# set groups server0 interfaces jmgmt0 unit 0 family inet address 10.216.105.112/21
    root@jdm# set groups server1 interfaces jmgmt0 unit 0 family inet address 10.216.105.113/21
    root@jdm# set groups server0 routing-options static route 0.0.0.0/0 next-hop 10.216.111.254
    root@jdm# set groups server1 routing-options static route 0.0.0.0/0 next-hop 10.216.111.254
    root@jdm# set groups server0 system host-name test-jdm-server0
    root@jdm# set groups server1 system host-name test-jdm-server1
    root@jdm# commit
    Note
    • jmgmt0 stands for the JDM management port. This is different from the Linux host management port. Both JDM and the Linux host management ports are independently accessible from the management network.

  5. Run the following JDM CLI command on each server and ensure that all the interfaces are up.
    root@jdm> show server connections
Note

For sample JDM configurations, see Sample Configuration for Junos Node Slicing.

If you want to modify the server interfaces configured in the JDM, perform the following steps:

  1. Stop all running GNFs.
    root@jdm> request virtual-network-functions gnf-name stop
  2. From the configuration mode, deactivate the virtual network functions configuration, and then commit the change.
    root@jdm# deactivate virtual-network-functions
    root@jdm# commit
  3. Configure and commit the new interfaces as described in step 1 of the main procedure.
  4. Reboot the JDM from the shell.
    root@jdm:~# reboot
  5. From the configuration mode, activate the virtual network functions configuration, and then commit the change.
    root@jdm# activate virtual-network-functions
    root@jdm# commit

Configuring Guest Network Functions

Configuring a guest network function (GNF) comprises two tasks, one to be performed at the BSYS and the other at the JDM.

Note
  • Before attempting to create a GNF, you must ensure that the servers have sufficient resources (CPU, memory, storage) for that GNF.

  • You need to assign an ID to each GNF. This ID must be the same at the BSYS and the JDM.

At the BSYS, specify a GNF by assigning it an ID and a set of line cards by applying the configuration as shown in the following example:

user@router# set chassis network-slices guest-network-functions gnf 1 fpcs 4

user@router# commit

In the JDM, the GNF VMs are referred to as virtual network functions (VNFs). A VNF has the following attributes:

  • A VNF name.

  • A GNF ID. This ID must be the same as the GNF ID used at the BSYS.

  • The MX Series platform type.

  • A Junos OS image to be used for the GNF.

  • The VNF CPU and memory resource profile template.

To configure a VNF, perform the following steps:

  1. Retrieve the Junos OS image for GNFs and place it in the host OS directory /var/jdm-usr/gnf-images/ on both the servers.

    To download the package:

    1. Go to the Juniper support page for Junos Node Slicing.

    2. Click GNF > Junos OS version number > Junos version number (Guest Network Function) .

    3. On the Software Download page, select the I Agree option under the End User License Agreement and then click Proceed.

  2. Assign this image to a GNF by using the JDM CLI command as shown in the following example:
    root@test-jdm-server0> request virtual-network-functions test-gnf add-image /var/jdm-usr/gnf-images/junos-install-ns-mx-x86-64-17.4R1.10.tgz all-servers
  3. Configure the VNF by applying the configuration statements as shown in the following example:

    root@test-jdm-server0# set virtual-network-functions test-gnf id 1

    root@test-jdm-server0# set virtual-network-functions test-gnf chassis-type mx2020

    root@test-jdm-server0# set virtual-network-functions test-gnf resource-template 2core-16g

    To also specify a baseline or initial Junos OS configuration for a GNF, prepare the GNF configuration file (example: /var/jdm-usr/gnf-config/test-gnf.conf) on both the servers and specify the filename as the parameter in the base-config statement as shown below:

    root@test-jdm-server0# set virtual-network-functions test-gnf base-config /var/jdm-usr/gnf-config/test-gnf.conf

    root@test-jdm-server0# commit synchronize

    Note

    Ensure that:

    • You use the same GNF ID as the one specified earlier in BSYS.

    • The baseline configuration filename (with the path) is the same on both the servers.

    • The syntax of the baseline file contents is in the Junos OS configuration format.

    • The GNF name used here is the same as the one assigned to the Junos OS image for the GNF in step 2.

  4. To verify that the VNF is created, run the following JDM CLI command:

    root@test-jdm-server0> show virtual-network-functions test-gnf

  5. Log in to the console of the VNF by issuing the following JDM CLI command:

    root@test-jdm-server0> request virtual-network-functions test-gnf console

  6. Configure the VNF the same way as you configure an MX Series Routing Engine.
Note
  • For sample configurations, see Sample Configuration for Junos Node Slicing.

  • If you had previously brought down any physical x86 CB interfaces or the GNF management interface from the Linux shell (by using the command ifconfig interface-name down), they are automatically brought up when the GNF is started.

Chassis Configuration Hierarchy at BSYS and GNF

In Junos Node Slicing, the BSYS owns all the physical components of the router, including the line cards and fabric, while the GNFs maintain the forwarding state on their respective line cards. In keeping with this split of responsibility, Junos CLI configuration under the chassis hierarchy should be applied at the BSYS or at the GNF as follows:

  • Physical-level parameters under the chassis configuration hierarchy should be applied at the BSYS. For example, the configuration for handling physical errors at an FPC is a physical-level parameter, and should therefore be applied at the BSYS.

    At BSYS Junos CLI:
    [edit]
    user@router# set chassis fpc fpc-slot error major threshold threshold-value action alarm

  • Logical or feature-level parameters under the chassis configuration hierarchy should be applied at the GNF associated with the FPC. For example, the configuration for max-queues per line card is a logical-level parameter, and should therefore be applied at the GNF.

    At GNF Junos CLI:
    [edit]
    user@router# set chassis fpc fpc-slot max-queues value
  • As exceptions, the following two parameters under the chassis configuration hierarchy should be applied at both BSYS and GNF:

    At both BSYS and GNF CLI:
    [edit]
    user@router# set chassis network-services network-services-mode
    user@router# set chassis fpc fpc-slot flexible-queueing-mode

Configuring Abstracted Fabric Interfaces Between a Pair of GNFs

Creating an Abstracted Fabric (AF) interface between two guest network functions (GNFs) involves configurations both at the base system (BSYS) and at the GNF. AF interfaces are created on GNFs based on the BSYS configuration, which is then sent to those GNFs.

Note

Only one AF interface can be configured between a pair of GNFs.

To configure AF interfaces between a pair of GNFs:

  1. At the BSYS, apply the configuration as shown in the following example:
    user@router# set chassis network-slices guest-network-functions gnf 2 af4 peer-gnf id 4
    user@router# set chassis network-slices guest-network-functions gnf 2 af4 peer-gnf af2
    user@router# set chassis network-slices guest-network-functions gnf 4 af2 peer-gnf id 2
    user@router# set chassis network-slices guest-network-functions gnf 4 af2 peer-gnf af4

    In this example, af2 is the Abstracted Fabric interface instance 2 and af4 is the Abstracted Fabric interface instance 4.

    Note

    The allowed AF interface values range from af0 through af9.

    The GNF AF interface will then be visible and up. You can configure an AF interface the same way you configure any other interface.

  2. At the GNF, apply the configuration as shown in the following example:
    user@router-gnf-b# set interfaces af4 unit 0 family inet address 10.10.10.1/24
    user@router-gnf-d# set interfaces af2 unit 0 family inet address 10.10.10.2/24
Note
  • If you want to apply MPLS family configurations on the AF interfaces, you can apply the command set interfaces af-name unit logical-unit-number family mpls on both the GNFs between which the AF interface is configured.

  • For sample AF configurations, see Sample Configuration for Junos Node Slicing.

Class of Service on Abstracted Fabric Interfaces

Class of service (CoS) packet classification assigns an incoming packet to an output queue based on the packet’s forwarding class. See the CoS Configuration Guide for more details.

The following sections explain the forwarding-class-to-queue mapping, and the behavior aggregate (BA) classifiers and rewrites supported on Abstracted Fabric (AF) interfaces.

Forwarding Class-to-Queue Mapping

An AF interface is a simulated WAN interface with most of the capabilities of any other interface, except that traffic destined for a remote Packet Forwarding Engine must still traverse the two fabric queues (low priority and high priority).

Note

Presently, the AF interface operates in 2-queue mode only. Queue-based features such as scheduling, policing, and shaping are therefore not available on an AF interface.

Packets on the AF interface inherit the fabric queue determined by the fabric priority configured for the forwarding class to which the packet belongs. For example, see the following forwarding-class-to-queue map configuration:

[edit]

user@router# show class-of-service forwarding-classes

As shown in the preceding example, when a packet is classified to the forwarding class VoiceSig, the code in the forwarding path examines the fabric priority of that forwarding class and decides which fabric queue to choose for the packet. In this case, the high-priority fabric queue is chosen.
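As a sketch, a forwarding class with high fabric priority could be configured as follows (the class name VoiceSig comes from the text; the queue number is illustrative):

```
user@router# set class-of-service forwarding-classes class VoiceSig queue-num 2 priority high
```

The priority statement here sets the fabric priority (high or low) that selects between the two AF fabric queues.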

BA Classifiers and Rewrites

The behavior aggregate (BA) classifier maps a class-of-service (CoS) value to a forwarding class and loss priority. The forwarding class and loss-priority combination determines the CoS treatment given to the packet in the router. The following BA classifiers and rewrites are supported:

  • Inet-Precedence classifier and rewrite

  • DSCP classifier and rewrite

  • MPLS EXP classifier and rewrite

    You can also apply rewrites for IP packets entering an MPLS tunnel, rewriting both the EXP and IPv4 type-of-service (ToS) bits. This works as it does on other interfaces.

  • DSCP v6 classifier and rewrite for IPv6 traffic

Note

The following are not supported:

  • IEEE 802.1 classification and rewrite

  • IEEE 802.1AD (QinQ) classification and rewrite

See CoS Configuration Guide   for details on CoS BA classifiers.

SNMP Trap Support: Configuring NMS Server

The Juniper Device Manager (JDM) supports the following SNMP traps:

  • LinkUp and linkDown traps for JDM interfaces.

    Standard linkUp/linkDown SNMP traps are generated. A default community string jdm is used.

  • LinkUp/linkDown traps for host interfaces.

    Standard linkUp/linkDown SNMP traps are generated. A default community string host is used.

  • JDM to JDM connectivity loss/regain traps.

    JDM to JDM connectivity loss/regain traps are sent using generic syslog traps (jnxSyslogTrap) through the host management interface.

    The JDM connectivity down trap JDM_JDM_LINK_DOWN is sent when the JDM cannot communicate with the peer JDM on the other server over the cb0 or cb1 links.

    The JDM connectivity up trap JDM_JDM_LINK_UP is sent when either the cb0 or cb1 link comes up and the JDMs on both servers can communicate again.

  • VM(GNF) up/down—libvirtGuestNotif notifications.

    For GNF start/shutdown events, the standard libvirtGuestNotif notifications are generated. For notification details, see the libvirt MIB documentation.

SNMP traps are sent to the target NMS server. To configure the target NMS server details in the JDM, see the following example:

[edit]

root@jdm# show snmp | display set
root@jdm# set snmp name name
root@jdm# set snmp description description
root@jdm# set snmp location location
root@jdm# set snmp contact contact-email
root@jdm# set snmp trap-group tg-1 targets target-ip-address1
root@jdm# set snmp trap-group tg-1 targets target-ip-address2

Sample Configuration for Junos Node Slicing

This section provides sample configurations for Junos Node Slicing.

Sample JDM Configuration

Sample BSYS Configuration with Abstracted Fabric (AF) Interface

Sample AF Configuration at GNF with Class of Service

Assume that there is an AF interface between GNF1 and GNF2. The following sample configuration illustrates how to apply rewrites on the AF interface at GNF1 and classifiers on the AF interface at GNF2, in a scenario where traffic flows from GNF1 to GNF2:

GNF1 Configuration

GNF2 Configuration
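As a minimal sketch of these samples (assuming the af4/af2 interface pair from the earlier AF example and the built-in dscp default rewrite rule and classifier; your rule names and units may differ):

```
GNF1 (apply the rewrite on the AF interface):
user@router-gnf-b# set class-of-service interfaces af4 unit 0 rewrite-rules dscp default

GNF2 (apply the classifier on the AF interface):
user@router-gnf-d# set class-of-service interfaces af2 unit 0 classifiers dscp default
```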

Sample Output for Abstracted Fabric (AF) Interface State at a GNF

user@router-gnf-b> show interfaces af1