Setting Up Junos Node Slicing
If you are using the external server model, you must complete the procedures described in the chapter Preparing for Junos Node Slicing Setup before performing the Junos node slicing setup tasks.
Configuring an MX Series Router to Operate in BSYS Mode (External Server Model)
Ensure that the MX Series router is connected to the x86 servers as described in Connecting the Servers and the Router.
Junos node slicing requires the MX Series router to function as the base system (BSYS).
Use the following steps to configure an MX Series router to operate in BSYS mode:
- Install the Junos OS package for BSYS on both the Routing Engines of the MX Series router.
To download the package:
Go to the Juniper Support page.
Click Base System > Junos OS version number > Junos version number (64-bit High-End).
On the Software Download page, select the I Agree option under End User License Agreement and then click Proceed.
- On the MX Series router, run the show chassis hardware command and verify that the transceivers on both the Control Boards (CBs) are detected. The following text represents a sample output:
root@router> show chassis hardware
…
CB 0             REV 23   750-040257   CABL4989          Control Board
  Xcvr 0         REV 01   740-031980   ANT00F9           SFP+-10G-SR
  Xcvr 1         REV 01   740-031980   APG0SC3           SFP+-10G-SR
CB 1             REV 24   750-040257   CABX8889          Control Board
  Xcvr 0         REV 01   740-031980   AP41BKS           SFP+-10G-SR
  Xcvr 1         REV 01   740-031980   ALN0PCM           SFP+-10G-SR
- On the MX Series router, apply the following configuration statements:
root@router# set chassis network-slices guest-network-functions
root@router# set chassis redundancy graceful-switchover
root@router# set chassis network-services enhanced-ip
root@router# set routing-options nonstop-routing
root@router# set system commit synchronize
root@router# commit
Note On MX960 routers, you must configure the network-services mode as enhanced-ip or enhanced-ethernet. On MX2020 routers, the enhanced-ip configuration statement is enabled by default.
The router now operates in BSYS mode.
A router in BSYS mode is expected to run only the features required for the basic management functions in Junos node slicing. For example, the BSYS is not expected to have interface configurations associated with the line cards installed in the system. Instead, the guest network functions (GNFs) carry the full-fledged router configurations.
Installing JDM RPM Package on x86 Servers Running RHEL (External Server Model)
Before installing the JDM RPM package for x86 servers, ensure that you have installed the additional packages, as described in Installing Additional Packages for JDM.
Download and install the JDM RPM package for x86 servers running RHEL as follows:
To download the package:
Go to the Juniper Support page.
Click JDM > Junos OS version number > Juniper Device Manager version number (for Redhat).
On the Software Download page, select the I Agree option under the End User License Agreement and then click Proceed.
To install the package on x86 servers running RHEL, perform the following steps on each of the servers:
- Disable SELinux and reboot the server. You can disable SELinux by setting SELINUX to disabled in the /etc/selinux/config file. (A scripted example of this step appears at the end of this procedure.)
- Install the JDM RPM package (indicated by the .rpm extension) by using the following command. An example of the JDM RPM package used is shown below:
root@Linux Server0# rpm -ivh jns-jdm-1.0-0-17.4R1.13.x86_64.rpm
Preparing...                          ################################# [100%]
Detailed log of jdm setup saved in /var/log/jns-jdm-setup.log
Updating / installing...
   1:jns-jdm-1.0-0                    ################################# [100%]
Setup host for jdm...
Launch libvirtd in listening mode
Done Setup host for jdm
Installing /juniper/.tmp-jdm-install/juniper_ubuntu_rootfs.tgz...
Configure /juniper/lxc/jdm/jdm1/rootfs...
Configure /juniper/lxc/jdm/jdm1/rootfs DONE
Created symlink from /etc/systemd/system/multi-user.target.wants/jdm.service to /usr/lib/systemd/system/jdm.service.
Done Setup jdm
Redirecting to /bin/systemctl restart rsyslog.service
Repeat the steps for the second server.
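For reference, the SELinux change in step 1 can also be scripted from the Linux shell. The following is a minimal sketch that assumes the stock /etc/selinux/config layout; verify the file contents before and after editing:
root@Linux Server0# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
root@Linux Server0# grep '^SELINUX=' /etc/selinux/config
SELINUX=disabled
root@Linux Server0# reboot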
Installing JDM Ubuntu Package on x86 Servers Running Ubuntu 16.04 (External Server Model)
Before installing the JDM Ubuntu package for x86 servers, ensure that you have installed the additional packages. For more details, see Installing Additional Packages for JDM.
Download and install the JDM Ubuntu package for x86 servers running Ubuntu 16.04 as follows:
To download the JDM Ubuntu package:
Go to the Juniper Support page.
Click JDM > Junos OS version number > Juniper Device Manager version number (for Debian).
On the Software Download page, select the I Agree option under the End User License Agreement and then click Proceed.
To install the JDM package on the x86 servers running Ubuntu 16.04, perform the following steps on each of the servers:
- Disable AppArmor and reboot the server.
root@Linux Server0# systemctl stop apparmor
root@Linux Server0# systemctl disable apparmor
root@Linux Server0# reboot
- Install the JDM Ubuntu package (indicated by the .deb extension) by using the following command. An example of the JDM Ubuntu package used is shown below:
root@Linux Server0# dpkg -i jns-jdm-1.0-0-17.4R1.13.x86_64.deb
Selecting previously unselected package jns-jdm.
(Reading database ... 71846 files and directories currently installed.)
Preparing to unpack jns-jdm-1.0-0-17.4R1.13.x86_64.deb ...
Unpacking jns-jdm (1.0-0) ...
Setting up jns-jdm (1.0-0) ...
Installing /juniper/.tmp-jdm-install/juniper_ubuntu_latest.tgz...
Configure /juniper/lxc/jdm/jdm1/rootfs...
Configure /juniper/lxc/jdm/jdm1/rootfs DONE
Done Setup jdm
Processing triggers for ureadahead (0.100.0-19) ...
Processing triggers for systemd (225-1ubuntu9) ...
Repeat the steps for the second server.
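To confirm that the package is installed, you can query dpkg from the host shell (a quick check; this step is not part of the procedure above):
root@Linux Server0# dpkg -l | grep jns-jdm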
Configuring JDM on the x86 Servers (External Server Model)
Use the following steps to configure JDM on each of the x86 servers.
- At each server, start the JDM, and assign identities for the two servers as server0 and server1, respectively, as follows:
On one server, run the following command:
root@Linux server0# jdm start server=0
Starting JDM
On the other server, run the following command:
root@Linux server1# jdm start server=1
Starting JDM
Note The identities, once assigned, cannot be modified without uninstalling the JDM and then reinstalling it.
- Enter the JDM console on each server by running the following command:
root@Linux Server0# jdm console
Connected to domain jdm
Escape character is ^]
 * Starting Signal sysvinit that the rootfs is mounted      [ OK ]
 * Starting Populate /dev filesystem                        [ OK ]
 * Starting Populate /var filesystem                        [ OK ]
 * Stopping Send an event to indicate plymouth is up        [ OK ]
 * Stopping Populate /var filesystem                        [ OK ]
 * Starting Clean /tmp directory                            [ OK ]
…
jdm login:
- Log in as the root user.
- Enter the JDM CLI by running the following command:
root@jdm% cli
Note The JDM CLI is similar to the Junos OS CLI.
- Set the root password for the JDM.
root@jdm# set system root-authentication plain-text-password
New Password:
Note The JDM root password must be the same on both the servers.
Starting in Junos OS Release 18.3R1, you can create non-root users in JDM. For more information, see Configuring Non-Root Users in JDM (Junos Node Slicing).
JDM installation blocks libvirt port access from outside the host.
- Commit the changes:
root@jdm# commit
- Enter Ctrl-] to exit from the JDM console.
- From the Linux host, run the ssh jdm command to log in to the JDM shell.
Configuring Non-Root Users in JDM (Junos Node Slicing)
In the external server model, you can create non-root users on Juniper Device Manager (JDM) for Junos node slicing, starting in Junos OS Release 18.3R1. You need a root account to create a non-root user. The non-root users can log in to JDM by using the JDM console or through SSH. Each non-root user is provided a username and assigned a predefined login class.
The non-root users can perform the following functions:
Interact with JDM.
Orchestrate and manage Guest Network Functions (GNFs).
Monitor the state of the JDM, the host server and the GNFs by using JDM CLI commands.
The non-root user accounts function only inside JDM, not on the host server.
To create non-root users in JDM:
- Log in to JDM as a root user.
- Define a user name and assign the user a predefined login class.
root@jdm# set system login user username class predefined-login-class
- Set the password for the user.
root@jdm# set system login user username authentication plain-text-password
New Password:
- Commit the changes.
root@jdm# commit
Table 1 contains the predefined login classes that JDM supports for non-root users:
Table 1: Predefined Login Classes
Login Class | Permissions
---|---
super-user | All permissions.
operator | Can monitor the state of the JDM, the host server, and the GNFs, and can restart daemons inside JDM.
read-only | Similar to the operator class, except that the users cannot restart daemons inside JDM.
unauthorized | Ping and traceroute operations.
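For example, to create a hypothetical monitoring user named jdmmonitor with the read-only class, following the procedure above:
root@jdm# set system login user jdmmonitor class read-only
root@jdm# set system login user jdmmonitor authentication plain-text-password
New Password:
root@jdm# commit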
Configuring JDM Interfaces (External Server Model)
In the JDM, you must configure:
The two 10-Gbps server ports that are connected to the MX Series router.
The server port to be used as the JDM management port.
The server port to be used as the GNF management port.
Therefore, you need to identify the following on each server before starting the configuration of the ports:
The server interfaces (for example, p3p1 and p3p2) that are connected to CB0 and CB1 on the MX Series router.
The server interfaces (for example, em2 and em3) to be used for JDM management and GNF management.
For more information, see the figure Connecting the Servers and the Router.
You need this information for both server0 and server1.
These interfaces are visible only on the Linux host.
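For example, you can list the interfaces and check the link state from the Linux host shell to determine which ports are cabled to CB0 and CB1. The interface names below are illustrative and vary by server:
root@Linux Server0# ip -br link show
root@Linux Server0# ethtool p3p1 | grep 'Link detected'
        Link detected: yes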
To configure the x86 server interfaces in JDM, perform the following steps on both the servers:
- On server0, apply the following configuration statements:
root@jdm# set groups server0 server interfaces cb0 p3p1
root@jdm# set groups server0 server interfaces cb1 p3p2
root@jdm# set groups server1 server interfaces cb0 p3p1
root@jdm# set groups server1 server interfaces cb1 p3p2
root@jdm# set apply-groups [ server0 server1 ]
root@jdm# commit
root@jdm# set groups server0 server interfaces jdm-management em2
root@jdm# set groups server0 server interfaces vnf-management em3
root@jdm# set groups server1 server interfaces jdm-management em2
root@jdm# set groups server1 server interfaces vnf-management em3
root@jdm# commit
- Repeat step 1 on server1.
Note Ensure that you apply the same configuration on both server0 and server1.
- Share the ssh identities between the two x86 servers.
At both server0 and server1, run the following JDM CLI command:
root@jdm> request server authenticate-peer-server
Note The request server authenticate-peer-server command displays a CLI message requesting you to log in to the peer server using ssh to verify the operation. To log in to the peer server, you need to prefix ip netns exec jdm_nv_ns to ssh root@jdm-server1.
For example, to log in to the peer server from server0, exit the JDM CLI, and use the following command from JDM shell:
root@jdm:~# ip netns exec jdm_nv_ns ssh root@jdm-server1
Similarly, to log in to the peer server from server1, use the following command:
root@jdm:~# ip netns exec jdm_nv_ns ssh root@jdm-server0
- Apply the configuration statements in the JDM CLI configuration mode to set the JDM management IP address, default route, and JDM hostname for each JDM instance, as shown in the following example.
Note The management IP address and default route must be specific to your network.
JDM does not support IPv6, even though the CLI allows you to configure IPv6 addresses.
root@jdm# set groups server0 interfaces jmgmt0 unit 0 family inet address 10.216.105.112/21
root@jdm# set groups server1 interfaces jmgmt0 unit 0 family inet address 10.216.105.113/21
root@jdm# set groups server0 routing-options static route 0.0.0.0/0 next-hop 10.216.111.254
root@jdm# set groups server1 routing-options static route 0.0.0.0/0 next-hop 10.216.111.254
root@jdm# set groups server0 system host-name test-jdm-server0
root@jdm# set groups server1 system host-name test-jdm-server1
root@jdm# commit
Note jmgmt0 stands for the JDM management port. This is different from the Linux host management port. Both JDM and the Linux host management ports are independently accessible from the management network.
You must complete the ssh key exchange described in step 3 before attempting step 4. If you attempt step 4 without completing step 3, the system displays an error message as shown in the following example:
Failed to fetch JDM software version from server1. If authentication of peer server is not done yet, try running request server authenticate-peer-server.
- Run the following JDM CLI command on each server and ensure that all the interfaces are up.
root@jdm> show server connections
Component                Interface        Status    Comments
Host to JDM port         virbr0           up
Physical CB0 port        p3p1             up
Physical CB1 port        p3p2             up
Physical JDM mgmt port   em2              up
Physical VNF mgmt port   em3              up
JDM-GNF bridge           bridge_jdm_vm    up
CB0                      cb0              up
CB1                      cb1              up
JDM mgmt port            jmgmt0           up
JDM to HOST port         bme1             up
JDM to GNF port          bme2             up
JDM to JDM link0*        cb0.4002         up
JDM to JDM link1         cb1.4002         up
For sample JDM configurations, see Sample Configuration for Junos Node Slicing.
If you want to modify the server interfaces configured in the JDM, perform the following steps:
- Stop all running GNFs.
root@jdm> request virtual-network-functions gnf-name stop
- From the configuration mode, deactivate the virtual network functions configuration, and then commit the change.
root@jdm# deactivate virtual-network-functions
root@jdm# commit
- Configure and commit the new interfaces as described in step 1 of the main procedure.
- Reboot the JDM from the shell.
root@jdm:~# reboot
- From the configuration mode, activate the virtual network functions configuration, and then commit the change.
root@jdm# activate virtual-network-functions
root@jdm# commit
Starting in Junos OS Release 19.2R1, Junos node slicing supports the assignment of a globally unique MAC address range (supplied by Juniper Networks) for GNFs. To know more, see Assigning MAC Addresses to GNF.
Configuring MX Series Router to Operate in In-Chassis Mode
To configure in-chassis Junos node slicing, the MX Series router must have one of the following types of Routing Engines installed:
RE-S-2X00x6-128 (used in MX480 and MX960 routers)
RE-MX200X8-128G (used in MX2010 and MX2020 routers)
REMX2008-X8-128G (used in MX2008 routers)
In the in-chassis model, the base system (BSYS), Juniper Device Manager (JDM), and all guest network functions (GNFs) run within the Routing Engine of the MX Series router. The BSYS and GNFs run on the host as virtual machines (VMs). You must first reduce the resource footprint of the standalone MX Series router as follows:
- Ensure that both the Routing Engines (re0 and re1) in the MX Series router have the required VM host package installed (example: junos-vmhost-install-mx-x86-64-19.2R1.tgz). The VM host package must be version 19.1R1 or later.
- Apply the following configuration, and then reboot the VM host on both the Routing Engines (re0 and re1).
user@router# set vmhost resize vjunos compact
user@router# set system commit synchronize
user@router> request vmhost reboot (re0|re1)
After you apply this configuration and reboot, the resource footprint of the Junos VM on the MX Series Routing Engine shrinks to accommodate the GNF VMs. The resized Junos VM, now operating as the base system (BSYS) on the MX Series Routing Engine, has the following resources:
CPU Cores—1 (Physical)
DRAM—16GB
Storage—14GB (/var)
All files in the /var/ location, including the log files (/var/log) and core files (/var/crash), are deleted when you reboot the VM host after configuring the set vmhost resize vjunos compact statement. If you want to keep any files currently in /var/log or /var/crash for reference, save them before proceeding with the VM host resize configuration.
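For example, you could archive the logs and copy them off the router from the Junos OS CLI before the resize; the backup host and destination path below are placeholders. Copy the archive off-box, because /var/tmp is also removed by the resize:
user@router> file archive compress source /var/log destination /var/tmp/varlog-backup.tgz
user@router> file copy /var/tmp/varlog-backup.tgz admin@backup-host:/backups/varlog-backup.tgz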
Installing and Configuring JDM for In-Chassis Model
Steps listed in this topic apply only to in-chassis Junos node slicing configuration.
Installing JDM RPM Package on MX Series Router (In-Chassis Model)
Before installing the Juniper Device Manager (JDM) RPM package on an MX Series router, you must configure the MX Series router to operate in the in-chassis BSYS mode. For more information, see Configuring MX Series Router to Operate in In-Chassis Mode.
The RPM package jns-jdm-vmhost is meant for the in-chassis Junos node slicing deployment, while the RPM package jns-jdm is used for the external server-based Junos node slicing deployment.
- Download the JDM RPM package from the Juniper Support page.
- Install the JDM RPM package on both Routing Engines (re0 and re1) by using the command shown in the following example:
root@router> request vmhost jdm add jns-jdm-vmhost-18.3-20180930.0.x86_64.rpm
Starting to validate the Package
Finished validating the Package
Starting to validate the Environment
Finished validating the Environment
Starting to copy the RPM package from Admin Junos to vmhost
Finished Copying the RPM package from Admin Junos to vmhost
Starting to install the JDM RPM package
Preparing...                          ##################################################
Detailed log of jdm setup saved in /var/log/jns-jdm-setup.log
jns-jdm-vmhost                        ##################################################
Setup host for jdm...
Done Setup host for jdm
Installing /vm/vm/iapps/jdm/install/juniper/.tmp-jdm-install/juniper_ubuntu_rootfs.tgz...
Configure /vm/vm/iapps/jdm/install/juniper/lxc/jdm/jdm1/rootfs...
Configure /vm/vm/iapps/jdm/install/juniper/lxc/jdm/jdm1/rootfs DONE
Setup Junos cgroups...Done
Done Setup jdm
stopping rsyslogd ... done
starting rsyslogd ... done
Finished installing the JDM RPM package
Installation Successful !
Starting to generate the host public keys at Admin Junos
Finished generating the host public keys at Admin Junos
Starting to copy the host public keys from Admin Junos to vmhost
Finished copying the host public keys from Admin Junos to vmhost
Starting to copy the public keys of Admin junos from vmhost to JDM
Finished copying the public keys of Admin junos from vmhost to JDM
Starting to cleanup the temporary file from Vmhost containing host keys of Admin Junos
Finished cleaning the temporary file from Vmhost containing host keys of Admin Junos
- Run the show vmhost status command to see the vJunos Resource Status on both the Routing Engines.
user@router> show vmhost status re0
bsys-re0:
--------------------------------------------------------------------------
Compute cluster: rainier-re-cc
Compute Node: rainier-re-cn, Online
vJunos Resource Status: Compact
user@router> show vmhost status re1
bsys-re1:
--------------------------------------------------------------------------
Compute cluster: rainier-re-cc
Compute Node: rainier-re-cn, Online
vJunos Resource Status: Compact
Configuring JDM (In-Chassis Model)
Use the following steps to configure JDM on both the Routing Engines of an MX Series router:
- Apply the following command on both the Routing Engines to start JDM:
user@router> request vmhost jdm start
Starting JDM
Starting jdm:
Domain jdm defined from /vm/vm/iapps/jdm//install/juniper/lxc/jdm/current/config/jdm.xml
Domain jdm started
Starting in Junos OS Release 19.3R1, the JDM console does not display the message 'Domain jdm started'. However, this message is added to the system logs when the JDM is started.
Note If hyperthreading is disabled, a warning is displayed when you enter the command request vmhost jdm start, as shown in the following example:
Warning: Hyperthreading is disabled! Cores: (6) Processors: (6) Expected: (12)
- Use the show vmhost jdm status command to check whether the JDM is running.
user@router> show vmhost jdm status
JDM Information
---------------------------
  Package    : jns-jdm-vmhost-19.1-B2.x86_64
  Status     : Running
  PID        : 3088
  Free Space : 62967 (MiB)
- After a few seconds, log in to JDM.
root@router> request vmhost jdm login
****************************************************************************
* The Juniper Device Manager (JDM) must only be used for orchestrating the *
* Virtual Machines for Junos Node Slicing                                  *
*                                                                          *
* Host Linux Distro: Wind River Linux                                      *
* JDM Version: jns-jdm-vmhost-19.1-20181003.dev.common.0.x86_64            *
* Free Disk Space on JDM's root-fs ("/"): 125081(MiB)                      *
****************************************************************************
Last login: Thu Oct 4 15:26:30 2018 from 192.168.1.1
Note You need to have root user privilege on the BSYS to log in to JDM.
The in-chassis JDM root account password can be different from the Junos root account password.
It takes approximately 10 seconds for JDM to start. If you enter the request vmhost jdm login command before JDM starts, you might get the following message:
ssh_exchange_identification: read: Connection reset by peer
- Enter the JDM CLI by running the following command:
root@jdm% cli
- In configuration mode, apply the configurations shown in the following example:
Note The IP addresses shown in the following example are samples. Replace them with the actual IP addresses in your configuration.
root@jdm# set groups server0 system host-name host-name
root@jdm# set groups server0 interfaces jmgmt0 unit 0 family inet address 192.0.2.1/24
root@jdm# set groups server0 routing-options static route 0.0.0.0/0 next-hop 192.0.2.2
root@jdm# set groups server1 system host-name host-name
root@jdm# set groups server1 interfaces jmgmt0 unit 0 family inet address 198.51.100.1/24
root@jdm# set groups server1 routing-options static route 0.0.0.0/0 next-hop 198.51.100.2
- In configuration mode, set the root password for the JDM on both the Routing Engines, and commit.
root@jdm# set apply-groups [server0 server1]
root@jdm# set system root-authentication plain-text-password
New password:
root@jdm# commit
Note The JDM supports only the root user administration account.
- In operational mode, enter the following command on both the Routing Engines to copy the ssh public key to the peer JDM.
root@jdm> request server authenticate-peer-server
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@jdm-server1's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'root@jdm-server1'"
and check to make sure that only the key(s) you wanted were added.
Note You need to enter the root password of the peer JDM when prompted.
- In configuration mode, apply the following command:
root@jdm# set system commit synchronize
In in-chassis Junos node slicing, you cannot ping or send traffic between the management interfaces of the same Routing Engine (for example, from the Routing Engine 0 of GNF1 to the Routing Engine 0 of GNF2 or from the Routing Engine 0 of GNF1 to JDM).
In in-chassis mode, you cannot perform an scp operation between the BSYS and the JDM management interfaces.
You must complete the ssh key exchange described in step 7 before attempting step 8. If you attempt step 8 without completing step 7, the system displays an error message as shown in the following example:
Failed to fetch JDM software version from server1. If authentication of peer server is not done yet, try running request server authenticate-peer-server.
Starting in Junos OS Release 19.2R1, Junos node slicing supports the assignment of a globally unique MAC address range (supplied by Juniper Networks) for GNFs. To know more, see Assigning MAC Addresses to GNF.
Assigning MAC Addresses to GNF
Starting in Junos OS Release 19.2R1, Junos node slicing supports the assignment of a globally unique MAC address range (supplied by Juniper Networks) for GNFs.
To receive the globally unique MAC address range for the GNFs, contact your Juniper Networks representative and provide your GNF license SSRN (Software Support Reference Number), which will have been shipped to you electronically upon your purchase of the GNF license. To locate the SSRN in your GNF license, refer to the Juniper Networks Knowledge Base article KB11364.
For each GNF license, you will then be provided an ‘augmented SSRN’, which includes the globally unique MAC address range assigned by Juniper Networks for that GNF license. You must then configure this augmented SSRN at the JDM CLI as follows:
root@jdm# set system vnf-license-supplement vnf-id gnf-id license-supplement-string augmented-ssrn-string
root@jdm# commit
Note An augmented SSRN must be used for only one GNF ID. In the JDM, the GNF VMs are referred to as virtual network functions (VNFs), and the GNF ID is one of the attributes of a VNF. The attributes of a VNF are fully described in the section Configuring Guest Network Functions.
By default, the augmented SSRN is validated. If you need to skip this validation, append the no-validate attribute, as in: set system vnf-license-supplement vnf-id gnf-id license-supplement-string augmented-ssrn-string no-validate.
You can configure the augmented SSRN for a GNF ID only when the GNF is not operational and has not yet been provisioned. You must configure the augmented SSRN for a GNF ID before configuring the GNF.
Ensure that the GNF ID for which the augmented SSRN is being configured has not already been provisioned. If the GNF ID is already provisioned, you must first delete the GNF for that GNF ID on both the servers (in the case of the external server model) or on both the Routing Engines (in the case of the in-chassis model) before configuring the augmented SSRN.
Analogously, you must first delete the GNF for a given GNF ID on both the servers (external server model) or on both the Routing Engines (in-chassis model) before deleting the augmented SSRN for that GNF ID.
You cannot apply an augmented SSRN to a GNF that is based on Junos OS 19.1R1 or older.
To confirm that the assigned MAC address range for a GNF has been applied, use the Junos OS CLI command show chassis mac-addresses when the GNF becomes operational; the output matches a substring of the augmented SSRN.
Configuring Guest Network Functions
Configuring a guest network function (GNF) comprises two tasks, one to be performed at the BSYS and the other at the JDM.
Before attempting to create a GNF, you must ensure that the servers (or the Routing Engines, in the case of the in-chassis model) have sufficient resources (CPU, memory, storage) for that GNF.
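For example, on an external server you can check the available resources from the Linux host shell before creating a GNF (a minimal sketch; the required values depend on the resource template you plan to assign, and /vm-primary is shown because the add-image output later in this section places GNF images there):
root@Linux Server0# nproc
root@Linux Server0# free -g
root@Linux Server0# df -h /vm-primary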
You need to assign an ID to each GNF. This ID must be the same at the BSYS and the JDM.
At the BSYS, specify a GNF by assigning it an ID and a set of line cards by applying the configuration as shown in the following example:
user@router# set chassis network-slices guest-network-functions gnf 1 fpcs 4
user@router# commit
In the JDM, the GNF VMs are referred to as virtual network functions (VNFs). A VNF has the following attributes:
A VNF name.
A GNF ID. This ID must be the same as the GNF ID used at the BSYS.
The MX Series platform type.
A Junos OS image to be used for the GNF.
The VNF server resource template.
At the JDM, to configure a VNF, perform the following steps:
- Use the JDM shell command scp to retrieve the Junos OS node slicing image for the GNF and place it in the JDM local directory /var/jdm-usr/gnf-images (repeat this step to retrieve the GNF configuration file).
root@jdm:~# scp source-location-of-the-gnf-image /var/jdm-usr/gnf-images
root@jdm:~# scp source-location-of-the-gnf-configuration-file /var/jdm-usr/gnf-config
- Assign this image to a GNF by using the JDM CLI command as shown in the following example:
root@test-jdm-server0> request virtual-network-functions test-gnf add-image /var/jdm-usr/gnf-images/junos-install-ns-mx-x86-64-17.4R1.10.tgz all-servers
Server0: Added image: /vm-primary/test-gnf/test-gnf.img
Server1: Added image: /vm-primary/test-gnf/test-gnf.img
- Configure the VNF by applying the configuration statements as shown in the following example:
root@test-jdm-server0# set virtual-network-functions test-gnf id 1
root@test-jdm-server0# set virtual-network-functions test-gnf chassis-type mx2020
root@test-jdm-server0# set virtual-network-functions test-gnf resource-template 2core-16g
root@test-jdm-server0# set system vnf-license-supplement vnf-id 1 license-supplement-string RTU00023003204-01-AABBCCDDEE00-1100-01-411C
For the in-chassis model, do not configure the platform type (set virtual-network-functions test-gnf chassis-type mx2020); it is detected automatically.
Starting in Junos OS Release 19.2R1, Junos node slicing supports the assignment of a globally unique MAC address range (supplied by Juniper Networks) for GNFs. To know more, see Assigning MAC Addresses to GNF.
To also specify a baseline or initial Junos OS configuration for a GNF, prepare the GNF configuration file (example: /var/jdm-usr/gnf-config/test-gnf.conf) on both the servers (server0 and server1) for the external server model, or on both the Routing Engines (re0 and re1) for the in-chassis model, and specify the filename as the parameter in the base-config statement as shown below (a hypothetical example of the file contents follows the note below):
root@test-jdm-server0# set virtual-network-functions test-gnf base-config /var/jdm-usr/gnf-config/test-gnf.conf
root@test-jdm-server0# commit synchronize
Note Ensure that:
You use the same GNF ID as the one specified earlier in BSYS.
The baseline configuration filename (with the path) is the same on both the servers / Routing Engines.
The syntax of the baseline file contents is in the Junos OS configuration format.
The GNF name used here is the same as the one assigned to the Junos OS image for the GNF in step 2.
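For illustration only, a hypothetical minimal baseline file might set just a hostname and root authentication. The contents must be valid Junos OS configuration syntax, and the encrypted password below is a placeholder:
system {
    host-name test-gnf;
    root-authentication {
        encrypted-password "<hashed-password>"; ## SECRET-DATA
    }
}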
- To verify that the VNF is created, run the following JDM CLI command:
root@test-jdm-server0> show virtual-network-functions test-gnf
- Log in to the console of the VNF by issuing the following JDM CLI command:
root@test-jdm-server0> request virtual-network-functions test-gnf console
Note Remember to log out of the VNF console after you have completed your configuration tasks. We recommend that you set an idle timeout by using the command set system login idle-timeout minutes. Otherwise, if a user forgets to log out of the VNF console session, another user can log in without providing the access credentials. For more information, see system login (Junos Node Slicing).
- Configure the VNF the same way as you configure an MX Series Routing Engine.
The CLI prompt for the in-chassis model is root@jdm#.
For sample configurations, see Sample Configuration for Junos Node Slicing.
In the case of the external server model, if you had previously brought down any physical x86 CB interfaces or the GNF management interface from the Linux shell (by using the command ifconfig interface-name down), these interfaces are automatically brought up when the GNF is started.
Configuring Abstracted Fabric Interfaces Between a Pair of GNFs
Creating an Abstracted Fabric (af) interface between two guest network functions (GNFs) involves configurations both at the base system (BSYS) and at the GNF. Abstracted Fabric interfaces are created on GNFs based on the BSYS configuration, which is then sent to those GNFs.
Only one af interface can be configured between a pair of GNFs.
To configure af interfaces between a pair of GNFs:
- At the BSYS, apply the configuration as shown in the following example:
user@router# set chassis network-slices guest-network-functions gnf 2 af4 peer-gnf id 4
user@router# set chassis network-slices guest-network-functions gnf 2 af4 peer-gnf af2
user@router# set chassis network-slices guest-network-functions gnf 4 af2 peer-gnf id 2
user@router# set chassis network-slices guest-network-functions gnf 4 af2 peer-gnf af4
In this example, af2 is the Abstracted Fabric interface instance 2 and af4 is the Abstracted Fabric interface instance 4.
Note The allowed af interface values range from af0 through af9.
The GNF af interface will be visible and up. You can configure an af interface the way you configure any other interface.
- At the GNF, apply the configuration as shown in the following example:
user@router-gnf-b# set interfaces af4 unit 0 family inet address 10.10.10.1/24
user@router-gnf-d# set interfaces af2 unit 0 family inet address 10.10.10.2/24
If you want to apply MPLS family configurations on the af interfaces, you can apply the command set interfaces af-name unit logical-unit-number family mpls on both the GNFs between which the af interface is configured.
For sample af configurations, see Sample Configuration for Junos Node Slicing.
Class of Service on Abstracted Fabric Interfaces
Class-of-service (CoS) packet classification assigns an incoming packet to an output queue based on the packet's forwarding class. See the CoS Configuration Guide.
The following sections explain the forwarding class-to-queue mapping, and the behavior aggregate (BA) classifiers and rewrites supported on the Abstracted Fabric (af) interfaces.
Forwarding Class-to-Queue Mapping
An af interface is a simulated WAN interface with most of the capabilities of any other interface, except that traffic destined for a remote Packet Forwarding Engine must still go over the two fabric queues (low priority and high priority).
Currently, an af interface operates in 2-queue mode only. Hence, queue-based features such as scheduling, policing, and shaping are not available on an af interface.
Packets on the af interface inherit the fabric queue that is determined by the fabric priority configured for the forwarding class to which that packet belongs. For example, see the following forwarding class to queue map configuration:
[edit]
user@router# show class-of-service forwarding-classes
class Economy queue-num 0 priority low; /* Low fabric priority */
class Stream queue-num 1;
class Business queue-num 2;
class Voice queue-num 3;
class NetControl queue-num 3;
class Business2 queue-num 4;
class Business3 queue-num 5;
class VoiceSig queue-num 6 priority high; /* High fabric priority */
class VoiceRTP queue-num 7;
As shown in the preceding example, when a packet is classified to the forwarding class VoiceSig, the forwarding path examines the fabric priority of that forwarding class and decides which fabric queue to use for the packet. In this case, the high-priority fabric queue is chosen.
BA Classifiers and Rewrites
The behavior aggregate (BA) classifier maps a class-of-service (CoS) value to a forwarding class and loss priority. The forwarding class and loss-priority combination determines the CoS treatment given to the packet in the router. The following BA classifiers and rewrites are supported:
Inet-Precedence classifier and rewrite
DSCP classifier and rewrite
MPLS EXP classifier and rewrite
You can also apply rewrites for IP packets entering the MPLS tunnel and rewrite both the EXP and IPv4 type-of-service (ToS) bits. This works as it does on other normal interfaces.
DSCPv6 classifier and rewrite for IPv6 traffic
The following are not supported:
IEEE 802.1 classification and rewrite
IEEE 802.1AD (QinQ) classification and rewrite
See the CoS Configuration Guide.
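As an illustration, the following sketch applies a DSCP rewrite rule on the af interface of the sending GNF and a DSCP classifier on the af interface of the receiving GNF, matching the GNF-to-GNF traffic direction used elsewhere in this document. The rule names are placeholders that you would define under the class-of-service hierarchy:
user@router-gnf-b# set class-of-service interfaces af4 unit 0 rewrite-rules dscp my-dscp-rewrite
user@router-gnf-d# set class-of-service interfaces af2 unit 0 classifiers dscp my-dscp-classifier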
Optimizing Fabric Path for Abstracted Fabric Interface
You can optimize the traffic flowing over the abstracted fabric (af) interfaces between two guest network functions (GNFs) by configuring a fabric path optimization mode. This feature reduces fabric bandwidth consumption by preventing an additional fabric hop (the switching of traffic flows from one Packet Forwarding Engine to another) before the packets reach the destination Packet Forwarding Engine. Fabric path optimization, supported on MX2008, MX2010, and MX2020 routers with MPC9E and MX2K-MPC11E, prevents only the single additional traffic hop that results from abstracted fabric interface load balancing.
You can configure one of the following fabric path optimization modes:
monitor—If you configure this mode, the peer GNF monitors the traffic flow and sends information to the source GNF about the Packet Forwarding Engine to which the traffic is being forwarded currently and the desired Packet Forwarding Engine that could provide an optimized traffic path. In this mode, the source GNF does not forward the traffic towards the desired Packet Forwarding Engine.
optimize—If you configure this mode, the peer GNF monitors the traffic flow and sends information to the source GNF about the Packet Forwarding Engine to which the traffic is being forwarded currently and the desired Packet Forwarding Engine that could provide an optimized traffic path. The source GNF then forwards the traffic towards the desired Packet Forwarding Engine.
To configure a fabric path optimization mode, use the following CLI commands at the BSYS:
user@router# set chassis network-slices guest-network-functions gnf id af-name collapsed-forward (monitor | optimize)
user@router# commit
After configuring fabric path optimization, you can use the command show interfaces af-interface-name at the GNF to view the number of packets currently flowing on the optimal and non-optimal paths.
SNMP Trap Support: Configuring NMS Server (External Server Model)
The Juniper Device Manager (JDM) supports the following SNMP traps:
LinkUp and linkDown traps for JDM interfaces.
Standard linkUp/linkDown SNMP traps are generated. A default community string jdm is used.
LinkUp/linkDown traps for host interfaces.
Standard linkUp/linkDown SNMP traps are generated. A default community string host is used.
JDM to JDM connectivity loss/regain traps.
JDM to JDM connectivity loss/regain traps are sent using generic syslog traps (jnxSyslogTrap) through the host management interface.
The JDM connectivity down trap JDM_JDM_LINK_DOWN is sent when the JDM is not able to communicate with the peer JDM on another server over cb0 or cb1 links. See the following example:
{ SNMPv2c C=host { V2Trap(296) R=1299287309
  .1.3.6.1.2.1.1.3.0=42761992
  .1.3.6.1.6.3.1.1.4.1.0=.1.3.6.1.4.1.2636.4.12.0.1
  .1.3.6.1.4.1.2636.3.35.1.1.1.2.1="JDM_JDM_LINK_DOWN"
  .1.3.6.1.4.1.2636.3.35.1.1.1.3.1=""
  .1.3.6.1.4.1.2636.3.35.1.1.1.4.1=5
  .1.3.6.1.4.1.2636.3.35.1.1.1.5.1=24
  .1.3.6.1.4.1.2636.3.35.1.1.1.6.1=0
  .1.3.6.1.4.1.2636.3.35.1.1.1.7.1="jdmmon"
  .1.3.6.1.4.1.2636.3.35.1.1.1.8.1="JDM-HOST"
  .1.3.6.1.4.1.2636.3.35.1.1.1.9.1="JDM to JDM Connection Lost"
  .1.3.6.1.6.3.1.1.4.3.0.0="" } }
The JDM connectivity up trap JDM_JDM_LINK_UP is sent when either the cb0 or cb1 link comes up and the JDMs on both the servers are able to communicate again. See the following example:
{ SNMPv2c C=host { V2Trap(292) R=998879760
  .1.3.6.1.2.1.1.3.0=42762230
  .1.3.6.1.6.3.1.1.4.1.0=.1.3.6.1.4.1.2636.4.12.0.1
  .1.3.6.1.4.1.2636.3.35.1.1.1.2.1="JDM_JDM_LINK_UP"
  .1.3.6.1.4.1.2636.3.35.1.1.1.3.1=""
  .1.3.6.1.4.1.2636.3.35.1.1.1.4.1=5
  .1.3.6.1.4.1.2636.3.35.1.1.1.5.1=24
  .1.3.6.1.4.1.2636.3.35.1.1.1.6.1=0
  .1.3.6.1.4.1.2636.3.35.1.1.1.7.1="jdmmon"
  .1.3.6.1.4.1.2636.3.35.1.1.1.8.1="JDM-HOST"
  .1.3.6.1.4.1.2636.3.35.1.1.1.9.1="JDM to JDM Connection Up"
  .1.3.6.1.6.3.1.1.4.3.0.0="" } }
VM(GNF) up/down—libvirtGuestNotif notifications.
For GNF start/shutdown events, the standard libvirtGuestNotif notifications are generated. For libvirtMIB notification details, see the LIBVIRT-MIB documentation. Also, see the following example:
HOST [UDP: [127.0.0.1]:53568->[127.0.0.1]]: Trap ,
  DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (636682) 1:46:06.82,
  SNMPv2-MIB::snmpTrapOID.0 = OID: LIBVIRT-MIB::libvirtGuestNotif,
  LIBVIRT-MIB::libvirtGuestName.0 = STRING: "gnf1",
  LIBVIRT-MIB::libvirtGuestUUID.1 = STRING: 7ad4bc2a-16db-d8c0-1f5a-6cb777e17cd8,
  LIBVIRT-MIB::libvirtGuestState.2 = INTEGER: running(1),
  LIBVIRT-MIB::libvirtGuestRowStatus.3 = INTEGER: active(1)
SNMP traps are sent to the target NMS server. To configure the target NMS server details in the JDM, see the following example:
[edit]
root@jdm# show snmp | display set
root@jdm# set snmp name name
root@jdm# set snmp description description
root@jdm# set snmp location location
root@jdm# set snmp contact contact-email
root@jdm# set snmp trap-group tg-1 targets target-ip-address1
root@jdm# set snmp trap-group tg-1 targets target-ip-address2
JDM does not write any configuration to the host SNMP configuration file (/etc/snmp/snmpd.conf). Hence, JDM installation and subsequent configuration do not have any impact on the host SNMP. The SNMP configuration CLI command in JDM is used only to configure the JDM's snmpd.conf file, which is present within the container. To generate the linkUp/linkDown traps, you must manually include the configuration shown in the following example in the host server's snmpd.conf file (/etc/snmp/snmpd.conf):
createUser trapUser
iquerySecName trapUser
rouser trapUser
defaultMonitors yes
notificationEvent linkUpTrap linkUp ifIndex ifAdminStatus ifOperStatus ifDescr
notificationEvent linkDownTrap linkDown ifIndex ifAdminStatus ifOperStatus ifDescr
monitor -r 10 -e linkUpTrap "Generate linkUp" ifOperStatus != 2
monitor -r 10 -e linkDownTrap "Generate linkDown" ifOperStatus == 2
trap2sink <NMS-IP> host
In the above example, replace <NMS-IP> with the IP address of the Network Management Station (NMS).
Chassis Configuration Hierarchy at BSYS and GNF
In Junos node slicing, the BSYS owns all the physical components of the router, including the line cards and fabric, while the GNFs maintain the forwarding state on their respective line cards. In keeping with this split responsibility, any Junos CLI configuration under the chassis hierarchy should be applied at the BSYS or at the GNF as follows:
Physical-level parameters under the chassis configuration hierarchy should be applied at the BSYS. For example, the configuration for handling physical errors at an FPC is a physical-level parameter, and should therefore be applied at the BSYS.
At BSYS Junos CLI:
[edit]
user@router# set chassis fpc fpc-slot error major threshold threshold-value action alarm
Logical or feature-level parameters under the chassis configuration hierarchy should be applied at the GNF associated with the FPC. For example, the configuration for max-queues per line card is a logical-level parameter, and should therefore be applied at the GNF.
At GNF Junos CLI:
[edit]
user@router# set chassis fpc fpc-slot max-queues value
As exceptions, the following two parameters under the chassis configuration hierarchy should be applied at both BSYS and GNF:
At both BSYS and GNF CLI:
[edit]
user@router# set chassis network-services network-services-mode
user@router# set chassis fpc fpc-slot flexible-queueing-mode
Sample Configuration for Junos Node Slicing
This section provides sample configurations for Junos node slicing.
Sample JDM Configuration (External Server Model)
Sample JDM Configuration (In-Chassis Model)
Sample BSYS Configuration with Abstracted Fabric Interface
Sample Abstracted Fabric Configuration at GNF with Class of Service
Assume that there is an Abstracted Fabric (af) interface between GNF1 and GNF2. The following sample configuration illustrates how to apply rewrites on the af interface at GNF1 and apply classifiers on the af interface on GNF2, in a scenario where traffic comes from GNF1 to GNF2:
GNF1 Configuration
GNF2 Configuration
Sample Output for Abstracted Fabric Interface State at a GNF
user@router-gnf-b> show interfaces af9
Physical interface: af9, Enabled, Physical link is Up
  Interface index: 209, SNMP ifIndex: 527
  Type: Ethernet, Link-level type: Ethernet, MTU: 1514, Speed: 370000mbps
  Device flags   : Present Running
  Interface flags: Internal: 0x4000
  Link type      : Full-Duplex
  Link flags     : None
  Current address: 00:90:69:2b:00:4c, Hardware address: 00:90:69:2b:00:4c
  Last flapped   : 2018-09-12 01:44:01 PDT (00:01:02 ago)
  Input rate     : 0 bps (0 pps)
  Output rate    : 0 bps (0 pps)
  Bandwidth      : 370 Gbps
  Peer GNF id    : 9
  Peer GNF Forwarding element(FE) view :
  FPC slot:FE num   FE Bandwidth(Gbps)   Status   Transmit Packets   Transmit Bytes
  6:0               130                  Up       0                  0
  12:0              120                  Up       0                  0
  12:1              120                  Up       0                  0
  Residual Transmit Statistics :
  Packets : 0  Bytes : 0
  Fabric Queue Statistics :
  FPC slot:FE num   High priority(pkts)   Low priority(pkts)
  6:0               0                     0
  12:0              0                     0
  12:1              0                     0
  FPC slot:FE num   High priority(bytes)  Low priority(bytes)
  6:0               0                     0
  12:0              0                     0
  12:1              0                     0
  Residual Queue Statistics :
  High priority(pkts)   Low priority(pkts)
  0                     0
  High priority(bytes)  Low priority(bytes)
  0                     0

  Logical interface af9.0 (Index 332) (SNMP ifIndex 528)
    Flags: Up SNMP-Traps 0x4004000 Encapsulation: ENET2
    Input packets : 0
    Output packets: 13
    Protocol inet, MTU: 1500