Managing Virtual Network Functions Using JDM

Understanding Virtual Network Functions

Virtualized network functions (VNFs) include all virtual entities that can be launched and managed from the Juniper Device Manager (JDM). Currently, virtual machines (VMs) are the only VNF type that is supported.

There are several components in a JDM environment:

  • JDM—Manages the life cycle for all service VMs. JDM also provides a CLI with configuration persistence or the ability to use NETCONF for scripting and automation.

  • Primary Junos OS VM—A system VM that is the primary virtual device. This VM is always present when the system is running.

  • Other Junos OS VMs—These VMs are service VMs and are activated dynamically by an external controller. A typical example of this type of VM is a vSRX Virtual Firewall instance.

  • Third-party VNFs—JDM supports the creation and management of third-party VMs such as Ubuntu Linux VMs.

The JDM architecture provides an internal network that connects all VMs to the JDM as shown in Figure 1.

Figure 1: Network Connections Between JDM and the VMs

The JDM can reach any VNF using an internal network (192.0.2.1/24).

Note:

Up to Junos OS Release 15.1X53-D470, the liveliness IP address is in the 192.168.1.0/24 subnet. In all later Junos OS releases, the liveliness IP address is in the 192.0.2.0/24 subnet.

A VNF can own or share management ports and NIC ports in the system.

All VMs run in isolation and a state change in one VM does not affect another VM. When the system restarts, the service VMs are brought online as specified in the persistent configuration file. When you gracefully shut down the system, all VMs including the Junos VMs are shut down.

Table 1 provides a glossary of commonly used VNF acronyms and terms.

Table 1: VNF Glossary

Term    Definition
JCP     Junos Control Plane (also known as the primary Junos OS VM)
JDM     Juniper Device Manager
NFV     Network Functions Virtualization
VM      Virtual Machine
VNF     Virtualized Network Function

Prerequisites to Onboard Virtual Network Functions on NFX250 Devices

You can onboard and manage Juniper VNFs and third-party VNFs on NFX devices through the Junos Control Plane (JCP).

The number of VNFs that you can onboard on the device depends on the availability of system resources such as the number of CPUs and system memory.

Before you onboard the VNFs, it is recommended to check the available system resources such as CPUs, memory, and storage for VNFs. For more information, see Managing the VNF Life Cycle.

Prerequisites for VNFs

To instantiate VNFs, the NFX devices support:

  • KVM based hypervisor deployment

  • OVS or Virtio interface drivers

  • raw or qcow2 VNF file types

  • (Optional) SR-IOV

  • (Optional) CD-ROM and USB configuration drives

  • (Optional) Hugepages for memory requirements

Managing the VNF Life Cycle

You can use the JDM CLI to manage VNFs. Additionally, the libvirt software offers extensive virtualization features. To ensure that you are not limited by the CLI, JDM provides an option to operate a VNF by using an XML descriptor file. The Network Configuration Protocol (NETCONF) supports all VNF operations. Multiple VNFs can coexist in a system, and you can configure each VNF by using either an XML descriptor file or an image.

Note:

Ensure that VNF resources that are specified in the XML descriptor file do not exceed the available system resources.

This topic covers the life-cycle management of a VNF.

Planning Resources for a VNF

Purpose

Before launching a VNF, it is important to check the system inventory and confirm that the resources required by the VNF are available. The VNF must be designed and configured properly so that its resource requirements do not exceed the available capacity of the system.
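For example, you can take a quick snapshot of resource usage from the JDM CLI before you define the VNF. These are the two commands referenced in this topic; the output fields vary by platform and release:

  user@host> show system inventory
  user@host> show system visibility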

Note:
  • The output of the show system inventory command displays only a snapshot of the current system resource usage. When you start a VNF, the resources actually available might be less than what was available when you installed the VNF package.

  • Before starting a VNF, you must check the system resource usage.

Note:

Some of the physical CPUs are reserved by the system. All physical CPUs except those listed in the following tables are available for user-defined VNFs.

Table 2 provides the list of physical CPUs that are reserved for NFX250-LS1.

Table 2: Physical CPU Allocation for NFX250-LS1

CPU Core    Allocation
0           Host, JDM, and JCP
4           Host bridge
7           IPsec

Table 3 provides the list of physical CPUs that are reserved for NFX250-S1, NFX250-S2, and NFX250-S1E devices.

Table 3: Physical CPU Allocation for NFX250

CPU Core    Allocation
0           Host, JDM, and JCP
6           Host bridge
7           IPsec


Managing the VNF Image

To load a VNF image on the device from a remote location, use the file-copy command. Alternatively, you can use the NETCONF command file-put to load a VNF image.

Note:

You must save the VNF image in the /var/third-party/images directory.
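For example, the following is a representative sketch of the file-copy usage described above; the remote URL, user name, and image name are placeholders, and the exact argument form may vary by release:

  user@host> file-copy scp://admin@203.0.113.10/images/vnf-image.qcow2 /var/third-party/images/vnf-image.qcow2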

Preparing the Bootstrap Configuration

You can bootstrap a VNF by attaching either a CD or a USB storage device that contains a bootstrap-config ISO file.

A bootstrap configuration file must contain an initial configuration that makes the VNF accessible from an external controller and that accepts SSH, HTTP, or HTTPS connections from that controller for further runtime configuration.

An ISO disk image must be created offline for the bootstrap configuration file as follows:
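The detailed steps are not reproduced here. As a minimal sketch, assuming a Linux host with the genisoimage/mkisofs tool and a directory that holds the bootstrap configuration file (file and directory names are placeholders), you can build the ISO offline and then copy it to the device:

  mkdir bootstrap-config
  cp juniper.conf bootstrap-config/
  mkisofs -l -o vnf-bootstrap.iso bootstrap-config/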

Launching a VNF

You can launch a VNF by configuring the VNF name, and specifying either the path to an XML descriptor file or to an image.

While launching a VNF with an image, two VNF interfaces are added by default. These interfaces are required for the management and internal networks. For these two interfaces, target Peripheral Component Interconnect (PCI) addresses such as 0000:00:03:0 and 0000:00:04:0 are reserved.

To launch a VNF using an XML descriptor file:

To launch a VNF using an image:

To specify a UUID for the VNF:

uuid is an optional parameter; it is recommended that you allow the system to allocate a UUID for the VNF.
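The command bodies for these steps are not shown above. The following is a minimal configuration sketch, assuming a VNF named vnf1, placeholder file paths, and an illustrative UUID; configure either the init-descriptor or the image statement, not both:

  [edit]
  user@host# set virtual-network-functions vnf1 init-descriptor /var/third-party/vnf1.xml
  user@host# set virtual-network-functions vnf1 image /var/third-party/images/vnf1.qcow2
  user@host# set virtual-network-functions vnf1 uuid 4c5bc1ca-e253-4f8a-b901-1d29fbabb31c
  user@host# commit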

Note:
  • You cannot change the init-descriptor or image configuration after you save and commit it. To change the init-descriptor or image for a VNF, you must delete the VNF and create it again.

  • Commit checks apply only to VNF configurations that are based on an image specified through the JDM CLI, and not to VNF configurations that are based on an init-descriptor XML file.

Note:

For creating VNFs using image files, ensure the following:

  • You must use unique files for the image, disk, and USB devices that are used within a VNF or across VNFs, except for an iso9660 type file, which can be attached to multiple VNFs.

  • A file specified as image in raw format should be a block device with a partition table and a boot partition.

  • A file specified as image in qcow2 format should be a valid qcow2 file.

Allocating Resources for a VNF

This topic covers the process of allocating various resources to a VNF.

Specifying CPU for VNF

To specify the number of virtual CPUs that are required for a VNF, type the following command:

To pin a virtual CPU to a physical CPU, type the following command:

The physical CPU number can either be a number or a range. By default, a VNF is allocated with one virtual CPU that is not pinned to any physical CPU.

Note:

You cannot change the CPU configuration of a VNF when the VNF is in running state. Restart the VNF for changes to take effect.

To enable hardware-virtualization or hardware-acceleration for VNF CPUs, type the following command:
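A minimal sketch of these three statements follows, assuming a VNF named vnf1; the per-vCPU pinning form shown here is an assumption and might differ slightly in your release:

  [edit]
  user@host# set virtual-network-functions vnf1 virtual-cpu count 2
  user@host# set virtual-network-functions vnf1 virtual-cpu 0 physical-cpu 2
  user@host# set virtual-network-functions vnf1 virtual-cpu features hardware-virtualization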

Allocating Memory for a VNF

To specify the maximum primary memory that the VNF can use, enter the following command:

By default, 1 GB of memory is allocated to a VNF.

Note:

You cannot change the memory configuration of a VNF if the VNF is in running state. Restart the VNF for changes to take effect.

To allocate hugepages for a VNF, type the following command:

page-size is an optional parameter. Possible values are 1024 for a page size of 1 GB and 2 for a page size of 2 MB. The default value is 1024.
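For example, assuming a VNF named vnf1, the memory and hugepages statements referenced above can be sketched as follows; the size value and its unit are illustrative, so verify them for your release:

  [edit]
  user@host# set virtual-network-functions vnf1 memory size 4194304
  user@host# set virtual-network-functions vnf1 memory features hugepages page-size 1024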

Note:

Configuring hugepages is recommended only if enhanced orchestration mode is enabled. If enhanced orchestration mode is disabled and the VNF requires hugepages, the VNF XML descriptor file must contain the XML tag with the hugepages configuration.

Note:

For VNFs that are created using image files, there is a maximum limit on the total memory that can be configured across all user-defined VNFs, including both hugepages-based and non-hugepages-based memory.

Table 4 lists the maximum hugepage memory that can be reserved for the various NFX250 models.

Table 4: Recommended Hugepage Memory for the NFX250 Devices

Model         Memory    Maximum Hugepage Memory (GB)    Maximum Hugepage Memory (GB) for CSO-SDWAN
NFX250-S1     16 GB     8                               -
NFX250-S1E    16 GB     8                               13
NFX250-S2     32 GB     24                              13
NFX250-LS1    16 GB     8                               -

Configuring VNF Storage Devices

To add a virtual CD or to update the source file of a virtual CD, enter the following command:

To add a virtual USB storage device, enter the following command:

To attach an additional hard disk, enter the following command:

To delete a virtual CD, USB storage device, or a hard disk from the VNF, enter the following command:
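The command bodies for these operations are not shown above. The following sketch assumes a VNF named vnf1 and placeholder storage names and file paths; the cdrom and usb type keywords are assumptions based on the device types described in this section:

  [edit]
  user@host# set virtual-network-functions vnf1 storage cd1 type cdrom source file /var/third-party/vnf1-config.iso
  user@host# set virtual-network-functions vnf1 storage usb1 type usb source file /var/third-party/vnf1-data.img
  user@host# set virtual-network-functions vnf1 storage vdb type disk file-type qcow2 source file /var/third-party/vnf1-disk.qcow2
  user@host# delete virtual-network-functions vnf1 storage vdb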

Note:
  • After attaching or detaching a CD from a VNF, you must restart the device for changes to take effect. The CD detach operation fails if the device is in use within the VNF.

  • VNF supports one virtual CD, one virtual USB storage device, and multiple virtual hard disks.

  • You can update the source file in a CD or USB storage device while the VNF is in running state.

  • You must save the source file in the /var/third-party directory and the file must have read and write permission for all users.

Note:

For VNFs created using image files, ensure the following:

  • A file specified as a hard disk in raw format should be a block device with a partition table.

  • A file specified as a hard disk in qcow2 format should be a valid qcow2 file.

  • A file specified as USB should be a block device with a partition table, or an iso9660 type file.

  • A file specified as CD-ROM should be a block device of type iso9660.

  • If a VNF has an image specified with bus-type=ide, it should not have any device attached with name hda.

  • If a VNF has an image specified with bus-type=virtio, then it should not have any device attached with name vda.

Configuring VNF Interfaces and VLANs

You can create a VNF interface and attach it to a physical NIC port, a management interface, or VLANs. A consolidated configuration sketch follows this list.

  1. To attach a VNF interface to a physical interface by using the SR-IOV virtual function:

    vlan-id is optional and it is the port VLAN ID.

  2. To create VLAN:
  3. To attach a VNF interface to a VLAN:
    Note:
    • The interfaces attached to the VNF are persistent across VNF restarts.

    • If the VNF supports hot-plugging, you can attach the interfaces when the VNF is in running state. Otherwise, add the interfaces, and then restart the VNF.

    • To map interfaces to VLANs, you must enable hugepages by using the memory features hugepages command option.

    • You cannot change the mapping of a VNF interface when the VNF is in running state.

  4. To map virtual interfaces with physical interfaces:

    Mapping virtual interfaces to physical interfaces (ge-0/0/n and xe-0/0/n) ensures that the state of a virtual interface matches the state of the physical interface to which it is mapped. For example, if a physical interface is down and the virtual interface is up, the virtual interface is brought down within 5 seconds of detection. One or more virtual interfaces can be mapped to one or more physical interfaces.

  5. To connect VNF interfaces to the internal management network:
    Note:

    Before connecting VNF interfaces to the internal management network, you must configure the VNFs by using the set virtual-network-function vnf-name no_default_interface command.

    Any VNF interface, including eth0 and eth1, can be assigned the internal or out-of-band management attribute. However, among all connected VNF interfaces, only one can have out-of-band-management and only one can have internal-management, and you cannot assign both attribute values to the same VNF interface. For example, eth5 can have management internal while eth0 has management out-of-band.

  6. To specify the target PCI address for a VNF interface:

    You can use the target PCI address to rename or reorganize interfaces within the VNF.

    For example, a Linux-based VNF can use udev rules within the VNF to name the interface based on the PCI address.

    Note:
    • The target PCI-address string should be in the following format:

      0000:00:<slot>:0, which represents domain:bus:slot:function. The slot must be different for each VNF interface. The values for domain, bus, and function must be zero.

    • You cannot change the target PCI-address of VNF interface when the VNF is in running state.

  7. To delete a VNF interface:
    Note:
    • To delete an interface, you must stop the VNF, delete the interface, and start the VNF.

    • After attaching or detaching a virtual function, you must restart the VNF for changes to take effect.

    • eth0 and eth1 are reserved for default VNF interfaces that are connected to the internal network and out-of-band management network. Therefore, the configurable VNF interface names start from eth2.

    • Within a VNF, the interface names can be different, based on guest OS naming convention. VNF interfaces that are configured in JDM might not appear in the same order within the VNF.

    • Within the VNF, you must use the target PCI addresses to map to the VNF interfaces that are configured in JDM and name them accordingly.
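The following consolidated sketch illustrates the interface and VLAN statements referenced in the steps above, assuming a VNF named vnf1 and a VLAN named vlan100; the SR-IOV, management, and pci-address statement forms are assumptions and should be verified against your release:

  [edit]
  user@host# set vlans vlan100 vlan-id 100
  user@host# set virtual-network-functions vnf1 interfaces eth2 mapping interface ge-0/0/1 virtual-function vlan-id 100
  user@host# set virtual-network-functions vnf1 interfaces eth3 mapping vlan members vlan100
  user@host# set virtual-network-functions vnf1 interfaces eth4 management internal
  user@host# set virtual-network-functions vnf1 interfaces eth3 pci-address 0000:00:06:0
  user@host# delete virtual-network-functions vnf1 interfaces eth3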

Managing VNF States

By default, the VNF is autostarted when you commit the VNF configuration. A sketch of the following commands appears after this list.

  1. To disable an autostart of a VNF on a VNF config commit:
  2. To manually start a VNF:
  3. To stop a VNF:
  4. To restart a VNF:
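A minimal sketch of these operations, assuming a VNF named vnf1; the request commands match those listed in Table 5, while the no-autostart statement name is an assumption:

  [edit]
  user@host# set virtual-network-functions vnf1 no-autostart

  user@host> request virtual-network-functions vnf1 start
  user@host> request virtual-network-functions vnf1 stop
  user@host> request virtual-network-functions vnf1 restart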

Managing VNF MAC Addresses

VNF interfaces that are defined using the CLI or specified in an init-descriptor XML file are assigned a globally unique and persistent MAC address. A common pool of 64 MAC addresses is used to assign the MAC addresses. You can configure a MAC address other than one from the common pool; such an address is not overwritten. A configuration sketch follows the notes below.

  1. To configure a specific MAC address for a VNF interface:
  2. To delete the MAC address configuration of a VNF interface:
Note:
  • To delete or modify the MAC address of a VNF interface, you must stop the VNF, make the necessary changes, and then start the VNF.

  • The MAC address specified for a VNF interface can be either a system MAC address or a user-defined MAC address.

  • The MAC address specified from the system MAC address pool must be unique for VNF interfaces.
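For example, assuming a VNF named vnf1 and an illustrative locally administered MAC address (the mac-address statement placement follows the pattern of the other interface statements in this topic):

  [edit]
  user@host# set virtual-network-functions vnf1 interfaces eth2 mac-address 52:54:00:12:34:56
  user@host# delete virtual-network-functions vnf1 interfaces eth2 mac-address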

Managing MTU

The maximum transmission unit (MTU) is the largest data unit that can be forwarded without fragmentation. You can configure either 1500 bytes or 2048 bytes as the MTU size. The default MTU value is 1500 bytes. A configuration sketch follows the notes in this section.

Note:

MTU configuration is supported only on VLAN interfaces.

  1. To configure MTU on a VNF interface:
    Note:

    You must restart the VNF after configuring MTU if the VNF does not support hot-plugging functionality.

  2. To delete MTU of a VNF interface:
    Note:

    After you delete the MTU, the MTU of the VNF interface is reset to 1500 bytes.

Note:
  • MTU size can be either 1500 bytes or 2048 bytes.

  • The maximum number of VLAN interfaces on the OVS that can be configured in the system is 20.

  • The maximum size of the MTU for a VNF interface is 2048 bytes.
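For example, assuming a VNF named vnf1 (the mtu statement placement follows the pattern of the other interface statements in this topic):

  [edit]
  user@host# set virtual-network-functions vnf1 interfaces eth2 mtu 2048
  user@host# delete virtual-network-functions vnf1 interfaces eth2 mtu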

Accessing a VNF from JDM

You can access a VNF from JDM using either SSH or a VNF console, as shown in the example after the notes below.

  1. To access a VNF using SSH:
  2. To access a VNF using a virtual console:
Note:
  • Use ctrl-] to exit the virtual console.

  • Do not use a Telnet session to run the command.
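For example, for a VNF named vnf1 (these request commands also appear in Table 5):

  user@host> request virtual-network-functions vnf1 ssh
  user@host> request virtual-network-functions vnf1 console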

Viewing List of VNFs

To view the list of VNFs:

The Liveliness output field of a VNF indicates whether the IP address of the VNF is reachable from JDM. The default IP address of the liveliness bridge is 192.0.2.1/24.

Displaying the VNF Details

To display VNF details:

Deleting a VNF

To delete a VNF:

Note:

The VNF image remains in the disk even after you delete the VNF.

Non-Root User Access for VNF Console

You can use Junos OS to create, modify, or delete VNFs on the NFX Series devices.

Junos OS CLI allows the following management operations on VNFs:

Table 5: VNF Management Operations

Operation        CLI
start            request virtual-network-functions <vnf-name> start
stop             request virtual-network-functions <vnf-name> stop
restart          request virtual-network-functions <vnf-name> restart
console access   request virtual-network-functions <vnf-name> console [force]
ssh access       request virtual-network-functions <vnf-name> ssh [user-name <user-name>]
telnet access    request virtual-network-functions <vnf-name> ssh [user-name <user-name>]

Table 6 lists the user access permissions for the VNF management operations:

Table 6: User Access Permissions for VNF Management Operations Before Junos OS 24.1R1

Operation        root class user               super-user class user                  operator class user     read-only class user
start            command available and works   command available and works            command not available   command not available
stop             command available and works   command available and works            command not available   command not available
restart          command available and works   command available and works            command not available   command not available
console access   command available and works   command available; but not supported   command not available   command not available
ssh access       command available and works   command available; but not supported   command not available   command not available
telnet access    command available and works   command available; but not supported   command not available   command not available

Starting in Junos OS 24.1R1, the Junos OS CLI allows management operations on VNFs for non-root users.

A new Junos OS user permission, vnf-operation, makes the request virtual-network-functions CLI hierarchy available to Junos OS users who do not belong to the root or super-user classes.

You can add this permission to a custom user class by using the vnf-operation statement at the [edit system login class custom-user permissions] hierarchy level.
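For example, assuming a hypothetical custom class named vnf-admin and a user assigned to that class:

  [edit]
  user@host# set system login class vnf-admin permissions vnf-operation
  user@host# set system login user vnf-user class vnf-admin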

Table 7 lists the VNF management options available to a user belonging to a custom Junos OS user class with the vnf-operation permission.

Table 7: User Access Permissions for VNF Management Operations After Junos OS 24.1R1

Operation        root user                     super-user class user          User of a custom class with vnf-operation permission
start            command available and works   command available and works    command available and works
stop             command available and works   command available and works    command available and works
restart          command available and works   command available and works    command available and works
console access   command available and works   command available and works    command available and works
ssh access       command available and works   command available and works    command available and works

Accessing the VNF Console

Starting in Junos OS 24.1R1, the following message is displayed when you access the console initially:

The messages Trying 192.168.1.1... and Connected to 192.168.1.1. come from the Telnet client that is launched by the Junos OS CLI command request virtual-network-functions <vnf-name> console.

Note:

The IP addresses present in the message cannot be replaced with the name of the VNF.

Exiting the VNF Console

Starting in Junos OS 24.1R1, when you enter the escape sequence ^], the console session terminates and the telnet command prompt is displayed.

You must enter quit or close (or the abbreviations q or c) at the telnet command prompt to exit and return to the Junos OS command prompt.

Creating the vSRX Virtual Firewall VNF on the NFX250 Platform

vSRX Virtual Firewall is a virtual security appliance that provides security and networking services in virtualized private or public cloud environments. It can be run as a virtual network function (VNF) on the NFX250 platform. For more details on vSRX Virtual Firewall, see the product documentation page on the Juniper Networks website at https://www.juniper.net/.

To activate the vSRX Virtual Firewall VNF from the Juniper Device Manager (JDM) command-line interface (a consolidated configuration sketch follows these steps):

  1. Allocate hugepages memory:
  2. Define VLANs required for vSRX Virtual Firewall VNF interfaces. For example:
  3. Define any glue VLANs required for the vSRX Virtual Firewall VNF interfaces. For example:
  4. Define vSRX Virtual Firewall VNF with vSRX Virtual Firewall image. For example:
  5. (Optional) Create the vSRX Virtual Firewall VNF with groups that contain custom configuration. For example:
  6. Map the vSRX Virtual Firewall VNF interfaces to VLANs or glue-VLANs. For example:
  7. Specify a mode for the vSRX Virtual Firewall VNF interfaces. The interface mode can be either access or trunk mode. For example:
  8. Specify the maximum transmission unit (MTU) size for the media in bytes for vSRX Virtual Firewall VNF interfaces. MTU size can be either 1500 bytes or 2048 bytes. For example:
  9. Specify the target PCI address for the VNF interface. For example:
  10. At the CLI prompt, enter the commit command to activate the vSRX Virtual Firewall VNF.
  11. Attach the ISO to vSRX Virtual Firewall as a CD-ROM device and start vSRX Virtual Firewall.
    Note:

    If a vSRX Virtual Firewall instance is running, you must restart the instance so that the new configuration is applied from the CD-ROM.

  12. (Optional) To create the vSRX Virtual Firewall VNF with a custom bootstrap configuration, create an ISO image with the configuration file juniper.conf.
    Note:

    Ensure that the configuration file is named juniper.conf.

  13. Verify that the vSRX Virtual Firewall VNF initialized correctly. You can use the JDM CLI or Linux virsh commands to verify.

    Using the Linux virsh command

    You can see that the vSRX Virtual Firewall VNF is active.

  14. An SSH connection to vSRX Virtual Firewall works only if the Liveliness field in the show output displays alive (that is, if the bootstrap ISO configuration was used to enable DHCP on the fxp0 interface of vSRX Virtual Firewall so that it obtains the internal management IP address). If the liveliness status for the vSRX Virtual Firewall VNF is down, refer to Configuring the Internal Management IP Address of vSRX VNF.

    To log on to the vSRX Virtual Firewall VNF, enter the command run ssh vsrx.

  15. (Optional) Verify the vSRX Virtual Firewall VNF details.
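The per-step examples are not shown above. The following is a consolidated sketch, assuming a VNF named vsrx, a VLAN named vlan100, and placeholder image and ISO file names; the system-level hugepages statement and the trunk-mode form are assumptions, so verify the exact syntax for your release:

  [edit]
  user@host# set system memory hugepages page-size 1024 page-count 8
  user@host# set vlans vlan100 vlan-id 100
  user@host# set virtual-network-functions vsrx image /var/third-party/images/vsrx.qcow2
  user@host# set virtual-network-functions vsrx interfaces eth2 mapping vlan mode trunk
  user@host# set virtual-network-functions vsrx interfaces eth2 mapping vlan members vlan100
  user@host# set virtual-network-functions vsrx interfaces eth2 mtu 2048
  user@host# set virtual-network-functions vsrx interfaces eth2 pci-address 0000:00:06:0
  user@host# set virtual-network-functions vsrx storage cd1 type cdrom source file /var/third-party/vsrx-bootstrap.iso
  user@host# commit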

Configuring the vMX Virtual Router as a VNF on NFX250

The vMX router is a virtual version of the Juniper MX Series 5G Universal Routing Platform. To quickly migrate physical infrastructure and services, you can configure vMX as a virtual network function (VNF) on the NFX250 platform. For more details on the configuration and management of vMX, see vMX Overview.

Before you configure the VNF, check the system inventory and confirm that the required resources are available. vMX as VNF must be designed and configured so that its resource requirements do not exceed the available capacity of the system. Ensure that a minimum of 20 GB space is available on NFX250.

To configure vMX as VNF on NFX250 using the Juniper Device Manager (JDM) command-line interface (CLI):

  1. Download the nested image available at vmx-nested-<release>.qcow2.
  2. Define VLANs required for the vMX VNF interfaces. For example:
  3. Define the glue VLANs required for the vMX VNF interfaces. For example:
  4. Define the vMX VNF with the vMX image. For example:

    user@host# set virtual-network-functions vmx image /var/third-party/images/vmx-nested-<release>.qcow2

  5. Specify the maximum primary memory that the VNF can use. For optimal performance, it is recommended to configure at least 5 GB of memory.

    user@host# set virtual-network-functions vmx memory size <n>

  6. Specify the number of virtual CPUs for the virtual machine. For the vMX VNF, you need a minimum of 4 virtual CPU cores.

    user@host# set virtual-network-functions vmx virtual-cpu count <n> features hardware-virtualization

  7. Add an additional data drive that stores the configuration parameters.

    user@host# set virtual-network-functions vmx storage vdc type disk file-type vmx-nested-<release>.qcow2

  8. Map the vMX VNF interfaces to VLANs or glue-VLANs.

    user@host# set virtual-network-functions vmx interfaces eth2 description wan0

    user@host# set virtual-network-functions vmx interfaces eth2 mapping vlan members <vlan>

    user@host# set virtual-network-functions vmx interfaces eth3 description wan1

    user@host# set virtual-network-functions vmx interfaces eth3 mapping vlan members <vlan>

  9. At the CLI prompt, enter the commit command to activate vMX VNF.

    user@host# commit

  10. Verify if the vMX VNF has been configured correctly on NFX250.

    root@jdm# run show virtual-network-functions

    If you use virsh, enter the virsh list command.

    The output shows that the vMX VNF is active.

  11. To upgrade the vMX VNF, deactivate the VNF configuration and select the new image copied to the /var/third-party/images/vmx-nested-<release>.qcow2 location. Then reactivate the VNF configuration.

  12. For in-band management network connections, the assigned management port is fxp0. For out-of-band management, ge-0/0/0 is used, and ge-0/0/1 is used for WAN interfaces.

Virtual Route Reflector on NFX250 Overview

The virtual Route Reflector (vRR) feature allows you to implement route reflector capability using a general purpose virtual machine that can be run on a 64-bit Intel-based blade server or appliance. Because a route reflector works in the control plane, it can run in a virtualized environment. A virtual route reflector on an Intel-based blade server or appliance works the same as a route reflector on a router, providing a scalable alternative to full mesh internal BGP peering.

Starting in Junos OS Release 17.3R1, you can implement the virtual route reflector (vRR) feature on the NFX250 Network Services platform. The Juniper Networks NFX250 Network Services Platform comprises the Juniper Networks NFX250 devices, which are Juniper Networks' secure, automated, software-driven customer premises equipment (CPE) devices that deliver virtualized network and security services on demand. NFX250 devices use the Junos Device Manager (JDM) for virtual machine (VM) lifecycle and device management, and for a host of other functions. The JDM CLI is similar to the Junos OS CLI in look and feel, and provides the same added-value facilities as the Junos OS CLI.

Note:
  • Starting in vRR Junos OS Release 20.1R1, both Linux Bridge (LB) and enhanced orchestration (EO) modes are supported for vRR. It is recommended to instantiate vRR VNF in EO mode.

  • Support for LB mode on NFX250 devices ended in NFX Junos OS Release 18.4.

  • Support for NFX-2 software architecture on NFX250 devices ended in NFX Junos OS Release 19.1R1.

  • Starting in NFX Host Release 21.4R2 and vRR Junos OS Release 21.4R2, you can deploy a vRR VNF on an NFX250 NextGen device. Only enhanced orchestration (EO) mode is supported for vRR.

Benefits of vRR

vRR has the following benefits:

  • Scalability: By implementing the vRR feature, you gain scalability improvements, depending on the server core hardware on which the feature runs. Also, you can implement virtual route reflectors at multiple locations in the network, which helps scale the BGP network with lower cost. The maximum routing information base (RIB) scale with IPv4 routes on NFX250 is 20 million.

  • Faster and more flexible deployment: You install the vRR feature on an Intel server, using open source tools, which reduces your router maintenance.

  • Space savings: Hardware-based route reflectors require central office space. You can deploy the virtual route reflector feature on any server that is available in the server infrastructure or in the data centers, which saves space.

For more information about vRR, refer to the Virtual Route Reflector (vRR) Documentation.

Software Requirements for vRR on NFX250

The following software components are required to support vRR on NFX250:

  • Juniper Device Manager: The Juniper Device Manager (JDM) is a low-footprint Linux container that supports Virtual Machine (VM) lifecycle management, device management, Network Service Orchestrator module, service chaining, and virtual console access to VNFs including vSRX Virtual Firewall, vjunos, and now vRR as a VNF.

  • Junos Control Plane: Junos Control Plane (JCP) is the Junos VM running on the hypervisor. You can use JCP to configure the network ports of the NFX250 device, and JCP runs by default as vjunos0 on NFX250. You can log on to JCP from JDM using the SSH service and the command-line interface (CLI) is the same as Junos.

Configuring vRR as a VNF on NFX250

You can configure vRR as a VNF in either Linux Bridge (LB) mode or Enhanced Orchestration (EO) mode.

Configuring vRR VNF on NFX250 in Linux Bridge Mode

Configuring Junos Device Manager (JDM) for vRR

By default, the Junos Device Manager (JDM) virtual machine comes up after NFX250 is powered on. By default, enhanced orchestration mode is enabled on JDM. While configuring vRR, disable enhanced orchestration mode, remove the interfaces configuration, and reboot the NFX device.

To configure the Junos Device Manager (JDM) virtual machine for vRR, perform the following steps:

  1. In configuration mode, at the [edit] hierarchy level, disable enhanced orchestration. By default, enhanced orchestration mode is enabled on JDM.
  2. Delete interface configuration.
  3. Set the JDM root password.
  4. Commit the configuration using the commit command and reboot the system for the configuration to take effect.
  5. After the system reboots, the default bridges configuration is available on JDM. Configure the JDM root password, management port IP, and add default routes.
    Note:

    After the system reboots, if the group groups1604-configs is not present in the configuration, include it so that the default bridges configuration is available on JDM.

Verifying that the Management IP is Configured

Purpose

Ensure that the management IP address has been configured accurately.

Action

From configuration mode, enter the show interface command.

Verifying that the Default Routes are Configured

Purpose

Ensure that the default routes are configured for DNS and gateway access.

Action

From configuration mode, enter the show route command.

Configuring Junos Control Plane (JCP) for vRR

By default, the Junos Control Plane (JCP) VM comes up after NFX250 is powered on. The JCP virtual machine controls the front panel ports on the NFX250 device. VLANs provide bridging between the virtual route reflector VM interfaces and JCP by using the sxe ports. The front panel ports are configured as part of the same VLAN bridging as the vRR ports. As a result, packets are transmitted or received through these bridged ports on JCP rather than directly on the vRR VNF ports.

To configure JCP for vRR, perform the following steps:

  1. In the operational mode, connect to the JCP virtual machine.
  2. Configure the front panel ports with the same VLAN bridging of vRR VNF ports. In this example, the front panel ports, ge-0/0/1, ge-0/0/10, and xe-0/0/12 are mapped with vRR VNF interfaces, em1, em2, and em3. The ge-0/0/1 (front panel port) maps to the sxe-0/0/0 (internal interface) which maps to em1 (vRR VNF interface). They are all part of the same VLAN (VLAN ID 100).
  3. Configure the VLANs and add the physical interface and the service interface as members of the same VLAN. In this example, we have 3 VLANs (100, 101, and 102).
  4. Configure MTU:
    Note:

    The maximum MTU that you can configure on JCP and vRR VNF interfaces is 1518 bytes.

  5. Verify that you have configured the mapping of interfaces accurately.

Launching vRR

You can launch the vRR VNF as a virtualized network function (VNF) using the XML configuration templates that are part of the vRR image archive.

  1. To launch the vRR VNF, use the virsh command and specify the VM name.

    where vrr is the virsh domain name specified in vrr.xml.

  2. To create a vRR VNF with 24 GB of memory, 2 virtual CPUs, and 2 VNF interfaces (em2 and em3), as shown in this example, you can use this sample configuration.
    Note:

    To create a vRR VNF, use the default memory allocation mode and not hugepages.

Enabling Liveliness Detection of vRR VNF from JDM

Liveliness of a VNF indicates if the IP address of the VM is accessible to the Junos Device Manager (JDM). If the liveliness of the VM is down, it implies that the VM is not reachable from JDM. You can view the liveliness of VMs using the show virtual-machines command. By default, the liveliness of vRR VNF is shown as down. Before creating the vRR VNF, it is recommended that you enable liveliness detection in JDM.

To enable liveliness detection of the vRR VNF from JDM, perform the following steps:

  1. To verify liveliness detection of the vRR VNF from JDM, issue the following command:
    Note:

    By default, the liveliness of vRR VNF is shown as down. You must enable liveliness detection of vRR VNF from the JDM.

  2. Create a dummy interface on the internal bridge, virbr0, by modifying the network interface stanza of the VM template for the vRR VNF, as shown below. Base the PCI details (bus, slot, and function) on your existing interfaces, using the next available number, especially for the slot number. (An illustrative stanza is sketched after these steps.)

    This is a sample network interface setting.

    You must change the settings as follows:

    When you modify the settings, make sure that:

    • The interface type is 'bridge'.

    • The model type is e1000 to prevent problems with VLAN subinterfaces.

    • The PCI resource for the address is unique for this VM.

  3. To identify the MAC address associated with the virbr0 interface, use the virsh dumpxml vrr-vm-name command.

    This is the MAC address assigned by the vRR VNF to the virbr0 interface.

  4. To assign an IP address to the vRR VNF interface connected to the virbr0 interface, you must use an IP address that is part of the internal network. In this example, the MAC address 52:54:00:c4:fe:8d, assigned by the vRR VNF, is associated with the em4 interface of the vRR VNF. So, you must configure the em4 interface with the IP address as shown in this step.

    The MAC address assigned by the vRR VNF in this example is associated with em4 interface.

  5. In Junos Device Manager (JDM), update the file /etc/hosts with the IP address and vRR VNF name.
    Note:

    When you update the /etc/hosts file, include a space between the IP address and the vRR VNF name. Do not include tab spaces.

  6. Ping the IP address of the vRR VNF from JDM to verify that the internal bridge virbr0 is accessible from JDM.
  7. To verify liveliness detection of the vRR VNF from JDM, issue the following command:

    Now, the liveliness status of vRR VNF is shown as alive.
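The sample network interface stanza referenced in step 2 is not reproduced above. The following is an illustrative libvirt domain XML sketch only; the PCI slot value and the surrounding details are placeholders that you must adapt to your VM template:

  <interface type='bridge'>
    <source bridge='virbr0'/>
    <model type='e1000'/>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
  </interface>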

Configuring vRR VNF on NFX250 in Enhanced Orchestration Mode

Before you configure the vRR VNF, check the system inventory and confirm that the required resources are available by using the show system visibility command. The vRR VNF must be designed and configured so that its resource requirements do not exceed the available capacity of the system.

You can instantiate the vRR VNF in Enhanced Orchestration (EO) mode by using the JDM CLI configuration and without using the XML descriptor file. EO mode uses Open vSwitch (OVS) as NFV backplane for bridging the interfaces.

To activate the vRR VNF from the Juniper Device Manager (JDM) CLI (a consolidated configuration sketch follows these steps):

  1. Download the qcow2.img vRR image to the /var/third-party/images/ folder.
  2. Define the vRR VNF. For example:
  3. Define VLANs required for the vRR VNF interfaces. For example:
  4. Allocate hugepages as memory for the vRR VNF. For example:
  5. Specify the number of virtual CPUs required for the vRR VNF. It is recommended that at least two virtual CPUs are assigned to a vRR VNF. For example:
  6. Connect a virtual CPU to a physical CPU. For example:
  7. Configure the maximum transmission unit (MTU) on the vRR interface (eth2). For example:
  8. Configure the LAN-side internal-facing interface as a trunk port and add it to the LAN-side VLAN. For example:
  9. Configure the maximum transmission unit (MTU) on the vRR interface (eth3). For example:
    Note:

    MTU size can be either 1500 bytes or 2048 bytes.

  10. Configure the LAN-side internal-facing interface as a trunk port and add it to the LAN-side VLAN. For example:
  11. Specify the memory allocation for the vRR VNF. It is recommended that at least 4GB memory is allocated to the vRR VNF. For example:
  12. Configure hugepages for memory requirements. For example:
  13. Commit the configuration to activate vRR VNF. For example:

    After you commit the configuration, VNF takes some time to boot. The first interface (em0) is automatically given an IP address by DHCP from JDM for liveliness.

  14. Verify that the VNF is up. For example:
  15. (Optional) Verify the vRR VNF details. For example:
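The per-step examples are not shown above. The following consolidated sketch assumes a vRR VNF named vrr, a VLAN named vlan100, and a placeholder image name; statement forms follow the earlier examples in this topic and should be verified for your release:

  [edit]
  user@host# set virtual-network-functions vrr image /var/third-party/images/vrr.qcow2
  user@host# set vlans vlan100 vlan-id 100
  user@host# set virtual-network-functions vrr virtual-cpu count 2
  user@host# set virtual-network-functions vrr virtual-cpu 0 physical-cpu 2
  user@host# set virtual-network-functions vrr memory size 4194304
  user@host# set virtual-network-functions vrr memory features hugepages
  user@host# set virtual-network-functions vrr interfaces eth2 mapping vlan mode trunk
  user@host# set virtual-network-functions vrr interfaces eth2 mapping vlan members vlan100
  user@host# set virtual-network-functions vrr interfaces eth2 mtu 2048
  user@host# commit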

Configuring Cross-connect

The Cross-connect feature enables traffic switching between any two OVS interfaces such as VNF interfaces or physical interfaces such as hsxe0 and hsxe1 that are connected to the OVS. You can bidirectionally switch either all traffic or traffic belonging to a particular VLAN between any two OVS interfaces.

Note:

This feature does not support unidirectional traffic flow.

The Cross-connect feature supports the following:

  • Unconditional cross-connect between two VNF interfaces for all network traffic.

  • VLAN-based traffic forwarding between VNF interfaces supports the following functions:

    • Provides an option to switch traffic based on a VLAN ID.

    • Supports network traffic flow from trunk to access port.

    • Supports network traffic flow from access to trunk port.

    • Supports VLAN PUSH, POP, and SWAP operations.

To configure cross-connect:

  1. Configure VLANs:
  2. Configure VNFs:
  3. Configure cross-connect:
    • Configure VLAN-based cross-connect:

    • Configure unconditional cross-connect

    • Configure cross-connect with VLAN SWAP operation enabled:

    • Configure cross-connect with VLAN PUSH or POP operation enabled:

    • Configure native VLAN traffic on cross-connect

Configuring Analyzer VNF and Port-mirroring

The Port-mirroring feature allows you to monitor network traffic. If the feature is enabled on a VNF interface, the OVS system bridge sends a copy of all network packets of that VNF interface to the analyzer VNF for analysis. You can use the port-mirroring or analyzer JDM commands for analyzing the network traffic.

Note:
  • Port-mirroring is supported only on VNF interfaces that are connected to an OVS system bridge.

  • VNF interfaces must be configured before configuring port-mirroring options.

  • If the analyzer VNF is active when you configure port-mirroring, you must restart the analyzer VNF for the changes to take effect.

  • You can configure up to four input ports and only one output port for an analyzer rule.

  • Output ports must be unique in all analyzer rules.

  • After changing the configuration of the input VNF interfaces, you must deactivate and reactivate the analyzer rules that reference them, and restart the analyzer VNF.

To configure the analyzer VNF and enable port-mirroring:

  1. Configure the analyzer VNF:
  2. Enable port-mirroring of the network traffic in the input and output ports of the VNF interface and analyzer VNF:

Change History Table

Feature support is determined by the platform and release you are using. Use Feature Explorer to determine if a feature is supported on your platform.

Release    Description
17.3R1     Starting in Junos OS Release 17.3R1, you can implement the virtual route reflector (vRR) feature on the NFX250 Network Services platform.