Installing vMX on OpenStack

Read this topic to understand how to install a vMX instance in the OpenStack environment.

Preparing the OpenStack Environment to Install vMX

Make sure the openstackrc file is sourced before you run any OpenStack commands.

To prepare the OpenStack environment to install vMX, perform these tasks:

Creating the neutron Networks

You must create the neutron networks used by vMX before you start the vMX instance. The public network is the neutron network used for the management (fxp0) network. The WAN network is the neutron network on which the WAN interface for vMX is added.

To display the neutron network names, use the neutron net-list command.

Note:

You must identify and create the type of networks you need in your OpenStack configuration.

  • You can use these commands as one way to create the public network:

    For example:

  • For virtio, you can use these commands as one way to create the WAN network:

    For example:

  • For SR-IOV, you can use these commands as one way to create the WAN network:

    For example:
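
The command blocks for this list are not reproduced here. As an illustrative sketch only, the following neutron CLI commands show one way to create the three networks; the network names, physical network labels (physnet1, physnet2), VLAN IDs, and subnet ranges are placeholder assumptions that you must adapt to your environment.

  # Public (management/fxp0) network
  neutron net-create public --router:external=True
  neutron subnet-create public 10.92.0.0/24 --name public-subnet \
      --allocation-pool start=10.92.0.10,end=10.92.0.100

  # virtio WAN network on the VLAN provider network physnet1
  neutron net-create net-virtio --provider:network_type vlan \
      --provider:physical_network physnet1 --provider:segmentation_id 100
  neutron subnet-create net-virtio 10.10.10.0/24 --name net-virtio-subnet

  # SR-IOV WAN network on the VLAN provider network physnet2
  neutron net-create net-sriov --provider:network_type vlan \
      --provider:physical_network physnet2 --provider:segmentation_id 200
  neutron subnet-create net-sriov 10.20.20.0/24 --name net-sriov-subnet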

Preparing the Controller Node

Preparing the Controller Node for vMX

To prepare the controller node:

  1. Configure the controller node to enable Huge Pages and CPU affinity by editing the scheduler_default_filters parameter in the /etc/nova/nova.conf file. Make sure the following filters are present:

    Restart the scheduler service with this command.

    • For Red Hat: systemctl restart openstack-nova-scheduler.service

    • For Ubuntu (starting with Junos OS Release 17.2R1): service nova-scheduler restart

  2. Update the default quotas.
    Note:

    We recommend these default values, but you can use different values if they are appropriate for your environment. Make sure the default quotas have enough allocated resources.

    Verify the changes with the nova quota-defaults command.

  3. Make sure the heat package is 5.0.1-6 or later. This package is part of rhel-7-server-openstack-8-rpms.

    Verify the version using the rpm -qa | grep heat command.

    Update the heat package with this command.

    • For Red Hat: yum update openstack-heat-engine

    • For Ubuntu (starting with Junos OS Release 17.2R1): apt-get install heat-engine

  4. Make sure the lsb (redhat-lsb-core or lsb-release) and numactl packages are installed.
    • For Red Hat:

    • For Ubuntu (starting with Junos OS Release 17.2R1):
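
As a sketch of Steps 1, 2, and 4 (the full filter list, the quota values, and the package names are assumptions that can vary by release and environment):

  # /etc/nova/nova.conf (Step 1); the list shown is an example, and it is assumed
  # that NUMATopologyFilter and AggregateInstanceExtraSpecsFilter are among the
  # filters required by Step 1
  [DEFAULT]
  scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,NUMATopologyFilter,AggregateInstanceExtraSpecsFilter

  # Step 2: one way to raise the default quotas (values are examples only)
  nova quota-class-update --instances 100 --cores 500 --ram 1024000 default
  nova quota-defaults

  # Step 4: install the lsb and numactl packages
  yum install redhat-lsb-core numactl      # Red Hat
  apt-get install lsb-release numactl      # Ubuntu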

Configuring the Controller Node for virtio Interfaces

To configure the virtio interfaces:

  1. Enable the VLAN type driver by adding vlan to the type_drivers parameter in the /etc/neutron/plugins/ml2/ml2_conf.ini file.
  2. Add the bridge mapping to the /etc/neutron/plugins/ml2/ml2_conf.ini file by adding the following line:

    For example, use the following setting to add a bridge mapping for the physical network physnet1 mapped to the OVS bridge br-vlan.

  3. Configure the VLAN ranges used for the physical network in the /etc/neutron/plugins/ml2/ml2_conf.ini file, where physical-network-name is the name of the physical network (for example, physnet1) that you specify when you create the neutron network for the virtio WAN network.

    For example, use the following setting to configure the VLAN ranges used for the physical network physnet1.

  4. Restart the neutron server.
    • For Red Hat: systemctl restart neutron-server

    • For Ubuntu (starting with Junos OS Release 17.2R1): service neutron-server restart

  5. Add the OVS bridge that was mapped to the physical network and the virtio interface (eth2).

    For example, use the following commands to add OVS bridge br-vlan and eth2 interface:
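
As an illustrative sketch of Steps 1 through 5 (the section names are typical for an ML2/OVS setup, and physnet1, the VLAN range, br-vlan, and eth2 are examples; your ml2_conf.ini layout may differ):

  # /etc/neutron/plugins/ml2/ml2_conf.ini (Steps 1 through 3)
  [ml2]
  type_drivers = flat,vlan,vxlan            # add vlan to the existing list

  [ml2_type_vlan]
  network_vlan_ranges = physnet1:1:4094     # VLAN range is an example

  [ovs]
  bridge_mappings = physnet1:br-vlan

  # Step 5: add the OVS bridge and the virtio interface
  ovs-vsctl add-br br-vlan
  ovs-vsctl add-port br-vlan eth2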

Configuring the Controller Node for SR-IOV Interfaces

Note:

If you have more than one SR-IOV interface, you need one dedicated physical 10G interface for each additional SR-IOV interface.

Note:

In SR-IOV mode, communication between the Routing Engine (RE) and the Packet Forwarding Engine is enabled using virtio interfaces on a VLAN-provider OVS network. As a result, a given physical interface cannot be part of both the virtio and SR-IOV networks.

To configure the SR-IOV interfaces:

  1. Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file to add sriovnicswitch as a mechanism driver and the VLAN ranges used for the physical network.

    For example, use the following setting to configure the VLAN ranges used for the physical network physnet2.

    If you add more SR-IOV ports, you must add the VLAN range used for each physical network (separated by a comma). For example, use the following setting when configuring two SR-IOV ports.

  2. Edit the /etc/neutron/plugins/ml2/ml2_conf_sriov.ini file to add details about PCI devices.
  3. Add --config-file /etc/neutron/plugins/ml2/ml2_conf_sriov.ini to the neutron server service file, as highlighted.
    • For Red Hat:

      Edit the /usr/lib/systemd/system/neutron-server.service file as highlighted.

      Use the systemctl restart neutron-server command to restart the service.

    • For Ubuntu (starting with Junos OS Release 17.2R1):

      Edit the /etc/init/neutron-server.conf file as highlighted.

      Use the service neutron-server restart command to restart the service.

  4. To allow proper scheduling of SR-IOV devices, the compute scheduler must use the FilterScheduler with the PciPassthroughFilter filter.

    Make sure the PciPassthroughFilter filter is configured in the /etc/nova/nova.conf file on the controller node.

    Restart the scheduler service.

    • For Red Hat: systemctl restart openstack-nova-scheduler

    • For Ubuntu (starting with Junos OS Release 17.2R1): service nova-scheduler restart
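
As an illustrative sketch of Steps 1 through 4 (the physical network names, VLAN ranges, and PCI vendor:device ID are assumptions that you must adapt to your NICs and release):

  # /etc/neutron/plugins/ml2/ml2_conf.ini (Step 1)
  [ml2]
  mechanism_drivers = openvswitch,sriovnicswitch

  [ml2_type_vlan]
  network_vlan_ranges = physnet1:1:4094,physnet2:1:4094

  # /etc/neutron/plugins/ml2/ml2_conf_sriov.ini (Step 2); 8086:10ed is the
  # Intel 82599 virtual function, so adjust the ID for your NIC
  [ml2_sriov]
  supported_pci_vendor_devs = 8086:10ed

  # Step 3 (Red Hat): append the extra option to the existing ExecStart line in
  # /usr/lib/systemd/system/neutron-server.service, for example:
  #   ExecStart=/usr/bin/neutron-server ... --config-file /etc/neutron/plugins/ml2/ml2_conf_sriov.ini

  # /etc/nova/nova.conf on the controller (Step 4); keep the existing filters
  scheduler_default_filters = ...,PciPassthroughFilter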

Preparing the Compute Nodes

Preparing the Compute Node for vMX

Note:

You no longer need to configure the compute node to pass metadata to the vMX instances by including the config_drive_format=vfat parameter in the /etc/nova/nova.conf file.

To prepare the compute node:

  1. Configure each compute node to support Huge Pages at boot time and reboot.
    • For Red Hat: Add the Huge Pages configuration.

      Use the mount | grep boot command to determine the boot device name.

    • For Ubuntu (starting with Junos OS Release 17.2R1): Add the Huge Pages configuration to /etc/default/grub under the GRUB_CMDLINE_LINUX_DEFAULT parameter.

    After the reboot, verify that Huge Pages are allocated.

    The number of Huge Pages depends on the amount of memory for the VFP, the size of Huge Pages, and the number of VFP instances. To calculate the number of Huge Pages: (memory-for-vfp / huge-pages-size) * number-of-vfp

    For example, if you run four vMX instances (four VFPs) in performance mode with 12G of memory per VFP and a Huge Pages size of 2M, the formula gives (12G / 2M) * 4, or 24576 Huge Pages.

    Note:

    Starting in Junos OS Release 15.1F6 and in later releases, performance mode is the default operating mode. For details, see Enabling Performance Mode or Lite Mode.

    Note:

    Ensure that you have enough physical memory on the compute node; it must be greater than the amount of memory allocated to Huge Pages, because applications that do not use Huge Pages are limited to the memory remaining after the Huge Pages allocation. For example, if you allocate 24576 Huge Pages with a 2M Huge Page size, you need 24576 * 2M, or 48G, of memory for Huge Pages.

    You can use the vmstat -s command and look at the total memory and used memory values to verify how much memory is left for other applications that do not use Huge Pages.

  2. Enable IOMMU in the /etc/default/grub file. Append the intel_iommu=on string to any existing text for the GRUB_CMDLINE_LINUX parameter.

    Regenerate the grub file.

    • For Red Hat: grub2-mkconfig -o /boot/grub2/grub.cfg

    • For Ubuntu (starting with Junos OS Release 17.2R1): update-grub

    Reboot the compute node.

  3. Add the OVS bridge for the virtio network and configure physnet1:

    For example, add an OVS bridge named br-vlan (the same br-vlan that was added to bridge_mappings in ml2_conf.ini on the controller node; see Configuring the Controller Node for virtio Interfaces). Add the eth2 interface to this bridge so that it can be used for virtio communication between VMs.

    In the /etc/neutron/plugins/ml2/openvswitch_agent.ini file, append the physnet1:br-vlan string to the bridge_mappings parameter:

    Restart the neutron and nova compute services.

    • For Red Hat:

      systemctl restart neutron-openvswitch-agent.service

      systemctl restart openstack-nova-compute.service

    • For Ubuntu (starting with Junos OS Release 17.2R1):

      service nova-compute restart

      service neutron-plugin-openvswitch-agent restart
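
As a sketch of Steps 1 through 3 (the Huge Pages count, bridge name, and interface name are examples to adapt to your sizing and NIC layout):

  # /etc/default/grub (Steps 1 and 2): Huge Pages and IOMMU at boot. The page count
  # follows (memory-for-vfp / huge-pages-size) * number-of-vfp; on Ubuntu the Huge
  # Pages settings go under GRUB_CMDLINE_LINUX_DEFAULT instead.
  GRUB_CMDLINE_LINUX="default_hugepagesz=2M hugepagesz=2M hugepages=24576 intel_iommu=on"

  # After regenerating grub and rebooting, verify the allocation
  grep -i huge /proc/meminfo

  # Step 3: add the OVS bridge and the eth2 virtio interface on the compute node
  ovs-vsctl add-br br-vlan
  ovs-vsctl add-port br-vlan eth2

  # /etc/neutron/plugins/ml2/openvswitch_agent.ini
  [ovs]
  bridge_mappings = physnet1:br-vlan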

Configuring the Compute Node for SR-IOV Interfaces

Note:

If you have more than one SR-IOV interface, you need one physical 10G Ethernet NIC card for each additional SR-IOV interface.

To configure the SR-IOV interfaces:

  1. Load the modified IXGBE driver.

    Before compiling the driver, make sure gcc and make are installed.

    • For Red Hat:

    • For Ubuntu (starting with Junos OS Release 17.2R1):

    Unload the default IXGBE driver, compile the modified Juniper Networks driver, and load the modified IXGBE driver.

    Verify the driver version on the eth4 interface.

    For example, in the following sample, the command displays driver version (3.19.1):

  2. Create the virtual function (VF) on the physical device. vMX currently supports only one VF for each SR-IOV interface (for example, eth4).

    Specify the number of VFs on each NIC. The following line specifies that there is no VF for eth2 (first NIC) and one VF for eth4 (second NIC with SR-IOV interface).

    To verify that the VF was created, the output of the ip link show eth4 command includes the following line:

    To make sure that the interfaces are up and SR-IOV traffic can pass through them, execute these commands to complete the configuration.

  3. Install the SR-IOV agent.
    • For Red Hat: sudo yum install openstack-neutron-sriov-nic-agent

    • For Ubuntu (starting with Junos OS Release 17.2R1): sudo apt-get install neutron-plugin-sriov-agent

  4. Add the physical device mapping to the /etc/neutron/plugins/ml2/sriov_agent.ini file by adding the following line:

    For example, use the following setting to map the physical network physnet2 to the SR-IOV interface eth4.

    If you add more SR-IOV ports, you must add the device mapping for each physical network (separated by a comma). For example, use the following setting when adding SR-IOV interface eth5 for physical network physnet3.

  5. Edit the SR-IOV agent service file to add --config-file /etc/neutron/plugins/ml2/sriov_agent.ini as highlighted.
    • For Red Hat:

      Edit the /usr/lib/systemd/system/neutron-sriov-nic-agent.service file as highlighted.

      Enable and start the SR-IOV agent.

      Use the systemctl status neutron-sriov-nic-agent.service command to verify that the agent has started successfully.

    • For Ubuntu (starting with Junos OS Release 17.2R1):

      Edit the /etc/init/neutron-plugin-sriov-agent.conf file as highlighted.

      Make sure that /etc/neutron/plugins/ml2/sriov_agent.ini has the correct permissions and that neutron is the group owner of the file.

      Use the service neutron-plugin-sriov-agent start command to start the SR-IOV agent.

      Use the service neutron-plugin-sriov-agent status command to verify that the agent has started successfully.

  6. Edit the /etc/nova/nova.conf file to add the PCI passthrough allowlist entry for the SR-IOV device.

    For example, this setting adds an entry for the SR-IOV interface eth4 for the physical network physnet2.

    If you add more SR-IOV ports, you must add the PCI passthrough allowlist entry for each SR-IOV interface (separated by a comma). For example, use the following setting when adding SR-IOV interface eth5 for physical network physnet3.

    Restart the compute node service.

    • For Red Hat: systemctl restart openstack-nova-compute

    • For Ubuntu (starting with Junos OS Release 17.2R1): service nova-compute restart
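
As a sketch of Steps 1, 2, 4, and 6 (interface names, physical network names, and the way the modified driver is loaded are assumptions to adapt to your hardware):

  # Step 1: verify the driver and version on the SR-IOV interface
  ethtool -i eth4

  # Step 2: no VF on eth2, one VF on eth4 (max_vfs is a parameter of the
  # modified IXGBE driver)
  modprobe ixgbe max_vfs=0,1
  ip link set eth4 up
  ip link set eth4 vf 0 spoofchk off

  # Step 4: /etc/neutron/plugins/ml2/sriov_agent.ini
  [sriov_nic]
  physical_device_mappings = physnet2:eth4,physnet3:eth5

  # Step 6: /etc/nova/nova.conf, one entry per SR-IOV interface
  [DEFAULT]
  pci_passthrough_whitelist = {"devname": "eth4", "physical_network": "physnet2"}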

Installing vMX

After preparing the OpenStack environment, you must create nova flavors and glance images for the VCP and VFP VMs. Scripts create the flavors and images based on information provided in the startup configuration file.

Setting Up the vMX Configuration File

The parameters required to configure vMX are defined in the startup configuration file.

To set up the configuration file:

  1. Download the vMX KVM software package from the vMX page and uncompress the package.

    tar xvf package-name

  2. Change directory to the location of the files.

    cd package-location/openstack/scripts

  3. Edit the vmx.conf text file with a text editor to create the flavors for a single vMX instance.

    Based on your requirements, ensure the following parameters are set properly in the vMX configuration file:

    • re-flavor-name

    • pfe-flavor-name

    • vcpus

    • memory-mb

    See Specifying vMX Configuration File Parameters for information about the parameters.

    Sample vMX Startup Configuration File

    Here is a sample vMX startup configuration file for OpenStack:
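
    (The packaged sample is not reproduced here. The following minimal sketch assumes the HOST, CONTROL_PLANE, and FORWARDING_PLANE sections and the parameters described in Specifying vMX Configuration File Parameters; the compute node name and flavor names are placeholders, and the exact syntax in your package version may differ.)

    HOST:
        virtualization-type : "openstack"
        compute             : "compute1"

    CONTROL_PLANE:
        re-flavor-name      : "vmx1-vcp-flavor"
        vcpus               : 1
        memory-mb           : 4096

    FORWARDING_PLANE:
        pfe-flavor-name     : "vmx1-vfp-flavor"
        vcpus               : 7
        memory-mb           : 12288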

Specifying vMX Configuration File Parameters

The parameters required to configure vMX are defined in the startup configuration file (scripts/vmx.conf). The startup configuration file is used to generate a script that creates the flavors. To create new flavors with different vcpus or memory-mb parameters, you must change the corresponding re-flavor-name or pfe-flavor-name parameter before creating the new flavors.

To customize the configuration, perform these tasks:

Configuring the Host

To configure the host, navigate to HOST and specify the following parameters:

  • virtualization-type—Mode of operation; must be openstack.

  • compute—(Optional) Names of the compute nodes on which to run vMX instances, in a comma-separated list. If this parameter is specified, each name must be a valid compute node, and vMX instances launched with these flavors run only on the specified compute nodes.

    If this parameter is not specified, the output of the nova hypervisor-list command provides the list of compute nodes on which to run vMX instances.

Configuring the VCP VM

To configure the VCP VM, you must provide the flavor name.

Note:

We recommend unique values for the re-flavor-name parameter because OpenStack can create multiple entries with the same name.

To configure the VCP VM, navigate to CONTROL_PLANE and specify the following parameters:

  • re-flavor-name—Name of the nova flavor.

  • vcpus—Number of vCPUs for the VCP; minimum is 1.

    Note:

    If you change this value, you must change the re-flavor-name value before running the script to create flavors.

  • memory-mb—Amount of memory for the VCP; minimum is 4 GB.

    Note:

    If you change this value, you must change the re-flavor-name value before running the script to create flavors.

Configuring the VFP VM

To configure the VFP VM, you must provide the flavor name. Based on your requirements, you might want to change the memory and number of vCPUs. See Minimum Hardware Requirements for minimum hardware requirements.

To configure the VFP VM, navigate to FORWARDING_PLANE and specify the following parameters:

  • pfe-flavor-name—Name of the nova flavor.

  • memory-mb—Amount of memory for the VFP; minimum is 12 GB (performance mode) and 4 GB (lite mode).

    Note:

    If you change this value, you must change the pfe-flavor-name value before running the script to create flavors.

  • vcpus—Number of vCPUs for the VFP; minimum is 7 (performance mode) and 3 (lite mode).

    Note:

    If you specify less than 7 vCPUs, the VFP automatically switches to lite mode.

    Note:

    If you change this value, you must change the pfe-flavor-name value before running the script to create flavors.

Creating OpenStack Flavors

To create flavors for the VCP and VFP, you must execute the script on the vMX startup configuration file (vmx.conf).

To create OpenStack flavors:

  1. Run the vmx_osp_create_flavor.py script with the startup configuration file to generate the vmx_osp_flavors.sh file, which creates the flavors.

    ./vmx_osp_create_flavor.py vmx.conf

  2. Execute the vmx_osp_flavors.sh script to create the flavors.

    sh vmx_osp_flavors.sh
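
As an illustrative sketch of what the generated vmx_osp_flavors.sh script amounts to (the flavor names, sizes, and extra specs are assumptions based on the parameters described later in this topic, not the script's literal output):

  # VCP flavor: re-flavor-name, memory-mb, and vcpus from vmx.conf
  nova flavor-create vmx1-vcp-flavor auto 4096 0 1

  # VFP flavor, typically backed by Huge Pages and dedicated CPUs
  nova flavor-create vmx1-vfp-flavor auto 12288 0 7
  nova flavor-key vmx1-vfp-flavor set hw:mem_page_size=2048 hw:cpu_policy=dedicated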

Installing vMX Images for the VCP and VFP

To install the vMX OpenStack glance images for the VCP and VFP, you can execute the vmx_osp_images.sh script. The script adds the VCP image in qcow2 format and the VFP image in vmdk format.

To install the VCP and VFP images:

  1. Download the vMX KVM software package from the vMX page and uncompress the package.

    tar xvf package-name

  2. Verify the location of the software images from the uncompressed vMX package. See vMX Package Contents.

    ls package-location/images

  3. Change directory to the location of the vMX OpenStack script files.

    cd package-location/openstack/scripts

  4. Run the vmx_osp_images.sh script to install the glance images.

    sh vmx_osp_images.sh vcp-image-name vcp-image-location vfp-image-name vfp-image-location

    Note:

    You must specify the parameters in this order.

    • vcp-image-name—Name of the glance image.

    • vcp-image-location—Absolute path to the junos-vmx-x86-64*.qcow2 file for launching VCP.

    • vfp-image-name—Name of the glance image.

    • vfp-image-location—Absolute path to the vFPC-*.img file for launching VFP.

For example, this command installs the VCP image as re-test from the /var/tmp/junos-vmx-x86-64-17.1R1.8.qcow2 file and the VFP image as fpc-test from the /var/tmp/vFPC-20170117.img file.

sh vmx_osp_images.sh re-test /var/tmp/junos-vmx-x86-64-17.1R1.8.qcow2 fpc-test /var/tmp/vFPC-20170117.img

To view the glance images, use the glance image-list command.
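
If you prefer to register the images manually, the script's work is roughly equivalent to the following glance commands (a sketch only; the exact disk and container formats the script uses, and any conversion it performs on the vFPC .img file, may differ):

  glance image-create --name re-test --disk-format qcow2 \
      --container-format bare --file /var/tmp/junos-vmx-x86-64-17.1R1.8.qcow2
  glance image-create --name fpc-test --disk-format vmdk \
      --container-format bare --file /var/tmp/vFPC-20170117.img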

Starting a vMX Instance

To start a vMX instance, perform these tasks:

Modifying Initial Junos OS Configuration

When you start the vMX instance, the Junos OS configuration file found in package-location/openstack/vmx-components/vms/vmx_baseline.conf is loaded. If you need to change this configuration, make any changes in this file before starting the vMX.

Note:

If you create your own vmx_baseline.conf file or move the file, make sure that the package-location/openstack/vmx-components/vms/re.yaml references the correct path.

Launching the vMX Instance

To create and start the vMX instance:

  1. Modify these parameters in the package-location/openstack/1vmx.env environment file for your configuration. The environment file is in YAML format starting in Junos OS Release 17.4R1.
    • net_id1—Network ID of the existing neutron network used for the WAN port. Use the neutron net-list command to display the network ID.

    • public_network—Network ID of the existing neutron network used for the management (fxp0) port. Use the neutron net-list | grep public command to display the network ID.

    • fpc_img—Change this parameter to linux-img. Name of the glance image for the VFP; same as the vfp-image-name parameter specified when running the script to install the vMX images.

    • vfp_image—Name of the glance image for the VFP; same as the vfp-image-name parameter specified when running the script to install the vMX images (applicable for Junos OS Releases 17.3R1 and earlier).

    • fpc_flav—Change this parameter to linux-flav. Name of the nova flavor for the VFP; same as the pfe-flavor-name parameter specified in the vMX configuration file.

    • vfp_flavor—Name of the nova flavor for the VFP; same as the pfe-flavor-name parameter specified in the vMX configuration file (applicable for Junos OS Releases 17.3R1 and earlier).

    • junos_flav—Name of the nova flavor for the VCP; same as the re-flavor-name parameter specified in the vMX configuration file.

    • vcp_flavor—Name of the nova flavor for the VCP; same as the re-flavor-name parameter specified in the vMX configuration file (applicable for Junos OS Releases 17.3R1 and earlier).

    • junos_img—Name of the glance image for the VCP; same as the vcp-image-name parameter specified when running the script to install the vMX images.

    • vcp_image—Name of the glance image for the VCP; same as the vcp-image-name parameter specified when running the script to install the vMX images (applicable for Junos OS Releases 17.3R1 and earlier).

    • project_name—Any project name. All resources will use this name as the prefix.

    • gateway_ip—Gateway IP address.

  2. Start the vMX instance with the heat stack-create -f 1vmx.yaml -e 1vmx.env vmx-name command.

    This sample configuration starts a single vMX instance with one WAN port and one FPC.

  3. Verify that the vMX instance is created with the heat stack-list | grep vmx-name command.
  4. Verify that the VCP and VFP VMs exist with the nova list command.
  5. Access the VCP or the VFP VM with the nova get-vnc-console nova-id novnc command, where nova-id is the ID of the instance displayed in the nova list command output.
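
As an example of the environment file edited in Step 1, here is a minimal YAML sketch that assumes the Junos OS Release 17.4R1 parameter names listed above; the IDs, names, and gateway address are placeholders:

    parameters:
      net_id1: <WAN-network-id>              # from neutron net-list
      public_network: <public-network-id>    # from neutron net-list | grep public
      fpc_img: fpc-test                      # VFP glance image name
      fpc_flav: vmx1-vfp-flavor              # VFP nova flavor name
      junos_img: re-test                     # VCP glance image name
      junos_flav: vmx1-vcp-flavor            # VCP nova flavor name
      project_name: vmx1                     # prefix for all resources
      gateway_ip: 10.92.0.1
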
Note:

You must shut down the vMX instance (using the request system halt command) before you reboot the host server.