
    Juniper Networks OpenStack Neutron Plug-in for Inter VXLAN Routing

    Note: Please check the plug-in download page for the latest version of this document.

    Plug-In Overview

    This document describes the Juniper OpenStack Neutron plug-in for Layer 3 routing between two or more VXLANs using an MX Series router and an NSX controller. The MX router acts as a Transport Node on an NSX cluster and provides the inter-VXLAN routing service.

    The Juniper plug-in for inter-VXLAN routing is supported on the following releases of Junos OS, OpenStack, and VMware NSX-MH:

    • Junos OS Release 14.1R2 with the Junos SDN package
    • OpenStack: Havana, IceHouse
    • VMware NSX-MH version 4.1.2

    Deployment of the Juniper Neutron plug-in assumes a fully functional NSX cluster configured for OpenStack Neutron. This involves configuring the NSX cluster (including the NSX controller, Service Node, and NSX Manager), adding Transport Nodes (all compute nodes in the OpenStack cluster), and configuring the NSX plug-in for Neutron. The details of this process are not covered in this document; refer to the NSX-MH documentation for the detailed procedure.

    The Juniper Neutron plug-in for inter-VXLAN routing extends the NSX Neutron plug-in by adding support for MX Series routers to provide the Layer 3 routing service.

    Pre-requisites for Using the Plug-In

    Before you use the plug-in:

    • Install the NSX Plug-in

      The NSX plug-in configuration must provide the following mandatory configuration values in either /etc/neutron/plugins/nicira/nvp.ini (for the Nicira plug-in in Havana) or /etc/neutron/plugins/vmware/nsx.ini (for the VMware plug-in in IceHouse):

      • nsx_controller
      • default_tz_uuid
      • nsx_user
      • nsx_password
      • default_transport_type must be set to vxlan
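      For reference, these settings form a short section in the plug-in configuration file. The sketch below is illustrative only: the controller address, transport-zone UUID, and credentials are hypothetical placeholders, and the section name may vary by plug-in version.

      ```ini
      [DEFAULT]
      # Hypothetical example values; substitute your own NSX details.
      nsx_controller = 192.0.2.10
      default_tz_uuid = 1e8e52cf-fa7f-46b0-a14a-f99835a75902
      nsx_user = admin
      nsx_password = admin_password
      # Must be set to vxlan for inter-VXLAN routing.
      default_transport_type = vxlan
      ```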

      Ensure that the Neutron server loads this configuration file by updating the init scripts.

      Refer to the NSX plug-in documentation for the complete installation procedure.

    • Install the ncclient Python library (a NETCONF client) on the Neutron server.

    • Install the Juniper Neutron plug-in

      Install the Juniper Neutron plug-in on the Neutron server using the operating system package installer. To install the Juniper plug-in for inter-VXLAN routing, use one of the following commands:

      • # dpkg -i python-neutron-plugin-juniper_X.XXX-X_all.deb (Ubuntu)
      • # rpm -ivh neutron-plugin-juniper-X.XXX-X.noarch.rpm (RedHat/CentOS)


    This section describes:

    1. Configuring the Juniper Plug-In
    2. Configuring the MX Router with NSX
    3. Plug-In Configuration Options

    Configuring the Juniper Plug-In

    To configure Juniper plug-in with Neutron:

    1. Update the core plugin in /etc/neutron/neutron.conf to the juniper_nsx plug-in, as follows:

      core_plugin = neutron.plugins.juniper_nsx.plugin.JuniperNsx (for IceHouse with VMWare plugin)

      core_plugin = neutron.plugins.juniper_nsx.nvp_plugin.JuniperNsx (for Havana with Nicira plugin)

    2. Add the MX router that will provide the inter-VXLAN routing:

      Note: Ensure that the MX device is running Junos OS Release 14.1R2 along with the Junos SDN package.

      jnpr_device add -d <DNS name or IP address of the MX router> -c router -u <root user> -p <root password> -t <VTEP IP>


      • -d: DNS-resolvable name or management IP address of the router
      • -t: VTEP IP address to be configured on the MX router. This IP address must be routable from all other VTEP IPs on the compute and service nodes.
      • -u: username for SSH login to the MX router
      • -p: password for SSH login to the MX router
      • -h: displays the full usage text for the command

      This command configures the MX router with the VTEP IP address and sets up the VXLAN interface for VXLAN-to-VLAN conversion.

    3. Update the init script to start the Neutron server with both NSX and Juniper plug-in configuration, and restart the Neutron server.
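    On Ubuntu-based installations, updating the init script typically means pointing the service at the plug-in configuration file in addition to /etc/neutron/neutron.conf. The excerpt below is a hypothetical example; the file path and variable name vary by distribution and release.

    ```sh
    # Hypothetical excerpt from /etc/default/neutron-server (Ubuntu).
    # The service reads this file and passes the plug-in configuration
    # to neutron-server with --config-file; restart afterwards with:
    #   service neutron-server restart
    NEUTRON_PLUGIN_CONFIG="/etc/neutron/plugins/vmware/nsx.ini"
    ```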

    Configuring the MX Router with NSX

    This section describes the steps to add an MX router to the NSX cluster as a GATEWAY node on NSX Manager. Note that VXLAN and OVSDB functionality on the MX router is available in Junos OS Release 14.1R2 and later. You must also install the Junos SDN software suite.

    To configure the MX router:

    1. Add the MX router to NSX as a Transport Node of type GATEWAY.
    2. In the Gateway screen, select the option VTEP enabled.
    3. In the Credentials section, select Management Address and enter the IP address.
    4. In the Transport Connector section, add a VXLAN connector for the router. Select the transport zone that was provided in the NSX plugin configuration. In the IP Address field, enter the VTEP IP address of the MX router.
    5. Create and copy the client certificate for the MX router. This step must be performed on a Linux server with Open vSwitch installed (one of the compute nodes can be used):
      # mkdir /tmp/mx_certs
      # cd /tmp/mx_certs
      # ovs-pki init
      # ovs-pki req+sign vtep
      # ls
      vtep-cert.pem.tmp23022 vtep-privkey.pem vtep-req.pem
      # scp *.pem root@<management IP of MX>:
    6. On the MX router, use the following command to check whether the router is connected to the controller:
      show ovsdb controller
      VTEP controller information:
      Controller IP address: <controller IP>
      Controller protocol: ssl
      Controller port: 6632
      Controller connection: up
      Controller seconds-since-connect: 1303122
      Controller seconds-since-disconnect: 0
      Controller connection status: active

      The controller IP is picked up from the NSX plugin configuration file.
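      When automating health checks, output in the form shown above can be parsed from a script. The sketch below is a minimal Python illustration; the parse_ovsdb_controller helper is hypothetical (not part of the plug-in) and assumes output shaped like the sample.

      ```python
      def parse_ovsdb_controller(output):
          """Parse `show ovsdb controller` output into a field/value dict."""
          fields = {}
          for line in output.splitlines():
              key, sep, value = line.strip().partition(":")
              # Keep only the "Controller ...: value" lines.
              if sep and key.startswith("Controller"):
                  fields[key.strip()] = value.strip()
          return fields

      # Sample output with a hypothetical controller IP.
      sample = """VTEP controller information:
      Controller IP address: 192.0.2.10
      Controller protocol: ssl
      Controller port: 6632
      Controller connection: up"""

      info = parse_ovsdb_controller(sample)
      print(info["Controller connection"])  # up
      ```

      A monitoring script could alert when the "Controller connection" field is anything other than up.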

    Plug-In Configuration Options

    The plug-in can be configured to use custom values for orchestrating the MX router for VXLAN routing.

    Table 1: Plug-in Configuration Options

    • VLAN pool for allocation of VLAN IDs
    • VRF route distinguisher pool
    • Timeout for committing changes to the MX router
    • Number of times to retry the connection to the MX router

    Given below is a sample configuration section that can be added to the /etc/neutron/neutron.conf file:

    vxlan_vlan_pool = 10:4000
    vxlan_rd_pool = 10:4000
    vxlan_vswitch_routing_instance = default-OVSDB

    Note: The value for vxlan_vswitch_routing_instance must be set before using the CLI on the Juniper device and must remain constant thereafter.
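    The pool options use a low:high range syntax. As an illustration of how such a pool behaves, here is a small Python sketch; parse_pool and allocate are hypothetical helpers for illustration, not part of the plug-in.

    ```python
    def parse_pool(spec):
        """Turn a pool spec such as '10:4000' into an inclusive range."""
        low, high = (int(part) for part in spec.split(":"))
        return range(low, high + 1)

    def allocate(pool, in_use):
        """Return the first ID in the pool that is not already in use."""
        for vid in pool:
            if vid not in in_use:
                return vid
        raise RuntimeError("pool exhausted")

    vlan_pool = parse_pool("10:4000")     # matches vxlan_vlan_pool = 10:4000
    print(len(vlan_pool))                 # 3991 usable VLAN IDs
    print(allocate(vlan_pool, {10, 11}))  # 12: first free ID after 10 and 11
    ```

    Sizing the vxlan_vlan_pool range therefore bounds how many VXLANs the MX router can serve concurrently.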

    Additional Notes

    This section describes:

    1. Planning the Underlay Network

    Planning the Underlay Network

    The underlay network is the IP network over which VXLAN tunnels are created. All VTEP IPs are part of the underlay network; a VTEP IP is configured on each compute node, service node, and gateway node.

    All the VTEP IPs in a transport zone must be able to reach each other. On the MX router, the VTEP IP is configured on the loopback interface, typically lo0. This configuration is done automatically by the jnpr_device command provided with the plug-in. Additional routes might need to be added on the hypervisors.
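    A quick sanity check during planning is to confirm that every VTEP address falls inside the underlay subnet. The Python sketch below uses the standard ipaddress module; the subnet and VTEP assignments are hypothetical examples.

    ```python
    import ipaddress

    # Hypothetical underlay subnet and VTEP assignments for illustration.
    underlay = ipaddress.ip_network("10.20.0.0/16")
    vteps = {
        "compute-1": "10.20.1.11",
        "service-1": "10.20.2.21",
        "mx-gateway": "10.20.3.1",  # configured on lo0 by jnpr_device
    }

    # VTEPs outside the underlay need extra routes on the hypervisors.
    outside = sorted(name for name, ip in vteps.items()
                     if ipaddress.ip_address(ip) not in underlay)
    print(outside)  # [] when all VTEPs are inside the underlay
    ```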

    Additional Information

    For more information about the plug-in, write to .

    Published: 2014-10-20