
    Juniper Networks Plug-In for OpenStack Neutron

    OpenStack is a cloud operating system that can be used to build public, private, or hybrid clouds on commodity hardware. To provide high performance and throughput, networking vendors use the plug-in mechanism offered by Neutron to offload L2, L3, firewall, VPN, and load-balancing services onto their networking devices.

    Juniper Networks provides OpenStack Neutron plugins which enable integration and orchestration of Juniper’s EX, QFX switches and SRX devices in the customer’s network.

    • 3.0 Release Notes

      The following features are introduced in the 3.0 release:

      • VPNaaS (VPN-as-a-Service) support
      • VXLAN L3 Routing with EVPN
      • EVPN Multi-homing
      • EVPN Baremetal Support
      • NAT (SNAT & DNAT) support
    • 2.7 Release Notes

      As of release 2.7.1 the following features are supported by Juniper Networks’ OpenStack Neutron plugins:

      • The ML2 mechanism driver supports orchestration of layer 2 networks using
        • VLANs
        • VXLAN with EVPN control plane (using Hierarchical Port Binding)
      • The L3 service plug-in that orchestrates
        • IRBs and Routing Instances
        • Setting Static Routes
        • Setting Default routes for external access
      • The FWaaS service plugin
        • Orchestrates firewall policies on SRX/vSRX devices
        • Allocates and orchestrates Dedicated perimeter firewalls
      • Topology Service exposes the physical topology of the OpenStack cluster nodes and Top of Rack switches.

    Pre-requisites

    To use the Juniper Neutron plugins, the following prerequisites and dependencies should be met.

    • OpenStack
      • Supported releases: Liberty and Mitaka.
    • Operating Systems
      • Ubuntu 14
      • Centos 7
    • Devices
      • Switching Platforms – EX and QFX
      • Routing and Security – SRX and vSRX
    • Python
      • Python, version 2.7
    • External Libraries
      • ncclient python library

    Installation

    Juniper plug-in binaries are available for CentOS as RPM and for Ubuntu as Deb packages. The plug-in must be installed only on the OpenStack Controller Node where the OpenStack Neutron server is running. Use the steps below to install plug-ins on the appropriate operating system.

    1. Download the packages from https://www.juniper.net/support/downloads/?p=qpluginopen#sw. Extract the binaries using the command tar -xvf juniper_plugins_version.tar.gz. The extracted folder contains packages for CentOS and Ubuntu. The plug-ins are provided as a set of packages; all Neutron drivers and service plugins are delivered in a single package and can be installed as follows.
      • CentOS
        rpm -ivh juniper_plugins_version/centos/neutron-plugin-juniper-version.noarch.rpm
      • Ubuntu
        sudo dpkg -i juniper_plugins_version/ubuntu/python-neutron-plugin-juniper_version_all.deb

      Other packages provide features like Horizon UI extensions, physical topology plug-in and a neutron client extension to support physical topology APIs. These packages can be installed in a similar fashion.

      The UI packages and neutron client extension must be installed on the server running Horizon.

    Topology Setup Reference

    The following topology is used as a reference to illustrate the steps an administrator performs to define the topology:

    In the reference topology, there are three switches (Switch1, Switch2, and Aggregation Switch) and one SRX device.

    • On the Aggregation Switch, set the ports ge-0/0/1, ge-0/0/2, and ge-0/0/45 to Trunk All mode.
    • Set Switch1 port ge-0/0/2 and Switch2 port ge-0/0/1 to Trunk All mode.
    • Enable vlan-tagging on port ge-0/0/10 of the SRX. (A hedged Junos configuration sketch for these steps follows this list.)
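    The sketch below is an illustrative assumption based on standard EX/QFX and SRX CLI syntax rather than configuration taken from this guide; the trunk-mode keyword (interface-mode versus port-mode) and the VLAN membership syntax vary by platform and Junos release, so adapt it to your devices.

    Aggregation Switch, Switch1, and Switch2 (repeat for each trunk port):

      set interfaces ge-0/0/1 unit 0 family ethernet-switching interface-mode trunk
      set interfaces ge-0/0/1 unit 0 family ethernet-switching vlan members all

    SRX downlink port:

      set interfaces ge-0/0/10 vlan-tagging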
    1. CLI commands

      The Juniper Neutron plugins include CLI tools that enable the administrator to define the network topology. The plugins depend on this topology definition to carry out network orchestration. The following tools are provided:

      Table 1: CLI Tools

      • jnpr_device: Add device details.
      • jnpr_nic_mapping: Add a mapping between a physical network alias (for example, physnet1) and the corresponding Ethernet interface on the node.
      • jnpr_switchport_mapping: Add a mapping between the compute/network node and its Ethernet interface to the switch and the port that it is connected to.
      • jnpr_device_port: Define the downlink port of the router or firewall on which the RVI for each tenant's VLAN gets created.
      • jnpr_allocate_device: Define the allocation of a router or firewall to a tenant or group of tenants.
      • jnpr_vrrp_pool: Define the VRRP pool.

      In the following section we will use these CLI commands to configure the Juniper Networks’ plugin with our reference topology.

    2. Add the devices to the topology
      1. Switches
        • To add a device to the topology, enter the following command:

          Note: Please use a login credential with super-user class privilege on the device.

          jnpr_device add -d device name or IP_address_of_the_device -c {switch, router, firewall} -u username -p device password

        In the reference topology, the two switches (Switch1 and Switch2) are connected to the hypervisors. Enter the following commands to add and list the switches:

        Adding Switch1:

        admin@controller:~$ jnpr_device add -d switch1.juniper.net -c switch -u root -p password
        +---------------------+---------------+-------------+---------+-------+------+---------------+
        |        Device       |       Ip      | Device Type |  model  | login | vtep | vrrp_priority |
        +---------------------+---------------+-------------+---------+-------+------+---------------+
        | switch1.juniper.net | 10.107.52.136 |    switch   | qfx3500 |  root |  0   |       0       |
        +---------------------+---------------+-------------+---------+-------+------+---------------+
        

        Adding Switch2:

        admin@controller:~$ jnpr_device add -d switch2.juniper.net -c switch -u root -p password
        +---------------------+---------------+-------------+---------+-------+------+---------------+
        |        Device       |       Ip      | Device Type |  model  | login | vtep | vrrp_priority |
        +---------------------+---------------+-------------+---------+-------+------+---------------+
        | switch2.juniper.net | 10.107.52.137 |    switch   | qfx3500 |  root |  0   |       0       |
        +---------------------+---------------+-------------+---------+-------+------+---------------+
        

        Listing the added switches:

        admin@controller:~$ jnpr_device list
        +---------------------+---------------+-------------+---------+-------+------+---------------+
        |        Device       |       Ip      | Device Type |  model  | login | vtep | vrrp_priority |
        +---------------------+---------------+-------------+---------+-------+------+---------------+
        | switch1.juniper.net | 10.107.52.136 |    switch   | qfx3500 |  root |  0   |       0       |
        | switch2.juniper.net | 10.107.52.137 |    switch   | qfx3500 |  root |  0   |       0       |
        +---------------------+---------------+-------------+---------+-------+------+---------------+
        
      2. Adding the router to the topology

        In the reference topology, the SRX acts as both a router as well as a firewall. Enter the following command to add the router:

        admin@controller:~$ jnpr_device add -d srx.juniper.net -c router -u root -p password
        +-----------------+---------------+-------------+-------+-------+------+---------------+
        |      Device     |       Ip      | Device Type | model | login | vtep | vrrp_priority |
        +-----------------+---------------+-------------+-------+-------+------+---------------+
        | srx.juniper.net | 10.107.23.103 |    router   |   srx  |  root |  0   |       0      |
        +-----------------+---------------+-------------+-------+-------+------+---------------+
        
      3. Adding the firewall to the topology

        In the reference topology, the SRX acts as both a router as well as a firewall. Enter the following command to add the firewall:

        admin@controller:~$ jnpr_device add -d srx.juniper.net -c firewall -u root -p password
        +-----------------+---------------+-------------+-------+-------+------+---------------+
        |      Device     |       Ip      | Device Type | model | login | vtep | vrrp_priority |
        +-----------------+---------------+-------------+-------+-------+------+---------------+
        | srx.juniper.net | 10.107.23.103 |   firewall  |   srx  |  root |  0   |       0      |
        | srx.juniper.net | 10.107.23.103 |    router   |   srx  |  root |  0   |       0      |
        +-----------------+---------------+-------------+-------+-------+------+---------------+
        
    3. Defining the NIC to the physical network mapping for each hypervisor

      In OpenStack, you generally define an alias for the physical network and its associated bridge by using the following configuration in /etc/neutron/plugins/ml2/ml2_conf.ini on the network node and on all compute nodes:

      [ovs]
      tenant_network_type = vlan
      bridge_mappings = physnet1:br-eth1
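      If the bridge does not already exist, it is typically created with Open vSwitch before defining the mapping; for example, assuming eth1 is the data NIC on the node:

      ovs-vsctl add-br br-eth1
      ovs-vsctl add-port br-eth1 eth1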

      Because you can connect the bridge br-eth1 to any physical interface, you must add the link between the bridge br-eth1 and the physical interface to the topology by entering the following command:

      jnpr_nic_mapping add -H <compute_hostname> -b <physical_network_alias> -n <NIC>

      1. Adding Hypervisor 1
        admin@controller:~$ jnpr_nic_mapping add -H hypervisor1.juniper.net -b physnet1 -n eth1

        Adding mapping

        +---------------+------------+------+
        |      Host     | BridgeName | Nic  |
        +---------------+------------+------+
        | 10.107.65.101 |  physnet1  | eth1 |
        +---------------+------------+------+
        
      2. Adding Hypervisor 2
        admin@controller:~$ jnpr_nic_mapping add -H hypervisor2.juniper.net -b physnet1 -n eth1

        Adding mapping

        +---------------+------------+------+
        |      Host     | BridgeName | Nic  |
        +---------------+------------+------+
        | 10.107.65.102 |  physnet1  | eth1 |
        +---------------+------------+------+
        
      3. Adding Hypervisor 5 (Note that it is mapped to physnet1-- br-eth1 -- eth2)
        admin@controller:~$ jnpr_nic_mapping add -H hypervisor5.juniper.net -b physnet1 -n eth2

        Adding mapping

        +---------------+------------+------+
        |      Host     | BridgeName | Nic  |
        +---------------+------------+------+
        | 10.107.65.105 |  physnet1  | eth2 |
        +---------------+------------+------+
        
      4. Adding Hypervisor 6
        admin@controller:~$ jnpr_nic_mapping add -H hypervisor6.juniper.net -b physnet1 -n eth1

        Adding mapping

        +---------------+------------+------+
        |      Host     | BridgeName | Nic  |
        +---------------+------------+------+
        | 10.107.65.106 |  physnet1  | eth1 |
        +---------------+------------+------+
        
      5. Adding Network Node
        admin@controller:~$ jnpr_nic_mapping add -H networknode.juniper.net -b physnet1 -n eth1

        Adding mapping

        +---------------+------------+------+
        |      Host     | BridgeName | Nic  |
        +---------------+------------+------+
        | 10.108.10.100 |  physnet1  | eth1 |
        +---------------+------------+------+
        
      6. Listing all the Mappings
        admin@controller:~$ jnpr_nic_mapping list
        +---------------+------------+------+
        |      Host     | BridgeName | Nic  |
        +---------------+------------+------+
        | 10.107.65.101 |  physnet1  | eth1 |
        | 10.107.65.102 |  physnet1  | eth1 |
        | 10.107.65.105 |  physnet1  | eth2 |
        | 10.107.65.106 |  physnet1  | eth1 |
        | 10.108.10.100 |  physnet1  | eth1 |
        +---------------+------------+------+
        
    4. Defining the mapping from the compute to the switch

      To configure the VLANs on the switches, the ML2 plugin must determine the switch port to which each hypervisor's Ethernet interface is connected. This gives the plugin an overall view of the topology: physnet1 -- br-eth1 -- eth1 -- Switch-x: ge-0/0/x. You can provide this information either by enabling LLDP or by configuring it using the provided CLI. The following examples show how the CLI can be used:

      jnpr_switchport_mapping add -H <compute_hostname> -n <NIC> -s <switch_IP_or_name> -p <switch_port>

      1. Mapping Hypervisor 1 to Switch 1
        admin@controller:~$ jnpr_switchport_mapping add -H hypervisor1.juniper.net -n eth1 -s switch1.juniper.net -p ge-0/0/10

        Database updated with switch port binding

        +---------------+------+---------------+-----------+-----------+
        |      Host     | Nic  |     Switch    |    Port   | Aggregate |
        +---------------+------+---------------+-----------+-----------+
        | 10.107.65.101 | eth1 | 10.107.52.136 | ge-0/0/10 |           |
        +---------------+------+---------------+-----------+-----------+
        
      2. Mapping Hypervisor 2 to Switch 1
        admin@controller:~$ jnpr_switchport_mapping add -H hypervisor2.juniper.net -n eth1 -s switch1.juniper.net -p ge-0/0/20

        Database updated with switch port binding

        +---------------+------+---------------+-----------+-----------+
        |      Host     | Nic  |     Switch    |    Port   | Aggregate |
        +---------------+------+---------------+-----------+-----------+
        | 10.107.65.102 | eth1 | 10.107.52.136 | ge-0/0/20 |           |
        +---------------+------+---------------+-----------+-----------+
        
      3. Mapping Hypervisor 5 to Switch 2
        admin@controller:~$ jnpr_switchport_mapping add -H hypervisor5.juniper.net -n eth2 -s switch2.juniper.net -p ge-0/0/20

        Database updated with switch port binding

        +---------------+------+---------------+-----------+-----------+
        |      Host     | Nic  |     Switch    |    Port   | Aggregate |
        +---------------+------+---------------+-----------+-----------+
        | 10.107.65.105 | eth2 | 10.107.52.137 | ge-0/0/20 |           |
        +---------------+------+---------------+-----------+-----------+
        
      4. Mapping Hypervisor 6 to Switch 2
        admin@controller:~$ jnpr_switchport_mapping add -H hypervisor6.juniper.net -n eth1 -s switch2.juniper.net -p ge-0/0/30

        Database updated with switch port binding

        +---------------+------+---------------+-----------+-----------+
        |      Host     | Nic  |     Switch    |    Port   | Aggregate |
        +---------------+------+---------------+-----------+-----------+
        | 10.107.65.106 | eth1 | 10.107.52.137 | ge-0/0/30 |           |
        +---------------+------+---------------+-----------+-----------+
        
      5. Mapping Network Node to Switch 2
        admin@controller:~$ jnpr_switchport_mapping add -H networknode.juniper.net -n eth1 -s switch2.juniper.net -p ge-0/0/5

        Database updated with switch port binding

        +---------------+------+---------------+----------+-----------+
        |      Host     | Nic  |     Switch    |   Port   | Aggregate |
        +---------------+------+---------------+----------+-----------+
        | 10.108.10.100 | eth1 | 10.107.52.137 | ge-0/0/5 |           |
        +---------------+------+---------------+----------+-----------+
        
      6. Listing all the Mappings
        admin@controller:~$ jnpr_switchport_mapping list
        +---------------+------+---------------+-----------+-----------+
        |      Host     | Nic  |     Switch    |    Port   | Aggregate |
        +---------------+------+---------------+-----------+-----------+
        | 10.107.65.101 | eth1 | 10.107.52.136 | ge-0/0/10 |           |
        | 10.107.65.102 | eth1 | 10.107.52.136 | ge-0/0/20 |           |
        | 10.107.65.105 | eth2 | 10.107.52.137 | ge-0/0/20 |           |
        | 10.107.65.106 | eth1 | 10.107.52.137 | ge-0/0/30 |           |
        | 10.108.10.100 | eth1 | 10.107.52.137 |  ge-0/0/5 |           |
        +---------------+------+---------------+-----------+-----------+
        
    5. Define the downlink port on the SRX device (Router) on which the RVI is created by the plugin

      Update the plugin database with the port on the SRX device to which the Aggregation Switch is connected.

      jnpr_device_port add -d <SRX_device_name_or_IP> -p <port_on_the_SRX> -t Downlink

      1. Adding the downlink port of the SRX device to the topology
        admin@controller:~$ jnpr_device_port add -d srx.juniper.net -p ge-0/0/10 -t Downlink
        +---------------+-----------+---------------+
        |     Device    |    port   |   port_type   |
        +---------------+-----------+---------------+
        | 10.107.23.103 | ge-0/0/10 | downlink_port |
        +---------------+-----------+---------------+
        
    6. Creating a VRRP pool

      The L3 plugin supports HA via VRRP. In order to use this functionality, the admin needs to create a VRRP pool. Only one of the devices in the pool needs to be assigned to a tenant using the jnpr_allocate_device command.

      The below example illustrates the procedure to create a VRRP pool:

      Add routers

      admin@controller:~$ jnpr_device add -d 10.20.30.40 -c router -u root -p password
      +-----------------+---------------+-------------+-------+-------+------+---------------+
      |      Device     |       Ip      | Device Type | model | login | vtep | vrrp_priority |
      +-----------------+---------------+-------------+-------+-------+------+---------------+
      | 10.20.30.40     | 10.20.30.40   |    router   |  srx  |  root |  0   |       0       |
      +-----------------+---------------+-------------+-------+-------+------+---------------+
      admin@controller:~$ jnpr_device add -d 10.20.30.41 -c router -u root -p password
      +-----------------+---------------+-------------+-------+-------+------+---------------+
      |      Device     |       Ip      | Device Type | model | login | vtep | vrrp_priority |
      +-----------------+---------------+-------------+-------+-------+------+---------------+
      | 10.20.30.41     | 10.20.30.41   |    router   |  srx  |  root |  0   |       0       |
      +-----------------+---------------+-------------+-------+-------+------+---------------+

      Create VRRP pools

      admin@controller:~$ jnpr_vrrp_pool add -d 10.20.30.40 -p tenant1_pool1
      +----------------------------------+-----------------+
      |            Device ID             |  VRRP POOL NAME |
      +----------------------------------+-----------------+
      | 10.20.30.40                      | tenant1_pool1   |
      +----------------------------------+-----------------+
      
      admin@controller:~$ jnpr_vrrp_pool add -d 10.20.30.41 -p tenant1_pool1
      +----------------------------------+-----------------+
      |            Device ID             |  VRRP POOL NAME |
      +----------------------------------+-----------------+
      | 10.20.30.41                      | tenant1_pool1   |
      +----------------------------------+-----------------+
      
      admin@controller:~$ jnpr_vrrp_pool list
      +---------------+----------------+
      |   Device ID   | VRRP POOL NAME |
      +---------------+----------------+
      | 10.20.30.40   | tenant1_pool1  |
      | 10.20.30.41   | tenant1_pool1  |
      +---------------+----------------+
      

      Allocate the master router of the VRRP pool to the tenant using jnpr_allocate_device command.

    7. Define allocation of device(s) to a tenant/group of tenants

      jnpr_allocate_device add -t <tenant_project_id> -d <hostname_or_IP_of_the_device_being_allocated>

      admin@controller:~$ jnpr_allocate_device add -t e0d6c7d2e25943c1b4460a4f471c033f -d 10.20.30.40
      +----------------------------------+---------------+
      |            Tenant ID             |   Device IP   |
      +----------------------------------+---------------+
      | e0d6c7d2e25943c1b4460a4f471c033f | 10.20.30.40   |
      +----------------------------------+---------------+
      

      If a device needs to be used as a default for multiple tenants, set the tenant ID to 'default'. For example:

      admin@controller:~$ jnpr_allocate_device add -t default -d 10.20.30.40
      +----------------------------------+---------------+
      |            Tenant ID             |   Device IP   |
      +----------------------------------+---------------+
      |             default              | 10.20.30.40   |
      +----------------------------------+---------------+
      

    Configuration

    1. ML2

      The Juniper ML2 driver supports the following virtual network types:

      • VLAN based networks
      • VXLAN based tunneled networks with EVPN (using Hierarchical Port Binding)

      In addition, it supports orchestration of aggregated links connecting the OpenStack nodes to the ToR switches using:

      • LAG
      • MC-LAG
    2. ML2 VLAN
      1. Introduction

        The Juniper ML2 VLAN driver configures the VLAN of each tenant network on the corresponding switch port attached to the compute node. VM migration is supported from version 2.7.1 onwards.

      2. Supported Devices

        EX and QFX switch families are supported.

      3. Plugin Configuration

        Configure OpenStack to use the VLAN type driver. On the OpenStack Controller, open the file /etc/neutron/neutron.conf and set the core plugin as follows (use Ml2PluginPtExt if you also want the physical topology extension described later in this guide):

        core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
        or
        core_plugin = neutron.plugins.ml2.plugin_pt_ext.Ml2PluginPtExt

        On the OpenStack Controller, update the ML2 configuration file /etc/neutron/plugins/ml2/ml2_conf.ini and set Juniper's ML2 plugin as the mechanism driver:

        [ml2]
        type_drivers = vlan
        mechanism_drivers = openvswitch,juniper
        tenant_network_types = vlan

        Additionally, specify the VLAN range and the physical network alias to be used:

        [ml2_type_vlan]
        network_vlan_ranges = physnet1:1001:1200

        Restart neutron-server for the changes to take effect.
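        For example, using the same restart commands listed later in this guide:

        # Ubuntu
        service neutron-server restart
        # CentOS
        systemctl restart neutron-server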

      4. Verification

        Log in to the OpenStack UI, create a VLAN-based network, and launch VMs. You should see that the VLAN IDs of the OpenStack network are created on the switch and mapped to the interfaces configured through the jnpr_switchport_mapping command.
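        For example, assuming CLI access to the switch, the VLANs pushed by the plugin can be listed with:

        show vlans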

    3. ML2 VXLAN with EVPN
      1. Introduction

        The ML2 EVPN driver is based on the Neutron hierarchical port binding design. It configures the ToR switches as VXLAN tunnel endpoints (VTEPs), which are used to extend a VLAN-based L2 domain across routed networks.

        To provide L2 connectivity between the network ports on the network and compute nodes, the L2 packets are tagged with VLANs and sent to the Top of Rack (ToR) switch. The VLANs used to tag the packets are only locally significant to the ToR switch (switch-local VLANs).

        At the ToR switch, the switch-local VLANs are mapped to a global VXLAN ID. The L2 packets are encapsulated into VXLAN packets and sent to the virtual tunnel endpoint (VTEP) on the destination node, where they are de-encapsulated and sent to the destination VM.

        To make L2 connectivity between the endpoints work with VXLAN, each endpoint needs to know about the presence of the destination VM and VTEP. EVPN uses a BGP-based control plane to learn this information. The plugin assumes that the ToRs are set up with BGP peering; refer to the Junos documentation (BGP Configuration Overview) for configuring BGP on the ToR switches.
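        The exact BGP configuration is outside the scope of this guide; the sketch below is only an illustrative assumption of what iBGP EVPN peering between ToR loopbacks might look like (addresses, AS number, and group name are placeholders). The linked Junos documentation is authoritative.

        set routing-options router-id 10.0.0.1
        set routing-options autonomous-system 65000
        set protocols bgp group evpn-peers type internal
        set protocols bgp group evpn-peers local-address 10.0.0.1
        set protocols bgp group evpn-peers family evpn signaling
        set protocols bgp group evpn-peers neighbor 10.0.0.2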

      2. Supported Devices

        QFX5100 only

      3. Plugin Configuration

        Install the Juniper neutron plugin on the neutron server node.

        Edit the ML2 configuration file /etc/neutron/plugins/ml2/ml2_conf.ini to add the following configuration for EVPN driver.

        [ml2]
        type_drivers = vlan,vxlan,evpn
        tenant_network_types = vxlan
        mechanism_drivers = jnpr_evpn,openvswitch
        
        [ml2_type_vlan]
        network_vlan_ranges=<ToR_MGMT_IP_SWITCH1>:<vlan-start>:<vlan-end>,<ToR_MGMT_IP_SWITCH2>:<vlan-start>:<vlan-end>,<ToR_MGMT_IP_SWITCH3>:<vlan-start>:<vlan-end>
        
        [ml2_type_vxlan]
        vni_ranges = <vni-start>:<vni-end>

        Restart the neutron server to load the EVPN ml2 driver.

        Update the plugin topology database

        The following example shows the commands used to add a ToR switch and to map the compute and network nodes to the ToR switches they are connected to. Run these commands on the neutron server node.

        jnpr_device add -d ToR1 -u root -p root_password -c switch
        jnpr_switchport_mapping add -H Compute1 -n eth1 -s ToR1_MGMT_IP -p ge-0/0/1
        jnpr_switchport_mapping add -H Compute2 -n eth1 -s ToR2_MGMT_IP -p ge-0/0/1
        jnpr_switchport_mapping add -H Network1 -n eth1 -s ToR3_MGMT_IP -p ge-0/0/1

        Update OVS L2 agent on all Compute and Network nodes

        All compute and network nodes need to be updated with the bridge mapping. The physical network name used in the mapping should be the IP address of the ToR switch.

        Edit the file /etc/neutron/plugins/ml2/ml2_conf.ini (Ubuntu) or /etc/neutron/plugins/ml2/openvswitch_agent.ini (CentOS) to add the following:

        [ovs]
        bridge_mappings = <ToR_MGMT_IP>:br-eth1

        Here br-eth1 is an OVS bridge that enslaves the eth1 physical port on the OpenStack node connected to ToR1. It provides the physical network connectivity for tenant networks.

        Restart the OVS agent on all compute and network nodes.
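        Agent service names vary by distribution and release; the following are typical examples (treat them as assumptions and check your installation):

        # Ubuntu
        service neutron-plugin-openvswitch-agent restart
        # CentOS
        systemctl restart neutron-openvswitch-agent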

      4. Verification

        Log in to the OpenStack UI, create a virtual network, and launch VMs. You should see the VXLAN IDs of the OpenStack network, while switch-local VLANs are created on each ToR switch and mapped to the VXLAN ID.

    4. Using ML2 driver with Link Aggregation (LAG)
      1. Introduction

        LAG can be used between a Compute node and a Juniper Switch to improve network resiliency.

      2. Plugin Configuration

        An OpenStack node is connected to the ToR switch over an aggregated link. To configure LAG on Juniper ToR switches, refer to the following link:

        Configuring Aggregated Ethernet Links (CLI Procedure)
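        For reference, a minimal Junos sketch of an aggregated Ethernet bundle on the switch might look like the following; this is an illustrative assumption only (interface names and the device-count value are placeholders), and the linked procedure is authoritative:

        set chassis aggregated-devices ethernet device-count 1
        set interfaces ge-0/0/2 ether-options 802.3ad ae0
        set interfaces ge-0/0/3 ether-options 802.3ad ae0
        set interfaces ae0 aggregated-ether-options lacp active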

        1. Configure LAG on the OpenStack compute node. In this example, eth1 and eth2 are used for the LAG. These two ports are the data ports for OpenStack networking and are connected to the LAG interface on the Juniper switches.
          # ovs-vsctl add-bond br-eth1 bond0 eth1 eth2 lacp=active
        2. Create the NIC mapping:
          jnpr_nic_mapping add -H <openstack_node_name_or_IP> -b physnet1 -n <nic>
          jnpr_nic_mapping add -H 10.207.67.144 -b physnet1 -n eth1
          jnpr_nic_mapping add -H 10.207.67.144 -b physnet1 -n eth2
        3. Add the switch port mapping with the aggregate interface. This must be executed on the OpenStack controller:
          jnpr_switchport_mapping add -H <openstack_compute_name_or_IP> -n <nic> -s <switch_name_or_IP> -p <port> -a <lag_name>
          jnpr_switchport_mapping add -H 10.207.67.144 -n eth1 -s dc-nm-qfx3500-b -p ge-0/0/2 -a ae0
          jnpr_switchport_mapping add -H 10.207.67.144 -n eth2 -s dc-nm-qfx3500-b -p ge-0/0/3 -a ae0
        4. List and verify the switch port mapping with aggregate details.
          # jnpr_switchport_mapping list
          +---------------+--------+-----------------+-----------+-----------+
          | Host          | Nic    | Switch          | Port      | Aggregate |
          +---------------+--------+-----------------+-----------+-----------+
          | 10.207.67.144 | eth1   | dc-nm-qfx3500-b | ge-0/0/2  | ae0       |
          | 10.207.67.144 | eth2   | dc-nm-qfx3500-b | ge-0/0/3  | ae0       |
          +---------------+--------+-----------------+-----------+-----------+
          
    5. Using ML2 Driver with MC-LAG (Multi-Chassis Link Aggregation)
      1. Introduction

        This area covers how Multi-Chassis LAG can be configured and used with Juniper Neutron Plugins.

      2. Plugin Configuration

        To configure MC-LAG on juniper switches, refer to the link below:

        Configuring Multichassis Link Aggregation on EX Series Switches

        Configure LAG on the OpenStack compute node.

        1. In this example, eth1 and eth2 are used for the LAG. These two ports are the data ports for OpenStack networking and are connected to LAG interfaces on two different Juniper switches.
          # ovs-vsctl add-bond br-eth1 bond0 eth1 eth2 lacp=active
        2. Create NIC Mapping:
          jnpr_nic_mapping add -H <openstack_node_name_or_IP> -b physnet1 -n <nic>
          jnpr_nic_mapping add -H 10.207.67.144 -b physnet1 -n eth1
          jnpr_nic_mapping add -H 10.207.67.144 -b physnet1 -n eth2
        3. Add the switch port mapping with the aggregate interface. This must be executed on the OpenStack controller:
          jnpr_switchport_mapping add -H <openstack_compute_name_or_IP> -n <nic> -s <switch_name_or_IP> -p <port> -a <lag_name>
          jnpr_switchport_mapping add -H 10.207.67.144 -n eth1 -s dc-nm-qfx3500-b -p ge-0/0/2 -a ae0
          jnpr_switchport_mapping add -H 10.207.67.144 -n eth2 -s dc-nm-qfx3500-b -p ge-0/0/3 -a ae0
        4. List and verify switch port mapping with aggregate details.
          # jnpr_switchport_mapping list
          +---------------+--------+-----------------+-----------+-----------+
          | Host          | Nic    | Switch          | Port      | Aggregate |
          +---------------+--------+-----------------+-----------+-----------+
          | 10.207.67.144 | eth1   | dc-nm-qfx3500-a | ge-0/0/2  | ae0       |
          | 10.207.67.144 | eth2   | dc-nm-qfx3500-b | ge-0/0/3  | ae0       |
          +---------------+--------+-----------------+-----------+-----------+
          
    6. L3 Plugin Configuration
      1. Introduction

        Juniper L3 plugin supports the following features:

        • L3 Routing
        • Adding Static Routes
        • Provides router HA via VRRP
        • Routed External Networks
      2. Supported Devices

        EX and QFX switch families, SRX and vSRX device families are supported.

      3. Plugin Configuration

        Update the configuration file /etc/neutron/neutron.conf with the following value:

        [DEFAULT]
        service_plugins = neutron.services.juniper_l3_router.dmi.l3.JuniperL3Plugin

        Restart neutron-server for the changes to take effect.

      4. Verification

        Log in to the OpenStack UI, create some networks, create a router, and assign the networks to it. On the device configured as the router, you should see a routing instance created and RVIs corresponding to the networks assigned to that routing instance.
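        For example, assuming CLI access to the device, the routing instances pushed by the plugin can be inspected with:

        show configuration routing-instances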

    7. OpenStack Extension for Static Routes with Preference
      1. Introduction

        The static route extension provides a Horizon dashboard and a Neutron REST API for configuring static routes with a preference value. The Horizon dashboard is available at Project > Network > Routers > Static Routes.

      2. Supported Devices

        EX, QFX, SRX, and vSRX device families

      3. Configuring the Static Route Extension
        1. Update the Neutron configuration file /etc/neutron/neutron.conf
          service_plugins = neutron.services.juniper_l3_router.dmi.l3.JuniperL3Plugin
        2. Run the migrate_staticroutes script; this updates the Neutron database with the enhanced static route schema.
          python /usr/lib/python2.7/site-packages/neutron/common/juniper/migrate_staticroutes.py
        3. Enable Static Route dashboard
          1. CentOS

            cp /usr/lib/python2.7/site-packages/juniper_horizon_static_route/openstack_dasboard/enabled/_1441_project_routers_panel.py /usr/share/openstack-dashboard/openstack_dashboard/enabled/

          2. Ubuntu

            cp /usr/lib/python2.7/dist-packages/juniper_horizon_static_route/openstack_dashboard/enabled/_1441_project_routers_panel.py /usr/share/openstack-dashboard/openstack_dashboard/enabled/

        4. Restart Neutron and Horizon services

          Neutron-Server

          1. Ubuntu : service neutron-server restart
          2. Centos : systemctl restart neutron-server

          Apache (restarts Horizon)

          1. Ubuntu : service apache2 restart
          2. CentOS : systemctl restart httpd
      4. Verification

        From the OpenStack Dashboard, static routes can be added and deleted by OpenStack tenant as shown in the following screenshots.

        Figure 1: View Static routes

        Figure 2: Add a Static route

        Figure 3: Delete Static Routes
    8. FwaaS Plugin
      1. Introduction

        Juniper's Firewall-as-a-Service (FWaaS) plugin builds on top of Juniper's ML2 and L3 plugins. It enables Neutron to configure firewall rules and policies on SRX/vSRX devices. In OpenStack, a tenant can create a firewall and assign a security policy to it. A security policy is a collection of firewall rules. The relationship between these constructs is as follows:

        Firewall Rule: Defines the source address and port(s), destination address and port(s), protocol, and the action to be taken on the matching traffic.

        Firewall Policy: A collection of firewall rules.

        Firewall: The construct representing a firewall device.

        When the FWaaS plugin is enabled, the SRX/vSRX is expected to act as both a router and a firewall. The administrator must ensure this when setting up the topology.
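        As a hedged illustration of how these constructs relate, the rule, policy, and firewall can also be created from the FWaaS v1 neutron CLI of this OpenStack era; all names below are placeholders and option spellings may differ slightly between releases:

        neutron firewall-rule-create --name allow-http --protocol tcp --destination-port 80 --action allow
        neutron firewall-policy-create --firewall-rules "allow-http" web-policy
        neutron firewall-create --name tenant-fw web-policy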

      2. Supported Devices

        SRX and vSRX device families

      3. Plugin Configuration

        Before proceeding further, ensure that the following pre-requisites have been taken care of:

        • Topology setup is complete:
          • Devices have been added to the jnpr_devices table.
          • The compute NIC to physical network alias mapping has been added to the jnpr_nic_mapping table.
          • The compute-to-switch connectivity is captured in the jnpr_switchport_mapping table (needed for L2 VLAN orchestration).
        • The L2 plugin is set up (optional if using a third-party ML2 plugin).
        • The L3 plugin is set up to use the SRX/vSRX as the router.
        • Step 1: Configure Neutron to use Juniper’s FwaaS service plugin

          Update the Neutron configuration file /etc/neutron/neutron.conf and append the following to service_plugins:

          service_plugins = neutron.services.juniper_l3_router.dmi.l3.JuniperL3Plugin,neutron_fwaas.services.juniper_fwaas.dmi.fwaas.JuniperFwaaS
        • Step 2: Add the firewall to the topology

          jnpr_device add -d dns_name_OR_IP_address_of_the_device -c firewall -u root_user -p root_password

        • Step 3: Define the downlink trunk port on the SRX device on which the RVIs are created by the plugin.

          Update the plugin database with the port on the SRX device to which the Aggregation Switch is connected.

          jnpr_device_port add -d <SRX_device_name_or_IP> -p <port_on_the_SRX> -t <port_type>

          For example: jnpr_device_port add -d srx1 -p ge-0/0/1 -t Downlink

        • Step 4: Allocate the firewall to a tenant or as a default for all tenants. This concept will be covered in the next section.

          jnpr_allocate_device add -t project_id -d SRX/vSRX ip

          To allocate the firewall as a default to all the tenants who don’t have a firewall allocated to them use the below command:

          jnpr_allocate_device add -t default -d SRX/vSRX ip

        • Step 5: Enable Horizon to show Firewall panel

          To display the Firewall panel under the Networks group in the Horizon user interface, open /usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.py and, in the OPENSTACK_NEUTRON_NETWORK dictionary, set:

          'enable_firewall': True

        After you have completed the FWaaS plugin configurations, restart the following:

        • Neutron-Server
          • Ubuntu : service neutron-server restart
          • CentOS : systemctl restart neutron-server
        • Apache (restarts Horizon)
          • Ubuntu : service apache2 restart
          • CentOS : systemctl restart httpd
      4. Verification

        From the Horizon UI, create firewall rules and associate them with a policy. Then create a firewall and assign the routers and the firewall policy to it. On the SRX, you should see a firewall zone created for each routing instance and the corresponding policies pushed within the zones.

    9. Dedicated Perimeter Firewall
      1. Introduction

        Tenants may have different requirements with regard to performance and cost. Some tenants may require dedicated firewalls for better performance and compliance, whereas others may prefer a lower-cost solution enabled by sharing network resources. There can also be scenarios where a tenant requires full administrative access to a networking device in order to leverage the advanced services provided by the device. These factors require the cloud provider to have the ability to allocate dedicated or shared network resources to tenants.

        Juniper's FWaaS plugin addresses this problem and enables a service provider to allocate dedicated or shared resources (physical or virtual) to its tenants. This feature allows a service provider to create flavors of various network offerings for its tenants.

        As an example, a service provider can create flavors such as the following:

        • Economy : allocate a shared SRX/vSRX for a group of tenants
        • Silver : allocate dedicated SRX/vSRX per tenant with default specifications
        • Gold : allocate high-end SRX or vSRX

        An admin can dedicate an SRX/vSRX to a tenant or group of tenants. This procedure is transparent to the tenant and is done using the supplied CLI tools along with Juniper's Neutron plugin.

        Consider a scenario where a tenant requires a dedicated SRX cluster. The steps required to use this feature are as follows:

        1. Allocate the master SRX to the tenant using the command:

          jnpr_allocate_device add -t <tenant_project_id> -d <hostname_or_IP_of_the_device_being_allocated>

          admin@controller:~$ jnpr_allocate_device add -t e0d6c7d2e25943c1b4460a4f471c033f -d 10.20.30.40
          +----------------------------------+---------------+
          |            Tenant ID             |   Device IP   |
          +----------------------------------+---------------+
          | e0d6c7d2e25943c1b4460a4f471c033f | 10.20.30.40   |
          +----------------------------------+---------------+
          
        2. Define the VRRP cluster and assign it a name.

          jnpr_vrrp_pool add -d hostname/ip of device -p pool name to be assigned

          admin@controller:~$ jnpr_vrrp_pool add -d 10.20.30.40 -p tenant1_pool1
          +----------------------------------+-----------------+
          |            Device ID             |   VRRP POOL NAME|
          +----------------------------------+-----------------+
          | 10.20.30.40                      | tenant1_pool1   |
          +----------------------------------+-----------------+
          
          admin@controller:~$ jnpr_vrrp_pool add -d 10.20.30.41 -p tenant1_pool1
          +----------------------------------+-----------------+
          |            Device ID             |  VRRP POOL NAME |
          +----------------------------------+-----------------+
          | 10.20.30.41                      | tenant1_pool1   |
          +----------------------------------+-----------------+
          
          admin@controller:~$ jnpr_vrrp_pool list
          +---------------+----------------+
          |   Device ID   | VRRP POOL NAME |
          +---------------+----------------+
          | 10.20.30.40   | tenant1_pool1  |
          | 10.20.30.41   | tenant1_pool1  |
          +---------------+----------------+
          
    10. HA with VRRP
      1. Introduction

        FwaaS plugin supports HA via VRRP. In order to use this functionality, the admin needs to create a VRRP pool. Only one of the devices in the pool needs to be assigned to a tenant using the jnpr_allocate_device command.

        The below example illustrates the procedure to create a VRRP pool:

        admin@controller:~$ jnpr_vrrp_pool add -d 10.20.30.40 -p tenant1_pool1
        +----------------------------------+-----------------+
        |            Device ID             |   VRRP POOL NAME|
        +----------------------------------+-----------------+
        | 10.20.30.40                      | tenant1_pool1   |
        +----------------------------------+-----------------+
        
        admin@controller:~$ jnpr_vrrp_pool add -d 10.20.30.41 -p tenant1_pool1
        +----------------------------------+-----------------+
        |            Device ID             |   VRRP POOL NAME|
        +----------------------------------+-----------------+
        | 10.20.30.41                      | tenant1_pool1   |
        +----------------------------------+-----------------+
        
        admin@controller:~$ jnpr_vrrp_pool list
        +---------------+----------------+
        |   Device ID   | VRRP POOL NAME |
        +---------------+----------------+
        | 10.20.30.40   | tenant1_pool1  |
        | 10.20.30.41   | tenant1_pool1  |
        +---------------+----------------+
        

        Allocate the master SRX of the VRRP pool to the tenant using the command:

        jnpr_allocate_device add -t <tenant_project_id> -d <hostname_or_IP_of_the_device_being_allocated>

        admin@controller:~$ jnpr_allocate_device add -t e0d6c7d2e25943c1b4460a4f471c033f -d 10.20.30.40
        +----------------------------------+---------------+
        |            Tenant ID             |   Device IP   |
        +----------------------------------+---------------+
        | e0d6c7d2e25943c1b4460a4f471c033f | 10.20.30.40   |
        +----------------------------------+---------------+
        
    11. OpenStack Extension for Physical Topology
      1. Introduction

        The Physical Topology extension provides a dashboard for the OpenStack admin to manage physical network connections, that is, host NIC to switch port mappings. The physical topology API exposes these physical network connections.

        The Juniper Neutron plugin currently manages topology information via the jnpr_switchport_mapping CLI.

        admin@controller:~$ jnpr_switchport_mapping list
        +---------------+------+---------------+-----------+-----------+
        |      Host     | Nic  |     Switch    |    Port   | Aggregate |
        +---------------+------+---------------+-----------+-----------+
        | 10.107.65.101 | eth1 | 10.107.52.136 | ge-0/0/10 |           |
        | 10.107.65.102 | eth1 | 10.107.52.136 | ge-0/0/20 |           |
        | 10.107.65.105 | eth2 | 10.107.52.137 | ge-0/0/20 |           |
        | 10.107.65.106 | eth1 | 10.107.52.137 | ge-0/0/30 |           |
        | 10.108.10.100 | eth1 | 10.107.52.137 |  ge-0/0/5 |           |
        +---------------+------+---------------+-----------+-----------+
        
      2. Plugin Configuration

        To configure the Physical Topology Extension:

        1. On the OpenStack Controller, update the ML2 configuration file /etc/neutron/plugins/ml2/ml2_conf.ini as follows. (If this file has already been updated to enable the EVPN driver for VXLAN, skip this step.)
          [ml2]
          type_drivers = vlan
          tenant_network_types = vlan
          mechanism_drivers = openvswitch,juniper
        2. Update the Neutron configuration file /etc/neutron/neutron.conf:

          core_plugin = neutron.plugins.ml2.plugin_pt_ext.Ml2PluginPtExt

        3. Enable Physical topology dashboard
          1. CentOS

            cp /usr/lib/python2.7/site-packages/juniper_horizon_physical-topology/openstack_dasboard/enabled/_2102_admin_topology_panel.py /usr/share/openstack-dashboard/openstack_dashboard/enabled/

          2. Ubuntu

            cp /usr/lib/python2.7/dist-packages/juniper_horizon_physical-topology/openstack_dasboard/enabled/_2102_admin_topology_panel.py /usr/share/openstack-dashboard/openstack_dashboard/enabled/

        4. Restart Neutron and Horizon services:
          • Neutron-Server
            1. Ubuntu : service neutron-server restart
            2. CentOS : systemctl restart neutron-server
          • Apache (restarts Horizon)
            1. Ubuntu : service apache2 restart
            2. CentOS : systemctl restart httpd
      3. Verification

        After the installation the physical topology dashboard will be available at Admin > System > Physical Networks.

        From the Physical Networks dashboard, admin can view the topology connections, edit them, add them and delete them as shown in the following screenshots.

        Figure 4: View Physical topologies

        Figure 5: Add physical topology

        Figure 6: Add Topology from LLDP

        Figure 7: Edit physical topology

        Figure 8: Delete a physical topology

        Figure 9: Delete multiple physical topologies
    12. VPNaaS Plugin
      1. Introduction

        Juniper’s VPN-as-a-Service (VPNaaS) builds on top of Juniper’s L3 and FWaaS plugins. Use the plugin to configure Site-to-Site VPN on SRX and vSRX devices.

      2. Supported Devices

        SRX and vSRX device families

      3. Plugin Configuration

        Before proceeding further, ensure that the following pre-requisites have been taken care of:

        • Topology setup is complete:
          • Devices have been added to jnpr_devices table
          • Compute NIC – Physical network alias mapping is added to jnpr_nic_mapping table.
          • Compute–Switch connectivity is captured in jnpr_switchport_mapping table (needed for L2 VLAN orchestration)
        • L2 plugin is setup (Optional if using a 3rd Party ML2 plugin)
        • L3 plugin is setup to use the SRX/vSRX as the router
        • FwaaS plugin is setup [optional]
        • Step 1: Configure Neutron to use Juniper’s VPNaaS service plugin

          Update the Neutron configuration file /etc/neutron/neutron.conf and append the following to service_plugins:

          service_plugins = neutron.services.juniper_l3_router.dmi.l3.JuniperL3Plugin,neutron_fwaas.services.juniper_fwaas.dmi.fwaas.JuniperFwaaS,neutron_vpnaas.services.vpn.juniper.vpnaas.JuniperVPNaaS

          The below steps are optional if FWaaS plugin is already configured.

        • Step 2: Add the firewall to the topology

          jnpr_device add -d device_name -c firewall -u root_user -p root_password

        • Step 3: Define the downlink trunk port on the SRX device on which the RVIs are created by the plugin.

          Update the plugin database with the port on the SRX device to which the Aggregation Switch is connected.

          jnpr_device_port add -d <SRX_device_name_or_IP> -p <port_on_the_SRX> -t <port_type>

          For example: jnpr_device_port add -d srx1 -p ge-0/0/1 -t Downlink

        • Step 4: Allocate the firewall to a tenant or as a default for all tenants, as described in the Dedicated Perimeter Firewall section.

          jnpr_allocate_device add -t project_id -d SRX/vSRX ip

          To allocate the firewall as a default to all the tenants who don’t have a firewall allocated to them use the below command:

          jnpr_allocate_device add -t default -d SRX/vSRX ip

        After you have completed the VPNaaS plugin configuration, restart the following:

        • Neutron-Server
          • Ubuntu : service neutron-server restart
          • CentOS : systemctl restart neutron-server
        • Apache (restarts Horizon)
          • Ubuntu : service apache2 restart
          • CentOS : systemctl restart httpd
      4. Unsupported features in 3.0 Release

        The following VPNaaS features are either partially supported or not supported in the 3.0 release and will be addressed in a future release.

        • Only the Dead-Peer-Detection (DPD) disable action is supported. The other actions are not supported.
        • The DPD interval is supported; DPD is always set to optimized and the threshold is always set to 5. DPD timeout is not supported.
        • The initiator state is always set to bidirectional.
        • Admin state transitions are not supported. It is advised to delete and re-create the resource when needed.
        • SRX/vSRX supports only IPsec tunnel mode.
      5. Verification

        From the Horizon UI, create a VPN IPsec site connection along with its associated IKE policy, IPsec policy, and VPN service components. You should see the corresponding IKE, IPsec, and IKE gateway configurations pushed to the SRX/vSRX.
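        Alternatively, as a hedged sketch using the VPNaaS v1 neutron CLI of this era (all names, IDs, addresses, and the pre-shared key below are placeholders; option spellings may differ between releases):

        neutron vpn-ikepolicy-create ike-pol
        neutron vpn-ipsecpolicy-create ipsec-pol
        neutron vpn-service-create --name vpn1 <router-id> <subnet-id>
        neutron ipsec-site-connection-create --name site1 --vpnservice-id vpn1 --ikepolicy-id ike-pol --ipsecpolicy-id ipsec-pol --peer-address 203.0.113.10 --peer-id 203.0.113.10 --peer-cidr 192.168.100.0/24 --psk secret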

    13. VXLAN L3 Routing with EVPN
      1. Introduction

        The VXLAN EVPN ML2 plugin from Juniper Networks uses VXLAN tunnels along with the Neutron hierarchical port binding design to provide L2 networks in OpenStack.

        The default L3 service plugin in OpenStack implements the virtual router using Linux network namespaces.

        This release of the Neutron plugins from Juniper Networks adds support for L3 routing for VXLAN networks. This is done by creating a VTEP on MX and QFX10000 series devices to convert the VXLAN network to a VLAN-based network, and by configuring routing instances to route packets between these VLANs.

        This feature works in conjunction with Juniper's VXLAN EVPN ML2 plugin: the ML2 plugin provides L2 connectivity, while the VXLAN EVPN L3 service plugin provides L3 routing between the VXLAN-based virtual networks.

        Figure 10: BGP Peering between all the TORs and Router
      2. Supported Devices

        The VXLAN EVPN L3 service plugin can orchestrate MX and QFX10K devices to provide L3 routing between the VLAN-based networks.

        EVPN is supported on Junos OS version 14.2R6 on MX and QFX10K devices.

      3. Plugin configuration

        The EVPN L3 plugin depends on the VXLAN EVPN ML2 plugin for orchestration of layer 2 ports. Configuration of ML2 VXLAN EVPN plugin is a prerequisite for this plugin. Please refer to section “ML2 VXLAN with EVPN” to configure the ML2 VXLAN EVPN plugin.

        1. To configure the VXLAN EVPN L3 plugin, edit the /etc/neutron/neutron.conf file and update the service plugin definition as follows:
          [DEFAULT]
          …
          service_plugins = neutron.services.juniper_l3_router.evpn.l3.JuniperL3Plugin
          …
          
        2. Use the jnpr_device CLI to add the QFX10K or MX device as a physical router to the plugin's topology database:
          jnpr_device add -d <QFX10K_or_MX_IP> -c router -u root -p <root_password> -t <vtep_ip>
        3. Update the VLAN allocation range in /etc/neutron/plugins/ml2/ml2_conf.ini and add the per device VLAN range for the physical router. This can be done by adding the routing device’s IP followed by the VLAN range as shown in the example below:
          [ml2_type_vlan]
          network_vlan_ranges = <ToR_MGMT_IP_SWITCH1>:<vlan-start>:<vlan-end>,...,<MGMT_IP_ROUTER>:<vlan-start>:<vlan-end>

          Also make sure that the VXLAN range is configured with the correct VNI range

          [ml2_type_vxlan]
          vni_ranges = <vni-start>:<vni-end>
      4. Verification

        To verify that the EVPN L3 plugin is functioning properly, restart the neutron server and create a virtual router using the Horizon dashboard or the CLI. This should create a routing instance on the configured routing device (QFX10K/MX).

    14. EVPN Multi-homing
      1. Introduction

        To achieve network redundancy and load balancing, OpenStack nodes can be connected to more than one leaf switch in a VXLAN-EVPN network. The Juniper VXLAN-EVPN plugin provisions the multi-homed peer devices with an identical Ethernet Segment Identifier (ESI) and identical VLAN and VNI encapsulation details. This enables EVPN multi-homing functionality on the devices.

        OpenStack nodes can utilize all the multi-homed uplinks to send traffic. This provides load balancing as well as redundancy in case of failures. The uplink interface must be an aggregated interface.

        Figure 11: EVPN Multi-homing

        When more than one device connection is added for a particular OpenStack node via the jnpr_switchport_mapping CLI command, the node is assumed to be multi-homed. The interface must be an aggregated Ethernet interface. This triggers ESI ID generation, and the ESI is configured on the aggregated switch interfaces.

      2. Supported Devices and JUNOS Version

        Configuration of ML2 VXLAN EVPN plugin is a prerequisite for this plugin. Please refer to section “ML2 VXLAN with EVPN” to configure the ML2 VXLAN EVPN plugin.

        Additionally, the jnpr_switchport_mapping command produces the required physical topology name (derived from the ESI ID) and the bridge mapping details based on the topology inputs. These must be updated in the Open vSwitch agent configuration file of the OpenStack nodes connected to the switch.

        admin@controller:~$ jnpr_switchport_mapping add -H 10.206.44.116 -n eth3 -s 10.206.44.50 -p ae2
        +---------------+------+--------------+------+-----------+
        |      Host     | Nic  |    Switch    | Port | Aggregate |
        +---------------+------+--------------+------+-----------+
        | 10.206.44.116 | eth3 | 10.206.44.50 | ae2  |           |
        +---------------+------+--------------+------+-----------+
        =============================================================
        If you are using evpn driver, please update ovs l2 agent
        config file /etc/neutron/plugins/ml2/openvswitch_agent.ini on
        node 10.206.44.116 with bridge_mappings = 00000000010206044116:br-eth1
        

        The physical topology name and its VLAN ranges also need to be updated in the Neutron ML2 plugin configuration file ml2_conf.ini, as shown below.

        [ml2]
        type_drivers = flat,vlan,vxlan,vxlan_evpn
        tenant_network_types = vxlan_evpn
        mechanism_drivers = jnpr_vxlan_evpn,openvswitch
        #extension_drivers = port_security
        
        [ml2_type_vlan]
        network_vlan_ranges=10.206.44.50:10:1000,00000000010206044116:10:1000,10.206.44.56:10:1000
        
        [ml2_type_vxlan]
        vni_ranges = 10:5000
        
      3. Verification

        To verify that EVPN multi-homing is functioning properly, restart the neutron server, create networks, and launch VMs associated with those networks. The multi-homed VMs remain reachable when a redundant link is brought down.

    15. EVPN BMS support
      1. Introduction

        The Juniper VXLAN-EVPN plugin supports integration of a Bare Metal Server (BMS) into a VXLAN-EVPN network. A Bare Metal Server is able to communicate with OpenStack VMs when it is connected through an OpenStack network.

        This provides accessibility to traditional physical servers from OpenStack VMs. Based on the plugin configuration, a BMS can be integrated into a VLAN or VXLAN-EVPN network.

        Figure 12: EVPN BMS Support
      2. Supported Devices and JUNOS Version

        Juniper EVPN BMS functionality is supported on QFX5100 leaf devices running Junos OS version 14.1X53-D40.

      3. Plugin configuration

        Configuration of ML2 VXLAN EVPN plugin is a prerequisite for this plugin. Please refer to section “ML2 VXLAN with EVPN” to configure the ML2 VXLAN EVPN plugin.

        The BMS needs to be connected to the leaf device. In the OpenStack Horizon UI, select the OpenStack network to be linked with the BMS and provide the switch interface details. The plugin then provisions the necessary VLAN configuration on the device interface to connect the BMS to the OpenStack network.

        When the BMS MAC address is provided, an IP address is allocated by the OpenStack DHCP server. By enabling the DHCP client on the BMS interface, the allocated IP address can be obtained from OpenStack.
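
        For example, on a Linux BMS (the interface name eth1 is illustrative) the allocated address can be picked up as follows:

        root@bms:~# dhclient eth1
        root@bms:~# ip addr show eth1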

        Figure 13: EVPN BMS Plugin Configuration

        Enter the following commands to enable BMS UI on the Horizon dashboard:

        • On Ubuntu:
          sudo cp /usr/lib/python2.7/dist-packages/juniper_bms/openstack_dashboard/enabled/_50_juniper.py \
          /usr/share/openstack-dashboard/openstack_dashboard/enabled/
        • On CentOS:
          sudo cp /usr/lib/python2.7/site-packages/juniper_bms/openstack_dashboard/enabled/_50_juniper.py \
          /usr/share/openstack-dashboard/openstack_dashboard/enabled/

        Restart the Horizon dashboard.
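
        Horizon typically runs under Apache, so restarting the web server reloads the dashboard (commands assume the default packaging on each distribution):

        • On Ubuntu:
          sudo service apache2 restart
        • On CentOS:
          sudo systemctl restart httpd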

        On the Horizon dashboard, a new tab, Juniper > L2 Gateway, is displayed as shown below when the user logs in as admin.

        Figure 14: EVPN BMS Plugin Configuration

        Users can create an L2 Gateway by specifying the switch IP address (the switch must be present in the topology), the interface connecting the BMS, the OpenStack network it is to be connected to, and optionally the BMS MAC address (required for DHCP to allocate an IP address).

      4. Verification

        To verify, check whether the bare metal server is able to reach the OpenStack guest VMs.
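
        For example (the address is illustrative), ping a guest VM's fixed IP from the BMS over the shared network:

        root@bms:~# ping 192.168.10.5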

    16. Network Address Translation
      1. Introduction

        Network Address Translation (NAT) is a process for modifying the source or destination addresses in the headers of an IP packet while the packet is in transit. In general, the sender and receiver applications are not aware that the IP packets are being manipulated.

        In OpenStack, the external network provides Internet access for instances. By default, this network only allows Internet access “from” instances using Source Network Address Translation (SNAT). In SNAT, the NAT router modifies the IP address of the sender in IP packets. SNAT is commonly used to enable hosts with private addresses to communicate with servers on the public Internet.

        OpenStack enables Internet access “to” an instance using floating IPs. Floating IPs are not allocated to instances by default; cloud users need to explicitly “grab” them from the pool configured by the OpenStack administrator and then attach them to their instances. Floating IPs are implemented using Destination Network Address Translation (DNAT). In DNAT, the NAT router modifies the IP address of the destination in IP headers.

        1. SNAT – Internet Access from VM

          The following figure describes an OpenStack instance accessing the Internet.

          Figure 15: Source Network Address Translation - Internet Access from a Virtual Machine

          To enable Internet access from VMs, the network to which the VMs are connected must be attached to a router, and that router must have its gateway set to the external network created by the administrator. Juniper’s Neutron plugin configures SNAT on the router so that the source address of traffic from the VMs is translated to the address of the interface connected to the external network.
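
          A minimal sketch of this setup with the neutron CLI of the supported releases (the router, subnet, and external network names are illustrative):

          admin@controller:~$ neutron router-create tenant-router
          admin@controller:~$ neutron router-interface-add tenant-router tenant-subnet
          admin@controller:~$ neutron router-gateway-set tenant-router ext-net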

        2. DNAT – Internet Access to VM

          The following figure describes an OpenStack instance being accessed from the Internet.

          Figure 16: Destination Network Address Translation – Internet Access to a Virtual Machine

          To enable Internet access to a VM, a floating IP is allocated to the VM from a pool of IP addresses assigned to the tenant by the administrator. The floating IP should be a routable address. Juniper’s Neutron plugin configures the external-facing interface of the router to proxy ARP for this address and to perform DNAT for the floating IP of the VM.
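
          A corresponding sketch for allocating and associating a floating IP (the external network name is illustrative; the IDs are placeholders):

          admin@controller:~$ neutron floatingip-create ext-net
          admin@controller:~$ neutron floatingip-associate <floatingip-id> <vm-port-id>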

      2. Plugin Configuration
        1. Update the Neutron configuration file /etc/neutron/neutron.conf as follows (see the excerpt after these steps):
          service_plugins = neutron.services.juniper_l3_router.dmi.l3.JuniperL3Plugin
        2. Run the migrate_staticroutes script:
          python /usr/lib/python2.7/site-packages/neutron/common/migrate_staticroutes.py 
        3. Restart the Neutron service as follows:
          service neutron-server restart
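
        The resulting neutron.conf entry looks like the excerpt below (a sketch; service_plugins is a comma-separated list, so append the Juniper plugin to any existing value rather than replacing it):

          [DEFAULT]
          service_plugins = neutron.services.juniper_l3_router.dmi.l3.JuniperL3Plugin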
      3. Verification
        1. Configuring External Network Access

          To configure external network access:

          1. Create a network.
          2. Launch an instance on the network created.
          3. Create a router.
          4. Add the new network to the router.
          5. Set gateway of the router to the external network.
          6. Ping from VM to any IP in the external network.
        2. Configuring access to VM from External Network

          To configure access to a VM from an external network:

          1. Associate a floating IP from the floating IP pool of the external network to the instance that is created.
           2. Configure security rules in the security group to allow traffic from the external network, for example, ICMP ALLOW ALL for both ingress and egress traffic.
           3. You can now access the VM through the floating IP.
    17. Installation Scripts for Juniper Plugin
      1. Introduction

        The installation script for the Juniper OpenStack extensions is a self-descriptive, interactive tool that walks you through the installation of the Juniper OpenStack plugins.

        Before running the installation script, ensure the following preconditions are met:

        1. Password-less SSH authentication has been enabled between the controller and all other OpenStack nodes (see the example after this list)
        2. The keystonerc_admin file is present in the home directory
        3. Installation is performed from the controller, that is, the node on which the neutron server is running
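
        For the first precondition, password-less SSH can typically be set up from the controller as follows (the user and host names are illustrative):

        admin@controller:~$ ssh-keygen -t rsa
        admin@controller:~$ ssh-copy-id admin@compute-node-1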

        Currently, the following seven packages are provided:

        1. Horizon physical topology plugin
          • Provides Physical Networks dashboard for Administrator
        2. Horizon static route plugin
          • Provides Static Route with preference dashboard
        3. Horizon BMS plugin
          • Provides BMS dashboard
        4. Neutron plugin
          • Provides Neutron ML2 extension and service plugins
        5. Neutron FWaaS plugin
          • OpenStack Neutron plugin for FWaaS. It supports both SRX and vSRX devices.
        6. Neutron VPNaaS plugin
          • OpenStack Neutron plugin for VPNaaS. It supports both SRX and vSRX devices.
        7. Neutronclient plugin
          • Provides neutron CLI for Physical topology

        These seven packages can be classified into the following categories based on their functionality.

        1. Neutron server plugin package
          1. Neutron plugin
          2. Neutron FWaaS plugin
          3. Neutron VPNaaS plugin
        2. User Interface packages
          1. Horizon extensions
          2. CLI extensions

        The server plugin packages are installed on the OpenStack controller node where the Neutron server runs. The user interface packages are installed on the node running Horizon, while the CLI package can be installed on any node where the neutron client is installed.

        The provided script prompts for the required information and installs the plugins on the appropriate nodes.

    Limitations and Assumptions

    • Not tested in nested virtualization environments
    • Configuring networks in parallel from OpenStack may result in failures

    Troubleshooting

    The Juniper plugin logs are available on the Network node in files matching /var/log/neutron/juniper*.


    Glossary

    Table 3: Terminologies and Acronyms

    Term   Definition
    ML2    Neutron Modular Layer 2 plugin
