
    Installing Contrail with VMware vCenter

    Overview: Integrating Contrail with vCenter Server

    Starting with Contrail Release 2.20, it is possible to install Contrail to work with the VMware vCenter Server in various vSphere environments.

    This topic describes how to install and provision Contrail Release 2.20 and later so that it works with existing or already provisioned vSphere deployments that use VMware vCenter as the main orchestrator.

    The Contrail VMware vCenter solution consists of the following two main components:

    1. Control and management, which runs the following components, as needed, per Contrail system:
      1. An independent VMware vCenter Server installation that is not managed by Juniper Contrail. The Contrail software provisions vCenter with Contrail components and creates the entities required to run Contrail.
      2. The Contrail controller, including the configuration nodes, control nodes, analytics, database, and Web UI, which is installed, provisioned, and managed by the Contrail software.
      3. A VMware vCenter plugin, provided with Contrail, that typically resides on the Contrail configuration node.
    2. VMware ESXi virtualization platforms forming the compute cluster, with the Contrail data plane (vRouter) components running inside an Ubuntu-based virtual machine. This virtual machine, named ContrailVM, takes on the compute personality during Contrail installation. The ContrailVM is set up and provisioned by Contrail. There is one ContrailVM running on each ESXi host.

    The following figure shows various components of the Contrail VMware vCenter solution.

    Figure 1: Contrail VMware vCenter Solution


    Installation of a Contrail Integration with VMware vCenter

    This section lists the basic installation procedure and the assumptions and prerequisites necessary before starting the installation of any VMware vCenter Contrail integration.

    Installation: Assumptions and Prerequisites

    The following assumptions and prerequisites are required for a successful installation of a VMware vCenter Contrail integrated system.

    1. VMware vCenter Server version 5.5 is installed and running on Windows.
    2. A cluster of ESXi hosts, running VMware ESXi version 5.5, is managed by the vCenter Server.
    3. The recommended hardware and virtual machines needed to run the Contrail controller are available. The recommended minimum for a high availability capable deployment is three nodes.
    4. The software installation packages are downloaded:

       a) “.deb” file of Contrail install packages

       b) VMDK image of ContrailVM

    5. Because the Contrail vRouter runs as a virtual machine (the ContrailVM) on each of the ESXi hosts, each ContrailVM needs an IP address assigned from the same underlay network as its host. These addresses must be specified in the testbed.py file.

    Basic Installation Steps

    Before beginning the installation, familiarize yourself with the following basic installation steps for vCenter with Contrail.

    1. Spawn a ContrailVM on every ESXi compute host. Set up each ESXi ContrailVM with the resources needed prior to Contrail installation.
    2. Install Contrail with all necessary roles defined.
    3. Install the vCenter plugin on the Contrail config nodes.
    4. Provision the Contrail controller nodes and the vCenter plugin.
    5. Provision vCenter with the necessary state (entities) so that the data path in the ESXi hosts operates in the manner required for the Contrail integration.

    Software Images Distributed for Installation

    The following are the Contrail software image types required for installing a VMware vCenter Server Contrail integrated system:

    1. Debian *.deb package for Contrail installation components
    2. Contrail Virtual Machine Disk (VMDK) image for the ContrailVM
    3. Debian *.deb package for the Contrail vCenter plugin

    Preparing the Installation Environment

    Use the standard Contrail installation procedure to install Contrail on one of the target boxes or servers, so that Fabric (fab) scripts can be used to install and provision the entire cluster.

    Follow the steps in the Installing Contrail Packages for CentOS or Ubuntu section in Installing the Contrail Packages, Part One (CentOS or Ubuntu).

    Note: The fab scripts require a file named testbed.py, which holds all of the key attributes for the provisioning, including the IP addresses of the Contrail roles. Ensure that the testbed.py file is updated with the correct parameters for your system, including the parameters that define the vCenter participation. Refer to the sample testbed.py file for Contrail vCenter provided in this topic.

    At the end of the procedure, ensure that the Contrail install Debian package is available on all of the controller nodes.

    If your system has multiple controller nodes, you might also need to run the following command.

    fab install_pkg_all:<Contrail deb package>
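
    For example, if the Contrail install package was copied to /tmp on the node from which you run the fab commands, the command might look like the following; the path and file name are placeholders, so use the actual name of your downloaded package:

    fab install_pkg_all:/tmp/contrail-install-packages.deb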

    Installing the Contrail for vCenter Components

    Use the steps in this section to install the Contrail for vCenter components. Refer to the sample testbed.py file for Contrail vCenter for specific examples.

    Step 1: Ensure all information in the esxi_hosts section of the testbed.py file is accurate.

    The esxi_hosts = { } section of the testbed.py file provides the information used to spawn the ContrailVM from the bundled VMDK file.

    Ensure that all required information in the section is specific to your environment and that the VMDK file can be accessed by the machine running the fab task.

    If the IP address and the corresponding MAC address of the Contrail VM are statically mapped in the DHCP server, specify the static IP address in the host field and the MAC address in the mac field in the contrail_vm subsection.
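
    The following minimal sketch shows only the relevant contrail_vm fields; the host key, the IP address placeholder, and the MAC value (taken from the sample at the end of this topic) are illustrative and must be replaced with values from your environment:

    esxi_hosts = {
        'esxi1': {                                  # placeholder host key
            # ... other ESXi host fields ...
            'contrail_vm': {
                'mac': "00:50:56:05:ba:ba",         # MAC statically mapped in the DHCP server
                'host': "host@<static ip address>", # static IP assigned to that MAC
                # ... other contrail_vm fields ...
            },
        },
    }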

    Provision the ESXi hosts using the following command:

    fab prov_esxi:<contrail deb package path>

    When finished, ping each of the Contrail VMs to make sure they respond.

    Step 2: Ensure the IP addresses for the controller and compute roles correctly point to the controller nodes and the ContrailVMs (on the ESXi hosts).

    Specify the orchestrator to be vCenter for proper provisioning of vCenter related components, as in the following:

    env.orchestrator = 'vcenter'

    Run:

    fab setup_vcenter

    When finished, verify that you can see the ESXi hosts and ContrailVMs in the vCenter user interface. This step also creates the DVSwitch and the port groups required for proper functioning with Contrail components.

    Step 3: Install the vCenter plugin on all of the Contrail config nodes:

    fab install_contrail_vcenter_plugin:<contrail-vcenter-plugin deb package>

    Step 4: Install the Contrail components into the desired roles on the specified nodes, using the following command:

    fab install_contrail

    SR-IOV-Passthrough-for-Networking Setup

    If you are using an SR-IOV-Passthrough-for-Networking device with your Contrail setup, one additional change is necessary.

    In the testbed.py file, an optional uplink parameter is provided under the contrail_vm section. Use uplink to identify the PCI ID of the SR-IOV-Passthrough device.
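
    For example, the relevant fragment of the contrail_vm subsection (inside the esxi_hosts stanza) might look like the following sketch; the PCI ID shown is the one used in the sample at the end of this topic and is only a placeholder for your device's ID:

    'contrail_vm': {
        'uplink': '04:10.1',   # PCI ID of the SR-IOV or passthrough NIC (placeholder value)
        # ... other contrail_vm fields ...
    },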

    Provisioning

    Provisioning performs the steps required to create an initial state on the system, including database and other changes. After performing provisioning, the vCenter is set up with a datacenter that has a host cluster, a distributed vSwitch, and a distributed port group.

    Run the following commands to provision all of the Contrail components, along with vCenter and the vcenter-plugin.

    cd /opt/contrail/utils
    fab setup_all

    Verification

    When the provisioning completes, run the contrail-status command to view a health check of the Contrail configuration and control components. See the following example:

    contrail-status

    == Contrail Control ==
    supervisor-control:           active
    contrail-control              active
    contrail-control-nodemgr      active
    contrail-dns                  active
    contrail-named                active

    == Contrail Analytics ==
    supervisor-analytics:         active
    contrail-analytics-api        active
    contrail-analytics-nodemgr    active
    contrail-collector            active
    contrail-query-engine         active

    == Contrail Config ==
    supervisor-config:            active
    contrail-api:0                active
    contrail-config-nodemgr       active
    contrail-device-manager       active
    contrail-discovery:0          active
    contrail-schema               active
    contrail-svc-monitor          active
    contrail-vcenter-plugin       active
    ifmap                         active

    == Contrail Database ==
    supervisor-database:          active
    contrail-database             active

    == Contrail Support Services ==
    supervisor-support-service:   active
    rabbitmq-server               active
    
    

    Check the vRouter status by logging in to the ContrailVM and running contrail-status. The following is a sample of the output.

    == Contrail vRouter ==
    supervisor-vrouter:           active
    contrail-vrouter-agent        active
    contrail-vrouter-nodemgr      active

    Add ESXi Host to vCenter Cluster

    It is possible to provision and add an ESXi host to an existing vCenter cluster.

    Add an ESXi host by using the following commands. These commands spawn the compute virtual machine on the ESXi host, install and set up the Contrail roles, and add the ESXi host to the vCenter cluster and the distributed switch.

    fab prov_esxi:esxi_host (esxi_host as specified in the esxi_hosts{} stanza in testbed.py)
    fab install_pkg_node:<contrail-deb>,root@ContrailVM-ip (ContrailVM-ip as specified by 'host' in the contrail_vm stanza of testbed.py)
    fab add_esxi_to_vcenter:esxi_host
    fab add_vrouter_node:root@ContrailVM-ip
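
    For example, for an ESXi host defined under the key esxi1 in testbed.py, with a ContrailVM address of 10.1.1.11 and the Contrail package copied to /tmp (all hypothetical values), the sequence would be:

    fab prov_esxi:esxi1
    fab install_pkg_node:/tmp/contrail-install-packages.deb,root@10.1.1.11
    fab add_esxi_to_vcenter:esxi1
    fab add_vrouter_node:root@10.1.1.11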

    Deployment Scenarios

    vCenter Blocks in the testbed.py File

    Populate the testbed.py file with the new stanzas (env.vcenter and esxi_hosts).

    The env.vcenter stanza contains information regarding the vCenter server, the datacenter within the vCenter, and the clusters present underneath it.

    Contrail requires a DVSwitch and a corresponding trunk port group to be configured inside the datacenter for internal use, so that information is also provided in the env.vcenter section.
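
    A minimal sketch of those keys, using the names from the sample testbed.py at the end of this topic (replace them with the names configured in your datacenter):

    env.vcenter = {
        # ... server, credentials, datacenter, and cluster keys ...
        'dv_switch': {'dv_switch_name': 'kd_dvswitch'},    # internal DVSwitch used by Contrail
        'dv_port_group': {
            'dv_portgroup_name': 'kd_dvportgroup',         # trunk port group on that DVSwitch
            'number_of_ports': '3',
        },
    }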

    The esxi_hosts stanza enumerates information about all of the ESXi hosts present within the datacenter. For each host, in addition to the access information for the host (IP address, password, datastore, and so on), you must provide the attributes needed for its ContrailVM, for example, the MAC address, the IP address, and the path used to access the VMDK while creating the VM.

    The vmdk key provides the absolute local path of the VMDK (on the build or target machine where the installation is being run).

    Alternatively, use the vmdk_download_path key to provide a remote path (URL) from which the VMDK can be downloaded.
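
    The following sketch shows the two alternatives inside the contrail_vm subsection; specify one or the other. The local path and URL shown are placeholders, with the URL format taken from the sample at the end of this topic:

    'contrail_vm': {
        # either an absolute local path to the VMDK ...
        'vmdk': "/<local path>/ContrailVM-disk1.vmdk",
        # ... or a remote location from which it can be downloaded
        # 'vmdk_download_path': "http://<server>/vmware/vmdk/ContrailVM-disk1.vmdk",
    },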

    Parameterizing the Underlay Connection for Contrail VM

    The ContrailVM has two networks: the underlay network, which carries traffic in and out of the host, and the trunk port group network on the internal DVSwitch (created by specifying the dv_switch key inside the env.vcenter stanza), which is used to talk to the tenant VMs on the host.

    You can create the underlay connection to the ContrailVM in the following ways; a configuration sketch covering each option follows this list:

    1. Through the standard switch. This is the default. It creates a standard switch with the name vSwitch0 and a port group named contrail-fab-pg. This connection can also be stitched through any other standard switch by explicitly specifying the fabric_vswitch and fabric_port_group values under the esxi_hosts stanza.
    2. Through a Distributed Virtual Switch, possibly with LAG. If LAG support is needed for the underlay connection into the ContrailVM, use the DVSwitch keys: add the dv_switch_fab and dv_port_group_fab values under the env.vcenter section. Note that this is a vCenter-level resource, so it must be configured at that level.
    3. Passthrough NIC. Use the uplink specification inside the contrail_vm section with the ID of the NIC that has been configured beforehand as passthrough.
    4. SR-IOV. Use the uplink specification inside the contrail_vm section with the ID of the NIC that has been configured beforehand as SR-IOV.
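
    The following sketch summarizes where each option is configured; the switch, port group, LAG, and PCI ID values are placeholders drawn from the sample testbed.py later in this topic:

    # Option 1 (default): standard switch; override the defaults per ESXi host if needed
    esxi_hosts = {
        'esxi1': {
            'fabric_vswitch': 'vSwitch0',             # default standard switch
            'fabric_port_group': 'contrail-fab-pg',   # default port group
            'contrail_vm': {
                # Options 3 and 4: passthrough or SR-IOV NIC, identified by its PCI ID
                # 'uplink': '04:10.1',
            },
        },
    }

    # Option 2: Distributed Virtual Switch (with LAG, if required), defined at the vCenter level
    env.vcenter = {
        # ... other vCenter keys ...
        'dv_switch_fab': {'dv_switch_name': 'dvs-lag'},
        'dv_port_group_fab': {
            'dv_portgroup_name': 'contrail-fab-pg',
            'number_of_ports': '3',
            'uplink': 'lag1',
        },
    }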

    Federation Between VMware and KVM Using Two Contrail Controllers

    Note the following considerations:

    • To ensure that the two controllers become BGP peers, configure the same value for router_asn in both clusters before running the BGP peer provisioning.
    • When creating networks, use the same value for the route target so that BGP can copy the routes from a network in one controller instance to another.
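
    As a minimal illustration, the testbed.py file for the vCenter cluster and the testbed.py file for the KVM cluster would both carry the same ASN; the value below is only an example:

    # Same value in both clusters' testbed.py files
    router_asn = 64512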

    Sample Testbed.py for Contrail vCenter

    The following is a sample testbed.py file for a Contrail vCenter system. The file begins with the standard import statement:

    from fabric.api import env

    #Management ip addresses of hosts in the cluster
    host1 = 'host@<ip address>'
    host2 = 'host@<ip address>'
    host3 = 'host@<ip address>'
    host4 = 'host@<ip address>'

    #External routers if any
    #for eg.
    #ext_routers = [('mx1', '<ip address>')]
    ext_routers = []

    #Autonomous system number
    router_asn = <asn>

    #Host from which the fab commands are triggered to install and provision
    host_build = 'host@<ip address>'

    env.orchestrator = 'vcenter'

    #Role definition of the hosts.
    env.roledefs = {
        'all': [host1, host2, host3, host4],
        'cfgm': [host1, host2, host3],
        'control': [host1, host2, host3],
        'compute': [host4],
        'collector': [host1, host2, host3],
        'webui': [host1],
        'database': [host1, host2, host3],
        'build': [host_build],
        'storage-master': [host1],
        'storage-compute': [host4],
        # 'vgw': [host4, host5], # Optional, only to enable VGW. Only compute can support vgw
        # 'backup': [backup_node],  # only if the backup_node is defined
    }

    env.hostnames = {
        'all': ['a0s1', 'a0s2', 'a0s3', 'a0s4', 'a0s5', 'a0s6', 'a0s7', 'a0s8', 'a0s9', 'a0s10', 'backup_node']
    }
    
    
    
    env.password = '<password>'
    #Passwords of each host
    env.passwords = {
        host1: '<password>',
        host2: '<password>',
        host3: '<password>',
        host4: '<password>',
        # backup_node: 'secret',
        host_build: '<password>',
    }

    #For reimage purpose
    env.ostypes = {
        host1: 'ubuntu',
        host2: 'ubuntu',
        host3: 'ubuntu',
        host4: 'ubuntu',
    }
    
    
    
    #######################################
    #vcenter provisioning
    #server is the vcenter server ip
    #port is the port on which vcenter is listening for connection
    #username is the vcenter username credentials
    #password is the vcenter password credentials
    #auth is the authentication type used to talk to vcenter, http or https
    #datacenter is the datacenter name we are operating on
    #cluster is the cluster name we are operating on
    #ipfabricpg is the ip fabric port group name
    #       if unspecified, the default port group name is contrail-fab-pg
    #dv_switch_fab section contains distributed switch related params for the fab network
    #       dv_switch_name
    #dv_port_group_fab section contains the distributed port group info for the fab network
    #       dv_portgroup_name and the number of ports the group has
    #dv_switch section contains distributed switch related params
    #       dv_switch_name
    #dv_port_group section contains the distributed port group info
    #       dv_portgroup_name and the number of ports the group has
    ######################################
    env.vcenter = {
        'server': '<ip address>',
        'port': '443',
        'username': 'administrator@vsphere.local',
        'password': '<password>',
        'auth': 'https',
        'datacenter': 'kd_dc',
        'cluster': 'kd_cluster',
        'ipfabricpg': 'kd_ipfabric_pg',
        'dv_switch_fab': {'dv_switch_name': 'dvs-lag'},
        'dv_port_group_fab': {
            'dv_portgroup_name': 'contrail-fab-pg',
            'number_of_ports': '3',
            'uplink': 'lag1',
        },
        'dv_switch': {'dv_switch_name': 'kd_dvswitch'},
        'dv_port_group': {
            'dv_portgroup_name': 'kd_dvportgroup',
            'number_of_ports': '3',
        },
    }
    
    #######################################
    # The compute vm provisioning on the ESXI host
    # This section is used to copy a vmdk on to the ESXI box and bring it up.
    # The contrailVM which comes up will be set up as a compute node with only
    # vrouter running on it. Each host has an associated esxi to it.
    #
    # esxi_host information:
    #    ip: the esxi ip on which the contrailvm (host/compute) runs
    #    username: username used to login to esxi
    #    password: password for esxi
    #    fabric_vswitch: the name of the underlay vswitch that runs on esxi
    #                    optional, defaults to 'vSwitch0'
    #    fabric_port_group: the name of the underlay port group for esxi
    #                       optional, defaults to 'contrail-fab-pg'
    #    uplink_nic: the nic used for underlay
    #                optional, defaults to None
    #    data_store: the datastore on esxi where the vmdk is copied to
    #    cluster: the cluster to which this esxi needs to be added
    #    contrail_vm information:
    #        uplink: the SRIOV or passthrough PCI Id (for example, 04:10.1). If not provided,
    #                defaults to a vmxnet3-based fabric uplink
    #        mac: the virtual mac address for the contrail vm
    #        host: the contrail_vm ip in the form of 'user@contrailvm_ip'
    #        vmdk: the absolute path of the contrail-vmdk used to spawn the vm
    #              optional, if vmdk_download_path is specified
    #        vmdk_download_path: download path of the contrail-vmdk.vmdk used to spawn the vm
    #                            optional, if vmdk is specified
    #        deb: absolute path of the contrail package to be installed on the contrailvm
    #             optional, if the contrail package is specified on the command line
    ##############################################
    #esxi_hosts = {
    #    'esxi': {
    #        'ip': '1.1.1.1',
    #        'username': '<name>',
    #        'password': '<password>',
    #        'cluster': 'kd_cluster1',
    #        'datastore': "/vmfs/volumes/ds1",
    #        'contrail_vm': {
    #            'mac': "00:50:56:05:ba:ba",
    #            'uplink': '04:10.1',
    #            'host': "host@<ip address>",
    #            'vmdk_download_path': "http://10.84.5.100/vmware/vmdk/ContrailVM-disk1.vmdk",
    #        }
    #    }
    #}
    
    

    User Interfaces for Configuring Features

    The Contrail integration with VMware vCenter provides two user interfaces for configuring and managing features for this type of Contrail system.

    Refer to Using the Contrail and VMWare vCenter User Interfaces to Configure and Manage the Network.
