    Configuring Contrail on VMware ESXi

    Introduction

    A Contrail cluster of nodes consists of one or more servers, each configured to provide certain role functionality for the cluster, including control, config, database, analytics, web-ui, and compute. The cluster provides virtualized network functionality in a cloud computing environment, such as OpenStack. Typically, the servers running Contrail components use the Linux operating system and the KVM hypervisor.

    In Contrail Release 2.0 and later, limited capability is provided for extending the Contrail compute node functionality to servers running the VMware ESXi virtualization platform. To run Contrail on ESXi, a virtual machine is spawned on a physical ESXi server, and a compute node is configured on the virtual machine. For Contrail on ESXi, only compute node functionality is provided at this time.

    There are two methods for configuring and provisioning nodes in a Contrail cluster: using the Contrail Server Manager to automate provisioning or using fab (Fabric) commands. Both methods can also be used to configure Contrail compute nodes on VMware ESXi.

    Using Server Manager to Configure Contrail Compute Nodes on VMware ESXi

    The following procedure provides guidelines and steps used to configure compute nodes on ESXi when using the Contrail Server Manager.

    Using Server Manager to provision nodes for ESXi is similar to the procedure for provisioning nodes on the KVM hypervisor. However, because a compute node in an ESXi environment runs in a virtual machine on the ESXi server, there are additional items to identify in the setup. The following procedure describes how to configure the ESXi servers using Server Manager.

    For more details regarding the use and functionality of the Server Manager, refer to Using Server Manager to Automate Provisioning.

    Installation Guidelines Using Server Manager

    The following procedure provides guidelines for using the normal Server Manager installation process and applying it to an environment that includes compute nodes on one or more ESXi servers.

    1. Define the cluster and the cluster parameters. There are no additional cluster parameters needed for ESXi hosts.
    2. Define servers and configure them to be part of the cluster already defined. For a KVM-only environment, server entries are needed only for physical servers. However, when there are one or more ESXi servers in the cluster, there must be a server entry for each physical ESXi server and, in addition, an entry for a virtual machine on each ESXi server. The virtual machine is used to configure a Contrail compute node in OpenStack. Refer to the sample file following this procedure for examples of the additional entry and fields needed for ESXi servers and their virtual machines.
    3. Add images for Ubuntu and ESXi to the server manager database. These images are the base images for configuring the KVM and ESXi servers.
    4. Use the Server Manager add image command to add the Contrail package to be used to provision the Contrail nodes.
    5. In addition to the base OS (ESXi) for the ESXi server, also provide a modified Ubuntu Virtual Machine Disk (VMDK) image that is used to spawn the virtual machine on ESXi. The Contrail compute node functionality runs on the spawned virtual machine. The location of the VMDK image is provided as part of the parameters of the server entry that corresponds to the ESXi virtual machine.
    6. Add all additional configuration objects that are needed as described in the Server Manager installation instructions. See Using Server Manager to Automate Provisioning.
    7. When all configuration objects are created in the Server Manager database, use the reimage command, as described in Using Server Manager to Automate Provisioning, to boot the Ubuntu and ESXi hosts.
    8. Use the Server Manager provision command to provision the nodes on Ubuntu and ESXi hosts. During provisioning, the virtual machine is spawned on the ESXi server, using the VMDK image, then that node is configured as a compute node.
    9. When provisioning is complete, all of the nodes on the Ubuntu machine are up and operational. Because only compute node functionality is supported on the ESXi server, only its virtual machine is seen as one of the compute nodes in the OpenStack cluster.

    Example: JSON Files for Configuring with Server Manager

    The following example shows sample configuration parameters needed to configure ESXi hosts along with Ubuntu servers. Only compute node functionality is supported for ESXi.

    1. Create a cluster.json file for the cluster configuration. The following shows sample parameters to include.
      "cluster" : [
              {
      
                  "id" : "clusteresx",
      
                  "email" : "test@testco.net",
      
                  "parameters" : {
      
                      "router_asn": "64512",
      
                      "database_dir": "/home/cassandra",
      
                      "db_initial_token": "",
      
                      "openstack_mgmt_ip": "",
      
                      "use_certs": "False",
      
                      "multi_tenancy": "False",
      
                      "encapsulation_priority": "MPLSoUDP,MPLSoGRE,VXLAN",
      
                      "service_token": "contrail123",
      
                      "keystone_user": "admin",
      
                      "keystone_passwd": "contrail123",
      
                      "keystone_tenant": "admin",
      
                      "openstack_password": "contrail123",
      
                      "analytics_data_ttl": "168",
      
                      "compute_non_mgmt_ip": "",
      
                      "compute_non_mgmt_gway": "",
      
                      "haproxy": "disable",
      
                      "subnet_mask": "255.255.255.0",
      
                      "gateway": "10.204.217.254",
      
                      "password": "c0ntrail123",
      
                      "external_bgp": "",
      
                      "domain": "englab.juniper.net"
      
                      }
      
              }
      
          }
      
      }
    2. Add the cluster using the cluster.json file just created:

      server-manager add cluster -f cluster.json

    3. Create a server.json file for the server configuration.

      The following example shows sample parameters to include. ESXi-specific parameters are called out with inline annotations. Note that the Ubuntu server is configured with all of the Contrail role definitions, whereas the ESXi virtual machine is configured with the compute role only:

       "server": [
      
             {
      
                  "id": "nodea10",
      
                  "mac_address": "00:25:90:A5:3B:1A",
      
                  "ip_address": "10.204.216.48",
      
                  "parameters" : {
      
                      "interface_name": "eth1"
      
                  },
      
                  "roles" : ["config","openstack","control","compute","collector","webui","database"],
      
                  "cluster_id": "clusteresx",
      
                  "subnet_mask": "255.255.255.0",
      
                  "gateway": "10.204.216.254",
      
                  "password": "c0ntrail123",
      
                  "domain": "englab.juniper.net",
      
                  "ipmi_address": "10.207.25.23"
      
              },
      
               {
      
                  "id": "nodeh6",
      
                  "mac_address": "00:25:90:C8:F3:1C",
      
                  "ip_address": "10.204.217.110",
      
                  "parameters": {
      
                      "interface_name": "eth0",
      
                      "server_license": "",
      
                      "esx_nicname": "vmnic0"
      
                  },
      
                 "roles": [
      
       
      
                  ],
      
       
      
                  "cluster_id": "clusteresx",
      
                  "subnet_mask": "255.255.255.0",
      
                  "gateway": "10.204.217.254",
      
                  "password": "c0ntrail123",
      
                  "ipmi_address": "10.207.25.164",
      
                  "domain": "englab.juniper.net"
      
              },
      
              {
      
                  "id": "ContrailVM",
      
                  "host_name": "ContrailVM", <<<<<<<<< Provide a hostname for VM, otherwise the hostname "nodeb2-contrail-vm" is created and hardcoded.
      
                  "mac_address": "00:50:56:01:ba:ba",<<<<<<<< The mac_address should be in the range 00:50:56:*:*:*
      
                  "ip_address": "10.204.217.209",
      
                  "parameters": {
      
                      "interface_name": "eth0",
      
                      "esx_server": "nodeh6",
      
                      "esx_uplink_nic": "vmnic0",
      
                      "esx_fab_vswitch": "vSwitch0",
      
                      "esx_vm_vswitch": "vSwitch1",
      
                      "esx_fab_port_group": "contrail-fab-pg",
      
                      "esx_vm_port_group": "contrail-vm-pg",
      
                      "esx_vmdk": "/home/smgr_files/json/ContrailVM-disk1.vmdk",
      
                      "vm_deb": "/home/smgr_files/json/contrail-install-packages_1.10-34~havana_all.deb"
      
                  },
      
                  "roles": [
      
                      "compute"
      
                  ],
      
                  "cluster_id": "clusteresx",
      
                  "subnet_mask": "255.255.255.0",
      
                  "gateway": "10.204.217.254",
      
                  "password": "c0ntrail123",
      
                  "domain": "englab.juniper.net"
      
              }
      
          ]
      
      }
    4. Add the servers using the server.json file:

      server-manager add server -f server.json

    5. Create the image.json file to add the needed images (Ubuntu, ESXi, and Contrail Ubuntu package) to the Server Manager database.

      The following sample shows the parameters to include.

      {
          "image": [
              {
                  "id": "esx",
                  "type": "esxi5.5",
                  "version": "5.5",
                  "path": "/home/smgr_files/json/esx5.5_x86_64.iso"
              },
              {
                  "id": "Ubuntu-12.04.3",
                  "type": "ubuntu",
                  "version": "12.04.3",
                  "path": "/home/smgr_files/json/Ubuntu-12.04.3-server-amd64.iso"
              },
              {
                  "id": "contrail-uh-r110-b34",
                  "type": "contrail-ubuntu-package",
                  "version": "1.10-34",
                  "path": "/home/smgr_files/json/contrail-install-packages_1.10-34~havana_all.deb"
              }
          ]
      }
    6. Add the images.

      server-manager add image -f image.json

    7. Reimage nodea10 with Ubuntu 12.04.3 and reimage nodeh6 with ESXi 5.5.

      server-manager reimage --server_id nodea10 Ubuntu-12.04.3

      server-manager reimage --server_id nodeh6 esx

    8. Run the provision command to configure and provision all of the roles on nodea10.

      server-manager provision --server_id nodea10 contrail-uh-r110-b34

    9. Ensure that the DHCP server in Server Manager is configured to provide an IP address to the virtual machine that is spawned on ESXi as part of the next provisioning command. Modify the dhcp.template file on the Server Manager (Cobbler) machine so that it assigns an IP address to the virtual machine, keyed on the MAC address configured for that virtual machine, as in the sample entry below.
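
      The dhcp.template file uses ISC DHCP configuration syntax, so a host reservation along the following lines is typically all that is needed. This is a minimal sketch: the host label is illustrative, and the MAC and IP addresses reuse the ContrailVM values from the server.json example above.

      host contrail-vm {
          hardware ethernet 00:50:56:01:ba:ba;
          fixed-address 10.204.217.209;
      }
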
    10. Provision the ESXi server to configure and provision the compute role on the virtual machine that is created on the ESXi server.

      server-manager provision --server_id ContrailVM contrail-uh-r110-b34

      Note: Provisioning on ESXi is done by specifying the server ID that corresponds to the virtual machine on the ESXi server, not the ID of the ESXi server itself.

    Upon completion of provisioning, the two compute nodes can be seen in the OpenStack cluster. One of the compute nodes is on a physical Ubuntu server and the other compute node is on an ESXi virtual machine.
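
    One quick way to confirm this, sketched below, is to list the Nova compute services from the OpenStack node and to check the Contrail services on each compute node. The commands assume the OpenStack admin credentials are available on nodea10 (for example, by sourcing /etc/contrail/openstackrc, if present); the host names are the ones used in this example.

      # On the OpenStack node (nodea10), both compute hosts should appear:
      nova service-list

      # On each compute node (nodea10 and the ContrailVM), the vRouter services should be active:
      contrail-status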

    The system is now ready for you to create virtual networks, launch virtual machine instances, and use them to communicate with each other and with external networks.

    Using Fab Commands to Configure Contrail Compute Nodes on VMware ESXi

    In Contrail Release 2.0 and later, you can use fab (Fabric) commands to configure the VMware ESXi hypervisor as a Contrail compute node. Refer to Contrail Installation and Provisioning Roles for details of using fab commands for installation.

    Requirements Before You Begin

    The guidelines for using fab commands to configure a Contrail compute node on an ESXi server have the following prerequisites:

    • The testbed.py file must be populated with both ESXi hypervisor information and Contrail virtual machine information.
    • The ESXi hypervisor must be up and running with an appropriate ESXi version.

    Note: ESXi cannot be installed using fab commands in Contrail.

    Fab Installation Guidelines for ESXi

    Use the following guidelines when using fab commands to set up ESXi as a compute node for Contrail.

    1. Use the fab prov_esxi command to provision the ESXi with the required vswitches, port groups, and the contrail-compute-vm.

      The ESXi hypervisor information is provided in the esxi_hosts stanza, as shown in the following example.

      # Following are the ESXi hypervisor details.
      esxi_hosts = {
          'esxi_host1' : {
              # IP address of the hypervisor
              'ip': '10.204.216.35',
              # Username and password of the ESXi hypervisor
              'username': 'root',
              'password': 'c0ntrail123',
              # Uplink port of the hypervisor through which it is connected to the external world
              'uplink_nic': 'vmnic2',
              # vSwitch on which the above uplink exists
              'fabric_vswitch' : 'vSwitch0',
              # Port group on 'fabric_vswitch' through which the ContrailVM connects to the external world
              'fabric_port_group' : 'contrail-fab-pg',
              # vSwitch to which all OpenStack virtual machines are hooked
              'vm_vswitch': 'vSwitch1',
              # Port group on 'vm_vswitch', a member of all VLANs, through which the ContrailVM connects to all OpenStack VMs
              'vm_port_group' : 'contrail-vm-pg',
              # Links the 'host2' ContrailVM to the esxi_host1 hypervisor
              'contrail_vm' : {
                  'name' : 'ContrailVM2',         # Name for the contrail-compute-vm
                  'mac' : '00:50:56:aa:ab:ac',    # The VM's eth0 MAC address; the same address should be configured on the DHCP server
                  'host' : host2,                 # Host string for the VM, as specified in env.roledefs['compute']
                  'vmdk' : 'file.vmdk',           # Local path of the VMDK file
              },
          },
          # Another ESXi hypervisor entry follows
      }

      Note: The VMDK for contrail-compute-vm (ESXi-v5.5-Contrail-host-Ubuntu-precise-12.04.3-LTS.vmdk) can be downloaded from https://www.juniper.net/support/downloads/?p=contrail#sw

      Note: The contrail-compute-vm gets the IP address and its host name from the DHCP server.
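
      With the esxi_hosts stanza populated in testbed.py, run the provisioning task named in this step. The following is a minimal sketch that assumes the fab tasks are run from the /opt/contrail/utils directory of a standard Contrail installation.

      cd /opt/contrail/utils
      fab prov_esxi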

    2. The standard installation fab commands, such as fab install_pkg_all, fab install_contrail, and fab setup_all, are used to finish the setup of the entire cluster.

      For more information about using fab commands for installation, see Installing the Contrail Packages, Part Two (CentOS or Ubuntu) — Installing on the Remaining Machines.
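
      As a rough sketch of that sequence (run from /opt/contrail/utils as above), with a hypothetical local path for the Contrail install package:

      fab install_pkg_all:/tmp/contrail-install-packages_1.10-34~havana_all.deb
      fab install_contrail
      fab setup_all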

    Modified: 2017-04-06