
    Installing and Provisioning VMware vCenter with Contrail

    Overview: Integrating Contrail with vCenter Server

    This topic describes how to install and provision Contrail Release 2.20 and later so that it works with existing or already provisioned vSphere deployments that use VMware vCenter as the main orchestrator.

    The Contrail VMware vCenter solution comprises the following main components:

    • A control and management layer that runs the following components, as needed, per Contrail system:
      • A VMware vCenter Server independent installation that is not managed by Juniper Networks Contrail. The Contrail software provisions vCenter with Contrail components and creates entities required to run Contrail.
      • The Contrail controller, including the configuration nodes, control nodes, analytics, database, and Web UI, which are installed, provisioned, and managed by Contrail software.
      • A VMware vCenter plugin provided with Contrail.
    • VMware ESXi virtualization platforms forming the compute cluster, with Contrail data plane (vRouter) components running inside an Ubuntu-based virtual machine. The virtual machine, named ContrailVM, provides the compute personality during Contrail installation. The ContrailVM is set up and provisioned by Contrail. There is one ContrailVM running on each ESXi host.

    Different Modes of vCenter Integration with Contrail

    The vCenter integrated Contrail solution has the following modes:

    • vCenter-only
    • vCenter-as-compute

    vCenter-Only Mode

    In the vCenter-only mode, vCenter is the main orchestrator, and Contrail is integrated with vCenter for the virtual networking.

    Figure 1 shows the Contrail vCenter-only solution.

    Figure 1: Contrail vCenter-Only Solution

    Contrail vCenter-Only Solution

    vCenter-as-Compute Mode

    In the vCenter-as-compute mode, OpenStack is the main orchestrator, and the vCenter cluster, along with the managed ESXi hosts, acts as a nova-compute node to the OpenStack orchestrator.

    Figure 2 shows the Contrail vCenter-as-compute solution.

    Figure 2: Contrail vCenter-as-Compute Solution

    Contrail vCenter-as-Compute Solution

    Preparing the Installation Environment

    Use the standard Contrail installation procedure to install Contrail on one of the target servers, so that Fabric (fab) scripts can be used to install and provision the entire cluster.

    Follow the steps in the Installing Contrail Packages for Ubuntu section in Installing the Contrail Packages, Part One (CentOS or Ubuntu).

    Note: The fab scripts require a file named testbed.py, which holds all of the key attributes that fab needs to begin provisioning, including the IP addresses of the Contrail roles. Refer to the sample testbed.py file for Contrail vCenter in Sample Testbed.py Files for Contrail vCenter.
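    As a rough illustration of what such a file contains, the sketch below declares two hosts and maps Contrail roles to them. The attribute names and addresses are assumptions for this sketch; take the exact format from the sample files in Sample Testbed.py Files for Contrail vCenter. In a real testbed.py these dictionaries are assigned to Fabric's env object (from fabric.api import env); plain names keep the sketch standalone.

```python
# Illustrative testbed.py skeleton; field names are assumptions --
# consult Sample Testbed.py Files for Contrail vCenter for the exact
# format required by your Contrail release.

host1 = 'root@10.1.1.10'   # controller: config, control, analytics, database, webui
host2 = 'root@10.1.1.11'   # ContrailVM on an ESXi host (compute)

# Role-to-host mapping; assigned to env.roledefs in a real testbed.py.
roledefs = {
    'all':       [host1, host2],
    'cfgm':      [host1],
    'control':   [host1],
    'collector': [host1],
    'database':  [host1],
    'webui':     [host1],
    'compute':   [host2],
}

# Login credentials per host (placeholders); assigned to env.passwords.
passwords = {
    host1: 'host1-password',
    host2: 'host2-password',
}
```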

    Installation for vCenter-Only Mode

    This section lists the basic installation procedure and the assumptions and prerequisites necessary before starting the installation of any VMware vCenter Contrail integration.

    Note: To ensure that you are using the correct versions of all software for your specific system, refer to the Supported Platforms section in the release notes for your release of Contrail.

    Installation: Assumptions and Prerequisites

    The following assumptions and prerequisites are required for a successful installation of a VMware vCenter Contrail integrated system:

    • VMware vCenter Server
    • A cluster of VMware ESXi hosts
    • The following software installation packages:
      • The contrail-install-packages_x.x.x.x.~vcenter_all.deb package for Ubuntu 14.04
      • VMDK image of ContrailVM
    • Because a Contrail vRouter runs as a virtual machine on each ESXi host, it needs an IP address assigned from the same underlay network as the host, all of which must be specified appropriately in the testbed.py file. Refer to the section Underlay Network Configuration for ContrailVM for ContrailVM IP fabric connectivity.

    Installing the vCenter-Only Components

    Follow the steps in this section to install the Contrail for vCenter-only components. Refer to the sample testbed.py file for Contrail vCenter for specific examples: Sample Testbed.py Files for Contrail vCenter.

    1. Ensure that all information in the esxi_hosts section of the testbed.py file is accurate, then provision the ESXi hosts using the following command:

      fab prov_esxi

      The fab task uses the esxi_hosts = { } section of the testbed.py file to spawn the ContrailVM from the bundled VMDK file.

      • Ensure that all required information in the section is specific to your environment and that the VMDK file can be accessed by the machine running the fab task.
      • If the IP address and the corresponding MAC address of the ContrailVM are statically mapped in the DHCP server, specify the static IP address in the host field and the MAC address in the mac field in the contrail_vm subsection.
      • ContrailVM IP fabric connectivity can be configured in various ways; refer to the section Underlay Network Configuration for ContrailVM for details.

      When finished, ping each of the ContrailVMs to make sure they are reachable.
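      A hedged sketch of what an esxi_hosts entry can look like is shown below. The key names, addresses, and paths are illustrative assumptions; verify them against Sample Testbed.py Files for Contrail vCenter before provisioning.

```python
# Illustrative esxi_hosts section of testbed.py; key names are
# assumptions -- confirm against the sample testbed.py files.
esxi_hosts = {
    'esxi-1': {                              # one entry per ESXi host
        'ip': '10.1.1.20',                   # ESXi management IP address
        'username': 'root',
        'password': 'esxi-password',         # placeholder credentials
        'contrail_vm': {
            # With a static DHCP mapping, put the static IP address in
            # 'host' and the matching MAC address in 'mac'.
            'host': 'root@10.1.1.30',
            'mac': '00:50:56:aa:bb:01',
            # Path to the bundled VMDK; must be reachable by the
            # machine running the fab task.
            'vmdk': '/images/ContrailVM-disk1.vmdk',
        },
    },
}
```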

    2. Set up vCenter.

      fab setup_vcenter

      Specify the orchestrator to be vCenter for proper provisioning of vCenter-related components, as in the following:

      env.orchestrator = 'vcenter'

      When finished, verify that you can see the ESXi hosts and ContrailVMs in the vCenter user interface. For details, refer to Using the Contrail and VMware vCenter User Interfaces to Manage the Network.
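      Beyond the orchestrator setting, the testbed.py file must describe the vCenter server that fab setup_vcenter provisions. The sketch below is a hedged illustration; the key names and values are assumptions, so confirm the exact format against Sample Testbed.py Files for Contrail vCenter.

```python
# Illustrative vCenter server definition for testbed.py; key names are
# assumptions. In a real testbed.py, orchestrator is assigned to
# env.orchestrator and the credentials are real (placeholders here).
orchestrator = 'vcenter'

vcenter_servers = {
    'vcenter1': {
        'server': '10.1.1.5',                       # vCenter Server IP address
        'username': 'administrator@vsphere.local',  # placeholder credentials
        'password': 'vcenter-password',
        'datacenter': 'contrail-dc',                # where Contrail creates entities
        'cluster': ['contrail-cluster'],
        'dv_switch': {'dv_switch_name': 'contrail-dvs'},
    },
}
```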

    3. Ensure that the Contrail Debian package is available on all the nodes.

      fab install_pkg_all:<Contrail deb package>

    4. Install the Contrail components into the desired roles on the specified nodes.

      fab install_contrail

      This also installs the vCenter plugin on the Contrail config nodes.

    5. Set up the management and control data interfaces. Perform this step ONLY if the management and control_data interfaces are separate.

      fab setup_interface_node

    6. Provision all of the Contrail components and the vCenter plugin.

      fab setup_all

      This step also creates the required configuration files on the system.

    Installation for vCenter-as-Compute Mode

    This section lists the basic installation procedure and the assumptions and prerequisites necessary before starting the installation of any VMware vCenter-as-compute Contrail integration.

    Note: To ensure you are using the correct versions of all software for your specific system, refer to the Supported Platforms section in the release notes for your release of Contrail.

    Installation: Assumptions and Prerequisites

    The following assumptions and prerequisites are required for a successful installation of a VMware vCenter Contrail integrated system:

    • VMware vCenter Server
    • A cluster of VMware ESXi hosts
    • The following software installation packages:
      • The OpenStack contrail-install *.deb package for Ubuntu 14.04
      • VMDK image of ContrailVM
      • Releases up to Contrail 3.0 only: The contrail-install-vcenter-plugin *.deb package.
    • Because a Contrail vRouter runs as a virtual machine on each ESXi host, it needs an IP address assigned from the same underlay network as the host, all of which must be specified appropriately in the testbed.py file. Refer to Underlay Network Configuration for ContrailVM for ContrailVM IP fabric connectivity.

    For the vCenter-as-compute mode, an additional role of 'vcenter_compute' is required, specified as ['vcenter_compute'] in the env.roledefs section of the testbed.py file. Nodes configured as vcenter_compute act as the nova-compute nodes in this mode.
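    As a hedged sketch of that roledefs entry (host addresses are placeholders; in a real testbed.py the dictionary is assigned to env.roledefs):

```python
# Illustrative env.roledefs fragment for vCenter-as-compute mode;
# addresses are placeholders and the exact role list should be taken
# from the sample testbed.py files.
openstack_node = 'root@10.1.1.10'
vcenter_compute_node = 'root@10.1.1.12'

roledefs = {
    'all':       [openstack_node, vcenter_compute_node],
    'openstack': [openstack_node],
    'cfgm':      [openstack_node],
    'control':   [openstack_node],
    'collector': [openstack_node],
    'database':  [openstack_node],
    'webui':     [openstack_node],
    # Additional role required in vCenter-as-compute mode; these nodes
    # act as nova-compute nodes toward the OpenStack orchestrator.
    'vcenter_compute': [vcenter_compute_node],
}
```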

    For specific examples, refer to the sample testbed.py file in Sample Testbed.py Files for Contrail vCenter.

    Installing the vCenter-as-Compute Components

    To install the vCenter-as-compute components:

    1. Provision the ESXi hosts.

      fab prov_esxi

      Before performing this step, ensure that all information in the esxi_hosts section of the testbed.py file is accurate.

      The fab task uses the esxi_hosts = { } section of the testbed.py file to spawn the ContrailVM from the bundled VMDK file.

      Ensure all required information in the section is specific to your environment and the VMDK file can be accessed by the machine running the fab task.

      If the IP address and the corresponding MAC address of the ContrailVM are statically mapped in the DHCP server, specify the static IP address in the host field and the MAC address in the mac field in the contrail_vm subsection.

      ContrailVM IP fabric connectivity can be configured in various ways; refer to Underlay Network Configuration for ContrailVM for details.

      When finished, ping each of the ContrailVMs to make sure they are reachable.

    2. Set up vCenter.

      fab setup_vcenter

      Specify the orchestrator to be OpenStack for proper provisioning of vCenter-as-compute, as in the following:

      env.orchestrator = 'openstack'

      In the vCenter-as-compute mode, you can have multiple vcenter_servers specified in the testbed.py.

      Refer to Sample Testbed.py Files for Contrail vCenter.
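      A hedged sketch of a multiple-server definition is shown below; the key names, addresses, and credentials are illustrative assumptions to be checked against the sample testbed.py files.

```python
# Illustrative testbed.py fragment for vCenter-as-compute mode with two
# vCenter servers; key names are assumptions. orchestrator is assigned
# to env.orchestrator in a real testbed.py.
orchestrator = 'openstack'

vcenter_servers = {
    'vcenter1': {
        'server': '10.1.1.5',
        'username': 'administrator@vsphere.local',  # placeholder credentials
        'password': 'vcenter1-password',
        'datacenter': 'dc1',
        'cluster': ['cluster1'],
    },
    'vcenter2': {
        'server': '10.1.2.5',
        'username': 'administrator@vsphere.local',
        'password': 'vcenter2-password',
        'datacenter': 'dc2',
        'cluster': ['cluster2'],
    },
}
```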

      When finished, verify that you can see the ESXi hosts and ContrailVMs in the vCenter user interface. Refer to Using the Contrail and VMware vCenter User Interfaces to Manage the Network.

    3. Ensure that the contrail-install.deb package is available on all nodes.

      fab install_pkg_all:<Contrail deb package>

    4. Install the Contrail components into the desired roles on the specified nodes.

      fab install_contrail

    5. Set up the management and control data interfaces. Run this step ONLY if the management and control_data interfaces are separate.

      fab setup_interface_node

    6. Provision all of the Contrail components and the vCenter plugin.

      fab setup_all

      This step also creates the required configuration files on the system.

    Verification

    When the provisioning step completes, run the contrail-status command on all nodes to view a health check of the Contrail configuration and control components.

    Adding Hosts or Nodes

    You can add hosts and clusters to existing installations, including:

    • Adding an ESXi host
    • Adding a vCenter cluster

    Adding an ESXi Host to an Existing vCenter Cluster

    You can provision and add an ESXi host to an existing vCenter cluster.

    To add an ESXi host, use the following commands, which spawn the compute VM on the ESXi host, install and set up the Contrail roles, and add the ESXi host to the vCenter cluster and switch:

    1. Spawn the ContrailVM.

      fab prov_esxi:<esxi_host>

      where <esxi_host> is the ESXi hostname as specified in the esxi_hosts {} section of the testbed.py file.

    2. Install the Contrail *.deb package on the ContrailVM in the ESXi host.

      fab install_pkg_node:<contrail-deb>,root@<ContrailVM-ip>

    3. Add the ESXi to the vCenter cluster and put it into the switch, as specified in the configuration in the testbed.py file.

      fab add_esxi_to_vcenter:<esxi_host>

    4. Install and set up the Contrail vRouter on the ContrailVM in the ESXi host.

      fab add_vrouter_node:root@<ContrailVM-ip>

    Adding a vCenter Cluster to vCenter-as-Compute

    Use this procedure to add a vCenter cluster to a vCenter-as-compute system. Ensure that you have provisioned and added all of the ESXi hosts, as described in the previous Adding an ESXi Host to an Existing vCenter Cluster procedure.

    To set up and add a vCenter compute node:

    1. Install the Contrail *.deb package on the vCenter compute nodes.

      fab install_pkg_node:<contrail-deb>,root@<vcenter_compute-ip>

    2. For releases up to Contrail 3.0 only: Install the vcenter-plugin in the vcenter_compute node.

      fab install_contrail_vcenter_plugin:<vcenter-plugin-deb>,root@<vcenter_compute-ip>

    3. Provision the vcenter_compute node and set up Nova configuration files.

      fab add_vcenter_compute_node:root@<vcenter_compute-ip>

    Modified: 2017-10-10