
Provision VMs on Contrail Service Orchestration Servers

 

Virtual machines (VMs) on the Contrail Service Orchestration (CSO) servers host the infrastructure services and some components.

Note

If you use a KVM hypervisor while installing an SD-WAN solution, you must create a bridge interface on the physical server. The bridge interface should map the primary network interface (Ethernet management interface) on each CSO server to a virtual interface before you create VMs. This bridge interface enables the VMs to communicate with the network.

Assumptions/Prerequisites:

  • Network devices (routers) must have the required configuration in place.

  • All the physical servers where KVM VMs are provisioned must have Ubuntu 16.04.5 LTS installed.

  • All the VMs, except Contrail Analytics VMs, where CSO components are deployed must have Ubuntu 16.04.5 LTS OS installed.

  • All the Contrail Analytics VMs where CSO components are deployed must have CentOS version 7.7.1908 installed.

  • Ensure that the VMs and associated resources meet the requirements as given at Minimum Requirements for Servers and VMs.

  • You must have a highly available DNS server for the on-premises Kubernetes cluster.

  • Verify the DNS server configuration on the servers.

  • All the VMs must have SSH enabled.

  • All the VMs must be on the same subnet.

  • All the VMs must be able to reach one another.

  • All the operations and installations must be run as root user.

  • Verify that all the VMs have the correct Fully Qualified Domain Name (FQDN).

Before You Begin

Before you begin, you must:

  • Configure the physical servers.

  • Ensure that the VMs meet the server requirements listed in Minimum Requirements for Servers and VMs.

    Distribute each type of CSO VM across different servers in different racks so that a single server or top-of-rack switch failure does not affect all instances of that VM. We recommend that you use three servers.

  • Install Ubuntu 16.04.5 LTS as the operating system for the physical servers.

Create a Bridge Interface for KVM Hypervisors

If you use a KVM hypervisor, you must create a bridge interface on the physical server that maps the primary network interface (Ethernet management interface) on each CSO server to a virtual interface before you create the VMs. The bridge interface enables the VMs to communicate with the network.

To create a bridge interface:

  1. Log in as root user on the CSO server.
  2. View the network interfaces configured on the server to obtain the name of the primary interface on the server.
    root@host:~/# ifconfig
  3. Set up the KVM host by installing the required packages.
    root@host:~/# apt-get update
    root@host:~/# apt-get install libvirt-bin
    root@host:~/# apt-get install dnsutils
  4. Modify the /etc/network/interfaces file to map the primary network interface to the virtual interface (br0).

    Note

    You must perform this step on all the servers. The address configured on eno2 must be changed for each server.

    For example, use the following configuration to map the primary interface eno2 to the virtual interface br0:
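    A minimal sketch of such a configuration, using the Ubuntu 16.04 ifupdown syntax (the addresses below are placeholders; substitute the values for your network):

    ```
    # /etc/network/interfaces (fragment)
    # eno2 carries no address itself; br0 takes over the management address.
    auto eno2
    iface eno2 inet manual

    auto br0
    iface br0 inet static
        address 192.0.2.10
        netmask 255.255.255.0
        gateway 192.0.2.1
        dns-nameservers 192.0.2.2
        bridge_ports eno2
    ```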

  5. Modify the main APT sources configuration file on the new physical servers so that the Debian sources.list points to an Internet-reachable repository.
    root@host:~/# cp /etc/apt/orig-sources.list /etc/apt/sources.list

    You do not need to modify the file if sources.list already points to the Ubuntu repository.

  6. Navigate to the directory where the CSO .tar file has been downloaded on each of the servers and run the following scripts:
    root@host:~/Contrail_Service_Orchestration_6.0.0/ci_cd# ls -ltr setup_bms.sh

    -rwxr-xr-x 1 root root 716 Oct 10 01:57 setup_bms.sh

    root@host:~/Contrail_Service_Orchestration_6.0.0/ci_cd# ./setup_bms.sh

    You must run these scripts on all the servers.

    Verify that the libguestfs package is installed successfully.

    root@host:~/# dpkg -l |grep libguestfs-tools
    Note

    If you run the setup_bms.sh script after creating the bridge interface, you might see the error "device br0 already exists; can't create bridge with the same name". You can ignore this error message.

Download the Installer for KVM Hypervisor

To download the installer for KVM hypervisors and then provision the VMs:

  1. Log in as root user to the CSO server.
  2. Download the appropriate installer package from the CSO Downloads page.

    Use the Contrail Service Orchestration installer package if you have purchased Network Service Orchestrator and Network Service Controller licenses for a distributed deployment.

  3. Expand the installer package.
    root@host:~/# tar -xvzf cso<version>.tar.gz

    The expanded package is a directory that has the same name as the installer package and contains the installation files.

  4. Run the deploy.sh command. Use the interactive script to create configuration files for the environment specific topology.

    Example output for CSO deployment on KVM hypervisor—

    root@host:~/Contrail_Service_Orchestration_6.0.0# ./deploy.sh
    Note

    You must note the automatically generated password that is displayed on the console because the password is not saved in the system.

Download the Installer for ESXi Hypervisor

To download the installer for ESXi hypervisors and then provision the VMs:

  1. Download the appropriate installer package from the CSO Downloads page on any of the servers.

    Use the Contrail Service Orchestration installer package if you have purchased Network Service Orchestrator and Network Service Controller licenses for a distributed deployment.

  2. Expand the installer package.
    root@host:~/# tar -xvzf cso<version>.tar.gz

    The expanded package contains ESXi-6.0.0.tgz under the /Artifacts folder.

    Extract the ESXi-6.0.0.tgz package.

    The ESXi-6.0.0.tgz package contains the ubuntu-16.04-server-cloudimg-amd64.ova file, the junos-vrr-x86-64-19.4R1.12.ova file, and the centos-77.ova file.

  3. Provision the VMs (except the VRR VMs) using the ubuntu-16.04-server-cloudimg-amd64.ova file.

    The VMs must match the server requirements specified in Minimum Requirements for Servers and VMs.

    The default username is root.

  4. Provision the VRR VMs using the junos-vrr-x86-64-19.4R1.12.ova file.

    Enable NETCONF for the VRR VMs.

    Base config example for VRR VM:
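    A hedged example of such a base configuration in Junos set-command form (the hostname and addresses are placeholders; `set system services netconf ssh` is the standard Junos statement that enables NETCONF over SSH):

    ```
    set system host-name vrr1
    set system root-authentication plain-text-password
    set system services ssh
    set system services netconf ssh
    set interfaces em0 unit 0 family inet address 192.0.2.21/24
    set routing-options static route 0.0.0.0/0 next-hop 192.0.2.1
    ```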

  5. Provision the contrail_analytics VMs using the centos-77.ova file.

    The default username is root.

After you provision the VMs:

  1. Assign an IP address to the logical interface (ens192) associated with each VM, except contrail_analytics VMs.

    For example:
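    A sketch of such an assignment in /etc/network/interfaces on the Ubuntu VMs (addresses are placeholders; substitute your management subnet values):

    ```
    # /etc/network/interfaces fragment for the VM management interface
    auto ens192
    iface ens192 inet static
        address 192.0.2.31
        netmask 255.255.255.0
        gateway 192.0.2.1
        dns-nameservers 192.0.2.2
    ```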

  2. Assign an IP address to the logical interface (ens192) associated with the contrail_analytics VM.

    vi /etc/sysconfig/network-scripts/ifcfg-ens192
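    On the CentOS-based contrail_analytics VM, the file uses the ifcfg format. A sketch, with placeholder addresses:

    ```
    # /etc/sysconfig/network-scripts/ifcfg-ens192 (fragment)
    TYPE=Ethernet
    BOOTPROTO=static
    NAME=ens192
    DEVICE=ens192
    ONBOOT=yes
    IPADDR=192.0.2.32
    NETMASK=255.255.255.0
    GATEWAY=192.0.2.1
    DNS1=192.0.2.2
    ```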
  3. Configure a valid hostname for all the VMs and update the /etc/hostname file.

    Note

    The hostnames must start and end with an alphanumeric character and can contain only the special characters hyphen (-) and period (.). The hostnames cannot contain uppercase letters.

  4. Update the /etc/hosts file on all the VMs.

    For example:

    127.0.1.1 <hostname>.example.net <hostname>

  5. Reboot all the VMs.
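The hostname rules in step 3 can be checked before you reboot with a small shell helper (a sketch; the function name is ours, and the regular expression encodes the rules stated above):

```shell
# Validate a candidate hostname against the CSO rules:
# lowercase letters, digits, hyphen (-), and period (.) only,
# starting and ending with an alphanumeric character.
valid_cso_hostname() {
  echo "$1" | grep -Eq '^[a-z0-9]([a-z0-9.-]*[a-z0-9])?$'
}

valid_cso_hostname "csp-central-vm.example.net" && echo "hostname is valid"
```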

Verify Connectivity of the VMs

From each VM, verify that you can ping the IP addresses and hostnames of all the other servers and VMs in the CSO deployment.
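This check can be scripted as a simple loop (a sketch; the peer list below is a placeholder that you would replace with the hostnames or IP addresses of your servers and VMs):

```shell
# Report whether each peer in the deployment answers a single ping.
check_reachable() {
  ping -c 1 -W 2 "$1" >/dev/null 2>&1 && echo "$1 reachable" || echo "$1 UNREACHABLE"
}

# Placeholder peer list; substitute your CSO hostnames/IPs.
for peer in 127.0.0.1; do
  check_reachable "$peer"
done
```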

Caution

If the VMs cannot communicate with all the other hosts in the deployment, the installation will fail.

Related Documentation