
Provisioning VMs on Contrail Service Orchestration Servers

 

Virtual machines (VMs) on the Contrail Service Orchestration (CSO) servers host the infrastructure services and some of the CSO components.

Note

If you use the KVM hypervisor to install a distributed CPE (Hybrid WAN) or SD-WAN solution, you must create a bridge interface on each physical server that maps the primary network interface (Ethernet management interface) on the CSO server to a virtual interface before you create the VMs. This mapping enables the VMs to communicate with the network.

Assumptions/Prerequisites:

  • The network devices (routers) are configured with the required configuration.

  • All the physical servers where KVM VMs are provisioned should have Ubuntu 16.04.5 LTS.

  • All the VMs where CSP components are deployed should have Ubuntu 16.04.5 LTS OS.

  • See Minimum Requirements for Servers and VMs for details of the VMs and associated resources required for each deployment.

  • Verify the DNS server configuration on the servers.

  • All the machines have SSH enabled.

  • All the VMs are on the same subnet.

  • All the machines are reachable from one another.

  • All the operations and installations must be run as the root user.

  • Verify that all the machines have the correct fully qualified domain name (FQDN). A quick verification sketch follows this list.

  • For a CSO Release 5.1.0 installation, verify that you have Internet access.

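For example, you can spot-check several of these prerequisites from any one machine with commands such as the following (the hostnames and IP address shown are placeholders):

    root@host:~/# hostname -f                          # confirm the machine reports the correct FQDN
    root@host:~/# cat /etc/resolv.conf                 # confirm the DNS server configuration
    root@host:~/# ping -c 2 <other-machine-hostname>   # confirm the machines can reach one another
    root@host:~/# ssh root@<other-machine-ip> uptime   # confirm SSH access as root works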
Before You Begin

Note

CSO Release 5.1.0 supports only the KVM hypervisor, whereas CSO Release 5.1.1 supports both the KVM and ESXi version 6.7 hypervisors.

Before you begin, you must:

  • Configure the physical servers.

  • Ensure that the VMs match the server requirements given in Minimum Requirements for Servers and VMs.

    Distribute CSO VMs of each type across different servers in different racks so that a single server or ToR switch failure does not affect all VMs of that type. We recommend using three servers.

  • Install Ubuntu 16.04.5 LTS as the operating system for the physical servers.

Creating a Bridge Interface for KVM

If you use the KVM hypervisor, you must create a bridge interface on the physical server that maps the primary network interface (Ethernet management interface) on each CSO server to a virtual interface before you create the VMs. This action enables the VMs to communicate with the network.

To create the bridge interface:

  1. Log in as root on the CSO server.
  2. View the network interfaces configured on the server to obtain the name of the primary interface on the server.
    root@host:~/# ifconfig
  3. Set up the KVM host.
    root@host:~/# apt-get update
    root@host:~/# apt-get install libvirt-bin
  4. Modify the /etc/network/interfaces file to map the primary network interface to the virtual interface br0.

    Note

    You must perform this step on all the servers in the HA deployment. The address configured for eno2 must be changed accordingly on each server.

    For example, use the following configuration to map the primary interface eno2 to the virtual interface br0:

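    (The values below are a minimal sketch only; the address, netmask, gateway, and DNS entries are placeholders that you must adjust for your network, and the bridge-utils package is assumed to be installed.)

    auto eno2
    iface eno2 inet manual

    auto br0
    iface br0 inet static
        address <server-management-ip>
        netmask <netmask>
        gateway <gateway-ip>
        dns-nameservers <dns-server-ip>
        bridge_ports eno2
        bridge_stp off
        bridge_fd 0
        bridge_maxwait 0

    After you edit the file, restart networking or reboot the server so that br0 comes up with the management IP address.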
  5. Navigate to the directory where you untarred the CSO package on one of the servers and run the following commands:
    root@ccra-68:~/Contrail_Service_Orchestration_5.1.0/ci_cd# ls -ltr setup_bms.sh

    -rwxr-xr-x 1 root root 716 Oct 10 01:57 setup_bms.sh

    root@ccra-68:~/Contrail_Service_Orchestration_5.1.0/ci_cd# ./setup_bms.sh

    Run the script on all the servers.
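
    To confirm the setup on each server, you can run checks such as the following (brctl is available if the bridge-utils package is installed):

    root@host:~/# brctl show            # br0 should list eno2 as an attached interface
    root@host:~/# ip addr show br0      # br0 should hold the server's management IP address
    root@host:~/# virsh list --all      # confirms that libvirt is installed and responding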

Downloading the Installer for KVM Hypervisor

To provision the VMs:

  1. Log in as root to the CSO server.

    When you log in as root on the CSO server, you are placed in the home directory, /root.

  2. Download the appropriate installer package from the CSO Downloads page.

    Use the Contrail Service Orchestration installer if you have purchased licenses for both Network Service Orchestrator and Network Service Controller for a distributed deployment.

  3. Expand the installer package, which has a name specific to its contents and the release. For example, if the name of the installer package is cso<version>.tar.gz:
    root@host:~/# tar -xvzf cso<version>.tar.gz

    The expanded package is a directory that has the same name as the installer package and contains the installation files.

  4. Run the deploy.sh script, which starts an interactive script that creates the configuration files for the environment topology.

    Example for an HA deployment on the KVM hypervisor:

    root@host:~/Contrail_Service_Orchestration_5.1.0# ./deploy.sh

For the KVM hypervisor, in the case of a standalone deployment, run the setup_NAT_rule.sh script on the BMS. For details, see Applying NAT Rules.

Downloading the Installer for ESXi Hypervisor

To provision the VMs:

  1. Log in to one of the CSO BMS servers as root.
  2. Download the appropriate installer package from the CSO Downloads page.

    Use the Contrail Service Orchestration installer if you have purchased licenses for both Network Service Orchestrator and Network Service Controller for a distributed deployment.

  3. Expand the installer package, which has a name specific to its contents and the release. For example, if the name of the installer package is cso<version>.tar.gz:
    root@host:~/# tar -xvzf cso<version>.tar.gz

    The expanded package is a directory that has the same name as the installer package and contains the installation files.

    The package contains ubuntu-16.04-server-cloudimg-amd64.ova and junos-vrr-x86-64-15.1F6-S7.2.ova files.

  4. Provision all the VMs except the VRR by using the ubuntu-16.04-server-cloudimg-amd64.ova file.

    The VMs must match the server requirements given in Minimum Requirements for Servers and VMs.

    Note

    You must set the default-password parameter when you upload the OVA file to spawn the Ubuntu VM (see the hypothetical ovftool sketch after this procedure). After the VM is up, you can log in with the configured password.

    The default username is ubuntu.

  5. Provision the VRRs by using the junos-vrr-x86-64-15.1F6-S7.2.ova file.

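OVA upload is typically done through the vCenter or ESXi user interface. If you use the ovftool CLI instead, a hypothetical invocation might look like the following; the VM name, datastore, network, and host values are placeholders, and the property name is assumed from the default-password parameter mentioned in step 4:

    ovftool \
      --name=<vm-name> \
      --datastore=<datastore> \
      --network="<port-group>" \
      --prop:default-password='<password>' \
      ubuntu-16.04-server-cloudimg-amd64.ova \
      vi://root@<esxi-host>/
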
After you provision the VMs:

  • Assign an IP address to the logical interface (ens192) associated with each VM.

  • Configure the VMs with valid hostnames and update the /etc/hosts file (a configuration sketch follows this list).

    Note

    The hostnames must start and end with an alphanumeric character, and the only special characters allowed in hostnames are the hyphen (-) and the period (.).

  • Enable NETCONF on the VRRs.

  • Configure SSH to allow root access to all the VMs.

  • Reboot the VMs.
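
As an illustration only, the post-provisioning steps on an Ubuntu VM might look like the following (the hostname and IP address are placeholders):

    root@vm:~/# hostnamectl set-hostname csp-example-vm1.example.net
    root@vm:~/# echo "192.0.2.10 csp-example-vm1.example.net csp-example-vm1" >> /etc/hosts
    root@vm:~/# sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
    root@vm:~/# systemctl restart ssh
    root@vm:~/# reboot

On the VRRs, NETCONF is enabled from the Junos CLI with set system services netconf ssh, followed by commit.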

Verifying Connectivity of the VMs

From each VM, verify that you can ping the IP addresses and hostnames of all the other servers, nodes, and VMs in the CSO deployment.
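
For example, a simple loop such as the following (with placeholder hostnames and IP addresses) checks reachability from a VM:

    root@vm:~/# for host in <vm1-hostname> <vm2-hostname> <server1-ip>; do ping -c 2 $host > /dev/null && echo "$host reachable" || echo "$host UNREACHABLE"; done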

Caution

If the VMs cannot communicate with all the other hosts in the deployment, the installation will fail.