
Provisioning VMs on Contrail Service Orchestration Nodes or Servers

 

Virtual machines (VMs) on the central and regional Contrail Service Orchestration (CSO) nodes or servers host the infrastructure services and some other components. All servers and VMs for the solution should be in the same subnet. To set up the VMs, you can use the provisioning tool to create and configure the VMs if you use the KVM hypervisor or VMware ESXi on a CSO node or server.

The VMs created by the provisioning tool have Ubuntu preinstalled.

Note

If you use the KVM hypervisor while installing a Distributed CPE (Hybrid WAN) or an SD-WAN solution, you must create a bridge interface on the physical server before you create VMs. The bridge interface maps the primary network interface (Ethernet management interface) on each CSO node or server to a virtual interface, which enables the VMs to communicate with the network.

This approach is applicable only if you are installing a Distributed CPE (Hybrid WAN) or an SD-WAN solution. It is not required for a centralized solution.

The VMs required on a CSO node or server depend on the deployment that you configure.

The small and medium deployments are always region-less deployments, whereas the large deployment is always a region-based deployment.

See Minimum Requirements for Servers and VMs for details of the VMs and associated resources required for each deployment.

The following sections describe the procedures for provisioning the VMs:

Before You Begin

Before you begin, you must:

  • Configure the physical servers or nodes.

  • Install Ubuntu 14.04.5 LTS as the operating system for the physical servers.

  • Configure the Contrail Cloud Platform and install Contrail OpenStack if you are performing a centralized CPE deployment.

Downloading the Installer

To download the installer package:

  1. Log in as root to the central CSO node or server.

    After you log in, the current directory is your home directory.

  2. Download the appropriate installer package from the CSO Download page.
    • Use the Contrail Service Orchestration installer if you have purchased licenses for a centralized deployment or both Network Service Orchestrator and Network Service Controller licenses for a distributed deployment.

      This installer includes all the Contrail Service Orchestration graphical user interfaces (GUIs).

    • Use the Network Service Controller installer if you have purchased only Network Service Controller licenses for a distributed deployment or SD-WAN implementation.

      This installer includes Administration Portal and Service and Infrastructure Monitor, but not the Designer Tools.

  3. Expand the installer package, which has a name specific to its contents and the release. For example, if the name of the installer package is csoVersion.tar.gz:
    root@host:~/# tar -xvzf csoVersion.tar.gz

    The expanded package is a directory that has the same name as the installer package and contains the installation files.

Creating a Bridge Interface for KVM

If you use the KVM hypervisor, you must create a bridge interface on the physical server that maps the primary network interface (Ethernet management interface) on each CSO node or server to a virtual interface before you create VMs. This mapping enables the VMs to communicate with the network.

A physical server or node needs Internet access to install the libvirt-bin package.

To create the bridge interface:

  1. Log in as root on the central CSO node or server.
  2. Update the index files of the software packages installed on the server to reference the latest versions.
    root@host:~/# apt-get update
  3. View the network interfaces configured on the server to obtain the name of the primary interface on the server.
    root@host:~/# ifconfig
  4. Install the libvirt software.
    root@host:~/# apt-get install libvirt-bin
  5. View the list of network interfaces, which now includes the virtual interface virbr0.
    root@host:~/# ifconfig

  6. Open the /etc/network/interfaces file and modify it to map the primary network interface to the virtual interface virbr0.

    For example, use the following configuration to map the primary interface eth0 to the virtual interface virbr0:
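    The following snippet is a minimal sketch, assuming the primary interface is eth0 and using placeholder addresses from the documentation ranges; substitute the values for your network:

    # Primary network interface, attached to the management bridge
    auto eth0
    iface eth0 inet manual

    # Bridge interface used by the VMs for management traffic
    auto virbr0
    iface virbr0 inet static
        address 192.0.2.2
        netmask 255.255.255.0
        gateway 192.0.2.1
        dns-nameservers 198.51.100.1
        bridge_ports eth0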

  7. Modify the default virtual network by customizing the file default.xml:
    1. Customize the IP address and subnet mask to match the values for the virbr0 interface in the /etc/network/interfaces file.
    2. Turn off the Spanning Tree Protocol (STP) option.
    3. Remove the NAT and DHCP configurations.

    For example:

    root@host:~/# virsh net-edit default

    Before modification:
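    (A sketch of a typical unmodified definition; the UUID, MAC address, and addresses on your system will differ.)

    <network>
      <name>default</name>
      <uuid>...</uuid>
      <forward mode='nat'/>
      <bridge name='virbr0' stp='on' delay='0'/>
      <mac address='...'/>
      <ip address='192.168.122.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.122.2' end='192.168.122.254'/>
        </dhcp>
      </ip>
    </network>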

    After modification:
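    (A sketch of the edited definition, assuming virbr0 uses the placeholder address 192.0.2.2/24 from the /etc/network/interfaces example earlier, with STP turned off and the NAT and DHCP elements removed.)

    <network>
      <name>default</name>
      <uuid>...</uuid>
      <bridge name='virbr0' stp='off' delay='0'/>
      <ip address='192.0.2.2' netmask='255.255.255.0'>
      </ip>
    </network>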

  8. Reboot the physical machine and log in as root again.
  9. Verify that the primary network interface is mapped to the virbr0 interface.
    root@host:~/# brctl show

Creating a Data Interface for a Distributed Deployment

For a distributed deployment on the KVM hypervisor, you create a second bridge interface that the VMs use to send data communications to the CPE device.

A physical server or node needs Internet access to install the libvirt-bin package.

To create a data interface:

  1. Log in to the central CSO server as root.
  2. Configure the new virtual interface and map it to a physical interface.

    For example:

    root@host:~/# brctl addbr virbr1
    root@host:~/# brctl addif virbr1 eth1
  3. Create a file with the name virbr1.xml in the /var/lib/libvirt/network directory.
  4. Paste the following content into the virbr1.xml file, and edit the file to match the actual settings for your interface.

    For example:
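    The following content is a minimal sketch, assuming a placeholder network name of default1 and the placeholder address 198.51.100.2/24 for the data bridge virbr1; edit the name, bridge, and address elements to match your environment:

    <network>
      <name>default1</name>
      <uuid>...</uuid>
      <bridge name='virbr1' stp='off' delay='0'/>
      <ip address='198.51.100.2' netmask='255.255.255.0'>
      </ip>
    </network>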

  5. Open the /etc/network/interfaces file and add the details for the second interface.

    For example:
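    The following snippet is a minimal sketch, assuming the second physical interface is eth1 and that virbr1 uses the placeholder address 198.51.100.2/24:

    # Secondary network interface, attached to the data bridge
    auto eth1
    iface eth1 inet manual

    # Bridge interface used by the VMs for data traffic to CPE devices
    auto virbr1
    iface virbr1 inet static
        address 198.51.100.2
        netmask 255.255.255.0
        bridge_ports eth1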

  6. Reboot the server.
  7. Verify that the secondary network interface, eth1, is mapped to the second bridge interface, virbr1.
    root@host:~/# brctl show

Customizing the Configuration File for the Provisioning Tool

The provisioning tool uses a configuration file, which you must customize for your network. The configuration file is in YAML format.

To customize the configuration file:

  1. Log in as root to the central CSO node or server.
  2. Access the confs directory that contains the sample configuration files. For example, if the name of the installer directory is csoVersion.
    root@host:~/# cd csoVersion/confs
  3. Access the directory for the environment that you want to configure.

    Table 1 shows the directories that contain the sample configuration file.

    Table 1: Location of Configuration Files for Provisioning VMs

    Deployment           Directory for Sample Configuration File
    Small deployment     confs/cso4.0.0/trial/nonha/provisionvm/provision_vm_collocated_example.conf
    Medium deployment    confs/cso4.0.0/production/ha/provisionvm/provision_vm_collocated_example.conf
    Large deployment     confs/cso4.0.0/production/ha/provisionvm/provision_vm_example.conf

  4. Make a copy of the sample configuration file in the confs directory and name it provision_vm.conf.

    For example:

    root@host:~/csoVersion/confs# cp cso4.0.0/trial/nonha/provisionvm/provision_vm_collocated_example.conf provision_vm.conf
  5. Open the provision_vm.conf file with a text editor.
  6. In the [TARGETS] section, specify the following values for the network on which CSO resides.
    • installer_ip—IP address of the management interface of the host on which you deployed the installer.

    • ntp_servers—Comma-separated list of fully qualified domain names (FQDNs) of Network Time Protocol (NTP) servers. For networks within firewalls, specify NTP servers specific to your network.

    • physical—Comma-separated list of hostnames of the CSO nodes or servers.

    • virtual—Comma-separated list of names of the virtual machines (VMs) on the CSO servers.
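    For example, the following is a minimal sketch of the [TARGETS] section; the IP address, NTP server, hostname, and VM names are placeholders, and the exact syntax should follow the sample configuration file:

    [TARGETS]
    installer_ip = 192.0.2.10
    ntp_servers = ntp.example.net
    physical = cso-host1
    virtual = installer-vm, infra-vm, ms-vm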

  7. Specify the following configuration values for each CSO node or server that you specified in Step 6.
    • [hostname]—Hostname of the CSO node or server

    • management_address—IP address of the Ethernet management (primary) interface in Classless Interdomain Routing (CIDR) notation

    • management_interface—Name of the Ethernet management interface, virbr0

    • gateway—IP address of the gateway for the host

    • dns_search—Domain for DNS operations

    • dns_servers—Comma-separated list of DNS name servers, including DNS servers specific to your network

    • hostname—Hostname of the node

    • username—Username for logging in to the node

    • password—Password for logging in to the node

    • data_interface—Name of the data interface. Leave blank for a centralized deployment. Specify the name of the data interface, such as virbr1, that you configured for a distributed deployment.
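    For example, the following is a sketch of one CSO node or server section, using the placeholder hostname cso-host1 and documentation addresses; leave data_interface blank for a centralized deployment:

    [cso-host1]
    management_address = 192.0.2.10/24
    management_interface = virbr0
    gateway = 192.0.2.1
    dns_search = example.net
    dns_servers = 198.51.100.1
    hostname = cso-host1
    username = root
    password = <password>
    data_interface = virbr1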

  8. Specify configuration values for each VM that you specified in Step 6.
    • [VM name]—Name of the VM

    • management_address—IP address of the Ethernet management interface in CIDR notation

    • hostname—Fully qualified domain name (FQDN) of the VM

    • username—Login name of user who can manage all VMs

    • password—Password for user who can manage all VMs

    • local_user—Login name of user who can manage this VM

    • local_password—Password for user who can manage this VM

    • guest_os—Name of the operating system

    • host_server—Hostname of the CSO node or server

    • memory—Required amount of RAM in GB

    • vCPU—Required number of virtual central processing units (vCPUs)

    • enable_data_interface—True enables the VM to transmit data and false prevents the VM from transmitting data. The default is false.
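    For example, the following is a sketch of one VM section, using the placeholder VM name infra-vm; the memory and vCPU values are illustrative only, so use the values listed in Minimum Requirements for Servers and VMs for your deployment:

    [infra-vm]
    management_address = 192.0.2.21/24
    hostname = infra-vm.example.net
    username = root
    password = <password>
    local_user = admin
    local_password = <password>
    guest_os = ubuntu
    host_server = cso-host1
    memory = 48
    vCPU = 16
    enable_data_interface = false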

  9. For each Junos Space VM that you specified in Step 6, specify the following configuration values.
    • [VM name]—Name of the VM.

    • management_address—IP address of the Ethernet management interface in CIDR notation.

    • web_address—Virtual IP (VIP) address of the primary Junos Space Virtual Appliance. (This setting is required only for the VM on which the primary Junos Space Virtual Appliance resides.)

    • gateway—IP address of the gateway for the host. If you do not specify a value, the value defaults to the gateway defined for the CSO node or server that hosts the VM.

    • nameserver_address—IP address of the DNS nameserver.

    • hostname—FQDN of the VM.

    • username—Username for logging in to Junos Space.

    • password—Default password for logging in to Junos Space.

    • newpassword—Password that you provide when you configure the Junos Space appliance.

    • guest_os—Name of the operating system.

    • host_server—Hostname of the CSO node or server.

    • memory—Required amount of RAM in GB.

    • vCPU—Required number of virtual central processing units (vCPUs).

    • vm_type—VM type for the preinstalled OSS, either baseInfra or baseMS.

    • volumes—Data partitions of CSO (for example, /mnt/data:400G).

    • base_disk_size—Size of the OS partition of CSO (for example, 100G).
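    For example, the following is a sketch of a Junos Space VM section, using the placeholder VM name space-vm; all values shown are illustrative placeholders:

    [space-vm]
    management_address = 192.0.2.31/24
    web_address = 192.0.2.32/24
    gateway = 192.0.2.1
    nameserver_address = 198.51.100.1
    hostname = space-vm.example.net
    username = admin
    password = <default-password>
    newpassword = <new-password>
    guest_os = space
    host_server = cso-host1
    memory = 32
    vCPU = 4
    vm_type = baseInfra
    volumes = /mnt/data:400G
    base_disk_size = 100G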

  10. Save the file.
  11. Download the ESXi-4.0.0.tar.gz file to the /root/Contrail_Service_Orchestration_4.0.0/artifacts folder on the Ubuntu VM.
  12. Run the following command to start the virtual machines:
    root@host:~/# ./provision_vm.sh

The following sections show examples of customized configuration files for a small, a medium, and a large deployment.

Sample Configuration File for Provisioning VMs in a Small Deployment

Sample Configuration File for Provisioning VMs in a Medium Deployment

Sample Configuration File for Provisioning VMs in a Large Deployment

Provisioning VMs with the Provisioning Tool for the KVM Hypervisor

If you use the KVM hypervisor for the CSO node or server, you can use the provisioning tool to create and configure the VMs for the CSO and Junos Space components.

The VMs provisioned by the tool have Ubuntu preinstalled, and the Junos Space VM also has the Junos Space Network Management Platform software preinstalled.

To provision VMs with the provisioning tool:

  1. Log in as root to the central CSO node or server.
  2. Access the directory for the installer. For example, if the name of the installer directory is csoVersion:
    root@host:~/# cd ~/csoVersion/
  3. Run the provisioning tool.
    root@host:~/csoVersion/# ./provision_vm.sh

    The provisioning begins.

  4. During installation, observe detailed messages in the log files about the provisioning of the VMs.
    • provision_vm.log—Contains details about the provisioning process

    • provision_vm_console.log—Contains details about the VMs

    • provision_vm_error.log—Contains details about errors that occur during provisioning

    For example:

    root@host:~/csoVersion/# cd logs
    root@host:~/csoVersion/logs/# tail -f LOGNAME

Provisioning VMware ESXi VMs Using the Provisioning Tool

If you use VMware ESXi (version 6.0) on the CSO node or server, you can use the provisioning tool (provision_vm_ESXI.sh) to create and configure VMs for CSO.

Note

You cannot provision a Virtual Route Reflector (VRR) VM by using the provisioning tool. You must provision the VRR VM manually.

Before you begin, ensure that the maximum supported file size for a datastore on the VMware ESXi host is greater than 512 MB. To view the maximum supported file size for a datastore, you can establish an SSH session with the ESXi host and run the vmkfstools -P datastorePath command.
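For example, assuming a datastore named datastore1 (a placeholder name):

    root@host:~/# vmkfstools -P /vmfs/volumes/datastore1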

To provision VMware ESXi VMs using the provisioning tool:

  1. Download the CSO Release 4.0.0 installer package from the Software Downloads page to the local drive.
  2. Log in as root to an Ubuntu VM that has Internet access and kernel version 4.4.0-31-generic. The VM must have the following specifications:
    • 8 GB RAM

    • 2 vCPUs

  3. Copy the installer package from your local drive to the VM.
    root@host:~/# scp Contrail_Service_Orchestration_4.0.0.tar.gz root@VM:/root
  4. On the VM, extract the installer package.

    For example, if the name of the installer package is Contrail_Service_Orchestration_4.0.0.tar.gz:

    root@host:~/# tar -xvzf Contrail_Service_Orchestration_4.0.0.tar.gz

    The contents of the installer package are extracted in a directory with the same name as the installer package.

  5. Navigate to the confs directory in the VM.

    For example:

    root@host:~/# cd Contrail_Service_Orchestration_4.0.0/confs
  6. Make a copy of the sample configuration file, provision_vm_example_ESXI.conf, that is available in the confs directory and rename it provision_vm.conf.

    For example:

    root@host:~/Contrail_Service_Orchestration_4.0.0/confs# cp cso4.0.0/production/nonha/provisionvm/provision_vm_collocated_ESXI.conf provision_vm.conf
  7. Open the provision_vm.conf file with a text editor.
  8. In the [TARGETS] section, specify the following values for the network on which CSO resides.
    • installer_ip—IP address of the management interface of the VM on which you are running the provisioning script.

    • ntp_servers—Comma-separated list of fully qualified domain names (FQDNs) of Network Time Protocol (NTP) servers. For networks within firewalls, specify NTP servers specific to your network.

    You need not edit the following values:

    • physical—Comma-separated list of hostnames of the CSO nodes or servers.

    • virtual—Comma-separated list of names of the virtual machines (VMs) on the CSO servers.

  9. Specify the following configuration values for each ESXi host on the CSO node or server.
    • management_address—IP address of the Ethernet management (primary) interface in Classless Interdomain Routing (CIDR) notation of the VM network. For example, 192.0.2.0/24.

    • gateway—Gateway IP address of the VM network

    • dns_search—Domain for DNS operations

    • dns_servers—Comma-separated list of DNS name servers, including DNS servers specific to your network

    • hostname—Hostname of the VMware ESXi host

    • username—Username for logging in to the VMware ESXi host

    • password—Password for logging in to the VMware ESXi host

    • vmnetwork—Label for each virtual network adapter. This label identifies the physical network that is associated with a virtual network adapter.

      The vmnetwork data for each VM is available in the Summary tab of the VM in the vSphere Client. You must not specify the vmnetwork data within double quotation marks.

    • datastore—Datastore in which all VM files are saved.

      The datastore data for each VM is available in the Summary tab of the VM in the vSphere Client. You must not specify the datastore data within double quotation marks.
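    For example, the following is a sketch of one ESXi host section, assuming the same bracketed-section layout as the KVM configuration file; all values are placeholders, and the vmnetwork and datastore values must match the entries shown in the Summary tab in the vSphere Client:

    [esxi-host1]
    management_address = 192.0.2.10/24
    gateway = 192.0.2.1
    dns_search = example.net
    dns_servers = 198.51.100.1
    hostname = esxi-host1
    username = root
    password = <password>
    vmnetwork = VM Network
    datastore = datastore1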

  10. Save the provision_vm.conf file.
  11. Run the provision_vm_ESXI.sh script to create the VMs.
    root@host:~/Contrail_Service_Orchestration_4.0.0/# ./provision_vm_ESXI.sh
  12. Copy the provision_vm.conf file to the installer VM.

    For example:

    root@host:~/Contrail_Service_Orchestration_4.0.0/# scp confs/provision_vm.conf root@installer_VM_IP:/root/Contrail_Service_Orchestration_4.0.0/confs

The provisioning tool brings up the VMware ESXi VMs with the configuration provided in the provision_vm.conf file.

Manually Provisioning VRR VMs on the Contrail Service Orchestration Node or Server

To manually provision the VRR VM:

  1. Download the VRR Release 15.1R6.7 software package (.ova format) for VMware from the Virtual Route Reflector page, to a location accessible to the server.
  2. Launch the VRR by using vSphere or vCenter Client for your ESXi server and log in to the server with your credentials.
  3. Set up an SSH session to the VRR VM.
  4. Execute the following commands:
    root@host:~/# configure
    root@host:~/# delete groups global system services ssh root-login deny-password
    root@host:~/# set system root-authentication plain-text-password
    New Password: <password>
    Retype New Password: <password>
    root@host:~/# set system services ssh
    root@host:~/# set system services netconf ssh
    root@host:~/# set routing-options rib inet.3 static route 0.0.0.0/0 discard
    root@host:~/# commit
    root@host:~/# exit

Verifying Connectivity of the VMs

From each VM, verify that you can ping the IP addresses and hostnames of all the other servers, nodes, and VMs in the CSO deployment.
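For example, to confirm that another VM (placeholder hostname shown) is reachable:

    root@host:~/# ping -c 4 infra-vm.example.net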

Caution

If the VMs cannot communicate with all the other hosts in the deployment, the installation will fail.