
Provisioning VMs on Contrail Service Orchestration Nodes or Servers

 

Virtual Machines (VMs) on the central and regional Contrail Service Orchestration (CSO) nodes or servers host the infrastructure services and some other components. All servers and VMs for the solution should be in the same subnet. To set up the VMs, you can:

  • Use the provisioning tool to create and configure the VMs if you use the KVM hypervisor or VMware ESXi on a CSO node or server.

    The tool also installs Ubuntu in the VMs.

  • Manually provision Virtual Route Reflector (VRR) VMs on a CSO node or server if you use VMware ESXi.

The VMs required on a CSO node or server depend on whether you configure:

  • A trial environment without high availability (HA).

  • A production environment without HA.

  • A trial environment with HA.

  • A production environment with HA.

See Minimum Requirements for Servers and VMs for details of the VMs and associated resources required for each environment.

The following sections describe the procedures for provisioning the VMs:

Before You Begin

Before you begin, you must:

  • Configure the physical CSO nodes or servers.

  • Install Ubuntu 14.04.5 LTS as the operating system on the physical servers.

  • For a centralized deployment, configure the Contrail Cloud Platform and install Contrail OpenStack.

Downloading the Installer

To download the installer package:

  1. Log in as root to the central CSO node or server.

    The current directory is the home directory.

  2. Download the appropriate installer package from https://www.juniper.net/support/downloads/?p=cso#sw.
    • Use the Contrail Service Orchestration installer if you purchased licenses for a centralized deployment or both Network Service Orchestrator and Network Service Controller licenses for a distributed deployment.

      This option includes all the Contrail Service Orchestration graphical user interfaces (GUIs).

    • Use the Network Service Controller installer if you purchased only Network Service Controller licenses for a distributed deployment or SD-WAN implementation.

      This option includes Administration Portal and Service and Infrastructure Monitor, but not the Designer Tools.

  3. Expand the installer package, which has a name specific to its contents and the release. For example, if the name of the installer package is csoVersion.tar.gz:
    root@host:~/# tar -xvzf csoVersion.tar.gz

    The expanded package is a directory that has the same name as the installer package and contains the installation files.

Creating a Bridge Interface for KVM

If you use the KVM hypervisor, before you create VMs, you must create a bridge interface on the physical server that maps the primary network interface (Ethernet management interface) on each CSO node or server to a virtual interface. This action enables the VMs to communicate with the network.

To create the bridge interface:

  1. Log in as root on the central CSO node or server.
  2. Update the index files of the software packages installed on the server to reference the latest versions.
    root@host:~/# apt-get update
  3. View the network interfaces configured on the server to obtain the name of the primary interface on the server.
    root@host:~/# ifconfig
  4. Install the libvirt software.
    root@host:~/# apt-get install libvirt-bin
  5. View the list of network interfaces, which now includes the virtual interface virbr0.
    root@host:~/# ifconfig

  6. Open the file /etc/network/interfaces and modify it to map the primary network interface to the virtual interface virbr0.

    For example, use the following configuration to map the primary interface eth0 to the virtual interface virbr0:
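
    (The configuration below is a sketch only: it assumes eth0 is the primary interface, and the address, netmask, gateway, and DNS values are placeholders that you replace with the values for your management network.)

    auto eth0
    iface eth0 inet manual

    auto virbr0
    iface virbr0 inet static
        bridge_ports eth0
        bridge_stp off
        address 192.0.2.10
        netmask 255.255.255.0
        gateway 192.0.2.1
        dns-nameservers 192.0.2.2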

  7. Modify the default virtual network by customizing the file default.xml:
    1. Customize the IP address and subnet mask to match the values for the virbr0 interface in the file /etc/network/interfaces.
    2. Turn off the Spanning Tree Protocol (STP) option.
    3. Remove the NAT and DHCP configurations.

    For example:

    root@host:~/# virsh net-edit default

    Before modification:
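
    A typical default definition looks similar to the following sketch (the UUID, MAC address, and IP values shown are placeholders; the values on your server will differ):

    <network>
      <name>default</name>
      <uuid>0e37d857-4578-4d4c-ba5a-111111111111</uuid>
      <forward mode='nat'/>
      <bridge name='virbr0' stp='on' delay='0'/>
      <mac address='52:54:00:aa:bb:cc'/>
      <ip address='192.168.122.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.122.2' end='192.168.122.254'/>
        </dhcp>
      </ip>
    </network>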

    After modification:
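
    In this sketch, the forward (NAT) and dhcp elements are removed, STP is turned off, and the IP address and netmask are placeholder values that you change to match the virbr0 entry in /etc/network/interfaces:

    <network>
      <name>default</name>
      <uuid>0e37d857-4578-4d4c-ba5a-111111111111</uuid>
      <bridge name='virbr0' stp='off' delay='0'/>
      <mac address='52:54:00:aa:bb:cc'/>
      <ip address='192.0.2.10' netmask='255.255.255.0'/>
    </network>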

  8. Reboot the physical machine and log in as root again.
  9. Verify that the primary network interface is mapped to the virbr0 interface.
    root@host:~/# brctl show


Creating a Data Interface for a Distributed Deployment

For a distributed deployment, you create a second bridge interface that the VMs use to send data communications to the CPE device.

To create a data interface:

  1. Log in to the central CSO server as root.
  2. Configure the new virtual interface and map it to a physical interface.

    For example:

    root@host:~/# brctl addbr virbr1
    root@host:~/# brctl addif virbr1 eth1
  3. Create an XML file named virbr1.xml in the directory /var/lib/libvirt/network.
  4. Paste the following content into the virbr1.xml file, and edit the file to match the actual settings for your interface.

    For example:
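
    (The definition below is a sketch only; it references the virbr1 bridge that you created in Step 2 and omits an ip element, which you can add if your deployment requires one.)

    <network>
      <name>virbr1</name>
      <bridge name='virbr1' stp='off' delay='0'/>
    </network>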

  5. Open the /etc/network/interfaces file and add the details for the second interface.

    For example:
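
    (The configuration below is a sketch only; it assumes eth1 is the secondary physical interface and, in keeping with Step 8, assigns no IP address to the data interface.)

    auto eth1
    iface eth1 inet manual

    auto virbr1
    iface virbr1 inet manual
        bridge_ports eth1
        bridge_stp off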

  6. Reboot the server.
  7. Verify that the secondary network interface, eth1, is mapped to the second bridge interface, virbr1.
    root@host:~/# brctl show
  8. Configure the IP address for the interface.

    You do not specify an IP address for the data interface when you create it.
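
    For example, to assign an address to virbr1 (192.0.2.20 is a placeholder; use an address from your data subnet):

    root@host:~/# ifconfig virbr1 192.0.2.20 netmask 255.255.255.0 up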

Customizing the Configuration File for the Provisioning Tool

The provisioning tool uses a configuration file, which you must customize for your network. The configuration file is in YAML format.

To customize the configuration file:

  1. Log in as root to the central CSO node or server.
  2. Access the confs directory that contains the example configuration files. For example, if the name of the installer directory is csoVersion:
    root@host:~/# cd csoVersion/confs
  3. Access the directory for the environment that you want to configure.

    Table 1 shows the directories that contain the example configuration file.

    Table 1: Location of Configuration Files for Provisioning VMs

    Environment                           Directory for Example Configuration File
    Trial environment without HA          cso3.3/trial/nonha/provisionvm
    Production environment without HA     cso3.3/production/nonha/provisionvm
    Trial environment with HA             cso3.3/trial/ha/provisionvm
    Production environment with HA        cso3.3/production/ha/provisionvm

  4. Make a copy of the example configuration file in the confs directory and name it provision_vm.conf.

    For example:

    root@host:~/csoVersion/confs# cp cso3.3/trial/nonha/provisionvm/provision_vm_example.conf provision_vm.conf
  5. Open the file provision_vm.conf with a text editor.
  6. In the [TARGETS] section, specify the following values for the network on which CSO resides.
    • installer_ip—IP address of the management interface of the host on which you deployed the installer.

    • ntp_servers—Comma-separated list of fully qualified domain names (FQDN) of Network Time Protocol (NTP) servers. For networks within firewalls, specify NTP servers specific to your network.

    • physical—Comma-separated list of hostnames of the CSO nodes or servers.

    • virtual—Comma-separated list of names of the virtual machines (VMs) on the CSO servers.

  7. Specify the following configuration values for each CSO node or server that you specified in Step 6.
    • [hostname]—Hostname of the CSO node or server

    • management_address—IP address of the Ethernet management (primary) interface in Classless Inter-Domain Routing (CIDR) notation

    • management_interface—Name of the Ethernet management interface, virbr0

    • gateway—IP address of the gateway for the host

    • dns_search—Domain for DNS operations

    • dns_servers—Comma-separated list of DNS name servers, including DNS servers specific to your network

    • hostname—Hostname of the node

    • username—Username for logging in to the node

    • password—Password for logging in to the node

    • data_interface—Name of the data interface. Leave this value blank for a centralized deployment; for a distributed deployment, specify the name of the data interface, such as virbr1, that you configured.

  8. Except for the Junos Space Virtual Appliance and VRR VMs, specify configuration values for each VM that you specified in Step 6.
    • [VM name]—Name of the VM

    • management_address—IP address of the Ethernet management interface in CIDR notation

    • hostname—Fully qualified domain name (FQDN) of the VM

    • username—Login name of user who can manage all VMs

    • password—Password for user who can manage all VMs

    • local_user—Login name of user who can manage this VM

    • local_password—Password for user who can manage this VM

    • guest_os—Name of the operating system

    • host_server—Hostname of the CSO node or server

    • memory—Required amount of RAM in GB

    • vCPU—Required number of virtual central processing units (vCPUs)

    • enable_data_interface—True enables the VM to transmit data and false prevents the VM from transmitting data. The default is false.

  9. For the Junos Space VM, specify configuration values for each VM that you specified in Step 6.
    • [VM name]—Name of the VM.

    • management_address—IP address of the Ethernet management interface in CIDR notation.

    • web_address—Virtual IP (VIP) address of the primary Junos Space Virtual Appliance. (This setting is required only for the VM on which the primary Junos Space Virtual Appliance resides.)

    • gateway—IP address of the gateway for the host. If you do not specify a value, the value defaults to the gateway defined for the CSO node or server that hosts the VM.

    • nameserver_address—IP address of the DNS nameserver.

    • hostname—FQDN of the VM.

    • username—Username for logging in to Junos Space.

    • password—Default password for logging in to Junos Space.

    • newpassword—Password that you provide when you configure the Junos Space appliance.

    • guest_os—Name of the operating system.

    • host_server—Hostname of the CSO node or server.

    • memory—Required amount of RAM in GB.

    • vCPU—Required number of virtual central processing units (vCPUs).

  10. Save the file.
  11. Run the following command to start the virtual machines.
    root@host:~/# ./provision_vm.sh
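
The following minimal fragment illustrates the [TARGETS], host, and VM sections described in Steps 6 through 8. It is a sketch only: the hostnames, IP addresses, VM names, and resource values are placeholders, and the exact value formats follow the sample configuration files listed below.

    [TARGETS]
    installer_ip = 192.0.2.10
    ntp_servers = ntp.example.net
    physical = cso-host1
    virtual = csp-central-infravm

    [cso-host1]
    management_address = 192.0.2.10/24
    management_interface = virbr0
    gateway = 192.0.2.1
    dns_search = example.net
    dns_servers = 192.0.2.2
    hostname = cso-host1
    username = root
    password = passw0rd
    data_interface =

    [csp-central-infravm]
    management_address = 192.0.2.11/24
    hostname = centralinfravm.example.net
    username = root
    password = passw0rd
    local_user = infravm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-host1
    memory = 16
    vCPU = 4
    enable_data_interface = false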

The following examples show customized configuration files for the different deployments:

Sample Configuration File for Provisioning VMs in a Trial Environment without HA

Sample Configuration File for Provisioning VMs in a Production Environment Without HA

Sample Configuration File for Provisioning VMs in a Trial Environment with HA

Sample Configuration File for Provisioning VMs in a Production Environment with HA

Provisioning VMs with the Provisioning Tool for the KVM Hypervisor

If you use the KVM hypervisor on the CSO node or server, you can use the provisioning tool to:

  • Create and configure the VMs for the CSO and Junos Space components.

  • Install the operating system in the VMs:

    • Ubuntu in the CSO VMs

    • Junos Space Network Management Platform software in the Junos Space VM

To provision VMs with the provisioning tool:

  1. Log in as root to the central CSO node or server.
  2. Access the directory for the installer. For example, if the name of the installer directory is csoVersion:
    root@host:~/# cd ~/csoVersion/
  3. Run the provisioning tool.
    root@host:~/csoVersion/# ./provision_vm.sh

    The provisioning begins.

  4. During provisioning, observe the detailed messages about the process in the following log files:
    • provision_vm.log—Contains details about the provisioning process

    • provision_vm_console.log—Contains details about the VMs

    • provision_vm_error.log—Contains details about errors that occur during provisioning

    For example:

    root@host:~/csoVersion/# cd logs
    root@host:~/csoVersion/logs/# tail -f LOGNAME

Provisioning VMware ESXi VMs Using the Provisioning Tool

If you use VMware ESXi (version 6.0) VMs on the CSO node or server, you can use the provisioning tool, provision_vm_ESXI.sh, to create and configure VMs for CSO.

Note

You cannot provision a Virtual Route Reflector (VRR) VM using the provisioning tool. You must provision the VRR VM manually.

Before you begin, ensure that the maximum supported file size for the datastore on the VMware ESXi host is greater than 512 MB. To view the maximum supported file size for the datastore, establish an SSH session to the ESXi host and run the vmkfstools -P datastorePath command.
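
For example, assuming a datastore named datastore1 (a placeholder for your datastore name):

    root@esxi-host:~# vmkfstools -P /vmfs/volumes/datastore1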

To provision VMware ESXi VMs using the provisioning tool:

  1. Download the CSO Release 3.3 installer package from the Software Downloads page to the local drive.
  2. Log in as root to an Ubuntu VM that runs kernel version 4.4.0-31-generic and has access to the Internet. The VM must have the following specifications:
    • 8 GB RAM

    • 2 vCPUs

  3. Copy the installer package from your local drive to the VM.
    root@host:~/# scp Contrail_Service_Orchestration_3.3.tar.gz root@VM:/root
  4. On the VM, extract the installer package.

    For example, if the name of the installer package is Contrail_Service_Orchestration_3.3.tar.gz:

    root@host:~/# tar -xvzf Contrail_Service_Orchestration_3.3.tar.gz

    The contents of the installer package are extracted in a directory with the same name as the installer package.

  5. Navigate to the confs directory in the VM.

    For example:

    root@host:~/# cd Contrail_Service_Orchestration_3.3/confs
  6. Make a copy of the example configuration file, provision_vm_example_ESXI.conf, which is available in the confs directory, and rename the copy provision_vm.conf.

    For example:

    root@host:~/Contrail_Service_Orchestration_3.3/confs# cp cso3.3/trial/nonha/provisionvm/provision_vm_example_ESXI.conf provision_vm.conf
  7. Open the provision_vm.conf file with a text editor.
  8. In the [TARGETS] section, specify the following values for the network on which CSO resides.
    • installer_ip—IP address of the management interface of the VM on which you are running the provisioning script.

    • ntp_servers—Comma-separated list of fully qualified domain names (FQDN) of Network Time Protocol (NTP) servers. For networks within firewalls, specify NTP servers specific to your network.

    You need not edit the following values:

    • physical—Comma-separated list of hostnames of the CSO nodes or servers.

    • virtual—Comma-separated list of names of the virtual machines (VMs) on the CSO servers.

  9. Specify the following configuration values for each ESXi host on the CSO node or server.
    • management_address—IP address of the Ethernet management (primary) interface of the VM network, in Classless Inter-Domain Routing (CIDR) notation. For example, 192.168.1.2/24.

    • gateway—Gateway IP address of the VM network

    • dns_search—Domain for DNS operations

    • dns_servers—Comma-separated list of DNS name servers, including DNS servers specific to your network

    • hostname—Hostname of the VMware ESXi host

    • username—Username for logging in to the VMware ESXi host

    • password—Password for logging in to the VMware ESXi host

    • vmnetwork—Label for each virtual network adapter. This label identifies the physical network that is associated with a virtual network adapter.

      The vmnetwork data for each VM is available in the Summary tab of the VM in the vSphere Client. Do not enclose the vmnetwork value in double quotation marks.

    • datastore—Datastore in which all VM files are saved.

      The datastore data for each VM is available in the Summary tab of the VM in the vSphere Client. Do not enclose the datastore value in double quotation marks.

  10. Save the provision_vm.conf file.
  11. Run the provision_vm_ESXI.sh script to create the VMs.
    root@host:~/Contrail_Service_Orchestration_3.3/# ./provision_vm_ESXI.sh
  12. Copy the provision_vm.conf file to the installer VM.

    For example:

    root@host:~/Contrail_Service_Orchestration_3.3/# scp confs/provision_vm.conf root@installer_VM_IP:/root/Contrail_Service_Orchestration_3.3/confs

This action brings up VMware ESXi VMs with the configuration provided in the files.
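
For reference, an ESXi host entry in the provision_vm.conf file might look similar to the following sketch, based on the fields described in Step 9; the section name and all values shown are placeholders:

    [esxi-host1]
    management_address = 192.168.1.2/24
    gateway = 192.168.1.1
    dns_search = example.net
    dns_servers = 192.168.1.3
    hostname = esxi-host1
    username = root
    password = passw0rd
    vmnetwork = VM Network
    datastore = datastore1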

Manually Provisioning VRR VMs on the Contrail Service Orchestration Node or Server

You cannot use the provisioning tool—provision_vm_ESXI.sh—to provision the Virtual Route Reflector (VRR) VM. You must provision the VRR VM manually.

To manually provision the VRR VM:

  1. Download the VRR Release 15.1F6-S7 software package (.ova format) for VMware from the Virtual Route Reflector page to a location accessible to the server.
  2. Launch the VRR using vSphere or vCenter Client for your ESXi server and log in to the server with your credentials.
  3. Set up an SSH session to the VRR VM.
  4. Execute the following commands:
    root@host:~/# configure
    root@host:~/# delete groups global system services ssh root-login deny-password
    root@host:~/# set system root-authentication plain-text-password
    root@host:~/# set system services ssh
    root@host:~/# set system services netconf ssh
    root@host:~/# set routing-options rib inet.3 static route 0.0.0.0/0 discard
    root@host:~/# commit
    root@host:~/# exit

Verifying Connectivity of the VMs

From each VM, verify that you can ping the IP addresses and hostnames of all the other servers, nodes, and VMs in the CSO deployment.
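
For example, where centralinfravm.example.net is a placeholder for the hostname of another VM in the deployment:

    root@host:~/# ping -c 4 centralinfravm.example.net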

Caution

If the VMs cannot communicate with all the other hosts in the deployment, the installation can fail.