Provisioning VMs on Contrail Service Orchestration Nodes or Servers
Virtual Machines (VMs) on the central and regional Contrail Service Orchestration (CSO) nodes or servers host the infrastructure services and some other components. All servers and VMs for the solution should be in the same subnet. To set up the VMs, you can:
Use the provisioning tool to create and configure the VMs if you use the KVM hypervisor or VMware ESXi on a CSO node or server.
The tool also installs Ubuntu in the VMs.
Manually configure Virtual Route Reflector (VRR) VMs on a CSO node or server if you use VMware ESXi.
The VMs required on a CSO node or server depend on whether you configure:
A trial environment without high availability (HA).
A production environment without HA.
A trial environment with HA.
A production environment with HA.
See Minimum Requirements for Servers and VMs for details of the VMs and associated resources required for each environment.
The following sections describe the procedures for provisioning the VMs:
Before You Begin
Before you begin you must:
Configure the physical servers or nodes.
The operating system for physical servers must be Ubuntu 14.04.5 LTS.
For a centralized deployment, configure the Contrail Cloud Platform and install Contrail OpenStack.
Downloading the Installer
To download the installer package:
- Log in as root to the central CSO node or server.
The current directory is the home directory.
- Download the appropriate installer package from https://www.juniper.net/support/downloads/?p=cso#sw.
Use the Contrail Service Orchestration installer if you purchased licenses for a centralized deployment or both Network Service Orchestrator and Network Service Controller licenses for a distributed deployment.
This option includes all the Contrail Service Orchestration graphical user interfaces (GUIs).
Use the Network Service Controller installer if you purchased only Network Service Controller licenses for a distributed deployment or SD-WAN implementation.
This option includes Administration Portal and Service and Infrastructure Monitor, but not the Designer Tools.
- Expand the installer package, which has a name specific to its contents and the release. For example, if the name of the installer package is csoVersion.tar.gz:
root@host:~/# tar -xvzf csoVersion.tar.gz
The expanded package is a directory that has the same name as the installer package and contains the installation files.
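The download-and-expand step can be sketched end to end. This is a hedged, self-contained demo: it fabricates a dummy csoVersion.tar.gz under /tmp so the tar commands are runnable anywhere; on a real CSO node you would operate on the package downloaded from the Juniper site.

```shell
#!/bin/sh
# Hedged sketch of expanding the installer package. The package name
# "csoVersion.tar.gz" and the /tmp/demo path are illustrative only.
set -e
mkdir -p /tmp/demo/csoVersion/confs          # stand-in for the real package contents
tar -czf /tmp/demo/csoVersion.tar.gz -C /tmp/demo csoVersion
cd /tmp/demo
tar -xvzf csoVersion.tar.gz                  # expands into a directory named csoVersion
ls csoVersion                                # installation files, e.g. the confs directory
```
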
Creating a Bridge Interface for KVM
If you use the KVM hypervisor, before you create VMs, you must create a bridge interface on the physical server that maps the primary network interface (Ethernet management interface) on each CSO node or server to a virtual interface. This action enables the VMs to communicate with the network.
To create the bridge interface:
- Log in as root on the central CSO node or server.
- Update the index files of the software packages installed
on the server to reference the latest versions.
root@host:~/# apt-get update
- View the network interfaces configured on the server to
obtain the name of the primary interface on the server.
root@host:~/# ifconfig
- Install the libvirt software.
root@host:~/# apt-get install libvirt-bin
- View the list of network interfaces, which now includes
the virtual interface virbr0.
root@host:~/# ifconfig
- Open the file /etc/network/interfaces and modify it to map the primary network interface to the virtual interface virbr0.
For example, use the following configuration to map the primary interface eth0 to the virtual interface virbr0:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet manual
up ifconfig eth0 0.0.0.0 up

auto virbr0
iface virbr0 inet static
bridge_ports eth0
address 192.168.1.2
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255
gateway 192.168.1.1
dns-nameservers 8.8.8.8
dns-search example.net
- Modify the default virtual network by customizing the file default.xml:
- Customize the IP address and subnet mask to match the values for the virbr0 interface in the file /etc/network/interfaces.
- Turn off the Spanning Tree Protocol (STP) option.
- Remove the NAT and DHCP configurations.
For example:
root@host:~/# virsh net-edit default
Before modification:
<network>
  <name>default</name>
  <uuid>0f04ffd0-a27c-4120-8873-854bbfb02074</uuid>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <ip address='192.168.1.2' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.1.1' end='192.168.1.254'/>
    </dhcp>
  </ip>
</network>
After modification:
<network>
  <name>default</name>
  <uuid>0f04ffd0-a27c-4120-8873-854bbfb02074</uuid>
  <bridge name='virbr0' stp='off' delay='0'/>
  <ip address='192.168.1.2' netmask='255.255.255.0'>
  </ip>
</network>
- Reboot the physical machine and log in as root again.
- Verify that the primary network interface is mapped to the virbr0 interface.
root@host:~/# brctl show
bridge name     bridge id           STP enabled     interfaces
virbr0          8000.0cc47a010808   no              em1
                                                    vnet1
                                                    vnet2
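As a quick sanity check, you can also confirm the bridge stanza in the interfaces file itself. This hedged sketch works on a sample file under /tmp so it is runnable anywhere; on the CSO node you would point it at /etc/network/interfaces (the eth0 name is an assumption carried over from the example above).

```shell
#!/bin/sh
# Hedged check that an interfaces(5) file maps the primary NIC into the bridge.
set -e
cat > /tmp/interfaces.sample <<'EOF'
auto virbr0
iface virbr0 inet static
bridge_ports eth0
address 192.168.1.2
netmask 255.255.255.0
EOF
# The bridge stanza must name the physical NIC in bridge_ports.
grep -q 'bridge_ports eth0' /tmp/interfaces.sample && echo "virbr0 bridges eth0"
```
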
Creating a Data Interface for a Distributed Deployment
For a distributed deployment, you create a second bridge interface that the VMs use to send data communications to the CPE device.
To create a data interface:
- Log in to the central CSO server as root.
- Configure the new virtual bridge interface, for example virbr1, and map it to a physical interface.
For example:
root@host:~/# brctl addbr virbr1
root@host:~/# brctl addif virbr1 eth1
- Create an XML file with the name virbr1.xml in the directory /var/lib/libvirt/network.
- Paste the following content into the virbr1.xml file, and edit the file to match the actual settings for your interface.
For example:
<network>
  <name>default</name>
  <uuid>0f04ffd0-a27c-4120-8873-854bbfb02074</uuid>
  <bridge name='virbr1' stp='off' delay='0'/>
  <ip address='192.0.2.1' netmask='255.255.255.0'>
  </ip>
</network>
- Open the /etc/network/interfaces file and add the details for the second interface.
For example:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet manual
up ifconfig eth0 0.0.0.0 up

auto eth1
iface eth1 inet manual
up ifconfig eth1 0.0.0.0 up

auto virbr0
iface virbr0 inet static
bridge_ports eth0
address 192.168.1.2
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255
gateway 192.168.1.1
dns-nameservers 8.8.8.8
dns-search example.net

auto virbr1
iface virbr1 inet static
bridge_ports eth1
address 192.168.1.2
netmask 255.255.255.0
- Reboot the server.
- Verify that the secondary network interface, eth1, is mapped to the second bridge interface.
root@host:~/# brctl show
bridge name     bridge id           STP enabled     interfaces
virbr0          8000.0cc47a010808   no              em1
                                                    vnet1
                                                    vnet2
virbr1          8000.0cc47a010809   no              em2
                                                    vnet0
- Configure the IP address for the interface. You do not specify an IP address for the data interface when you create it, so you configure the address in this step.
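The virbr1 definition can be assembled and spot-checked with plain shell before you load it into libvirt (with virsh net-define, not run here). The /tmp path is illustrative; the field values mirror the example above.

```shell
#!/bin/sh
# Hedged sketch: write the virbr1 network definition and verify the fields
# that matter (STP off, static address) before handing it to libvirt.
set -e
mkdir -p /tmp/libvirt-net
cat > /tmp/libvirt-net/virbr1.xml <<'EOF'
<network>
  <name>default</name>
  <bridge name='virbr1' stp='off' delay='0'/>
  <ip address='192.0.2.1' netmask='255.255.255.0'>
  </ip>
</network>
EOF
grep -q "stp='off'" /tmp/libvirt-net/virbr1.xml && echo "STP disabled on virbr1"
```
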
Customizing the Configuration File for the Provisioning Tool
The provisioning tool uses a configuration file, which you must customize for your network. The configuration file is in YAML format.
To customize the configuration file:
- Log in as root to the central CSO node or server.
- Access the confs directory that contains the example configuration files. For example, if the name of the installer directory is csoVersion:
root@host:~/# cd csoVersion/confs
- Access the directory for the environment that you want
to configure.
Table 1 shows the directories that contain the example configuration files.
Table 1: Location of Configuration Files for Provisioning VMs

Environment                           Directory for Example Configuration File
Trial environment without HA          cso3.3/trial/nonha/provisionvm
Production environment without HA     cso3.3/production/nonha/provisionvm
Trial environment with HA             cso3.3/trial/ha/provisionvm
Production environment with HA        cso3.3/production/ha/provisionvm
- Make a copy of the example configuration file in the confs directory and name it provision_vm.conf.
For example:
root@host:~/csoVersion/confs# cp cso3.3/trial/nonha/provisionvm/provision_vm_example.conf provision_vm.conf
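The environment-to-file mapping in Table 1 lends itself to a small parameterized copy step. A hedged sketch, using illustrative ENV/HA variables and a /tmp tree in place of the real installer directory:

```shell
#!/bin/sh
# Hedged sketch: pick the example file for the target environment and copy it
# to provision_vm.conf. The /tmp/confs tree stands in for csoVersion/confs.
set -e
ENV=trial      # or: production
HA=nonha       # or: ha
mkdir -p /tmp/confs/cso3.3/$ENV/$HA/provisionvm
printf '[TARGETS]\n' > /tmp/confs/cso3.3/$ENV/$HA/provisionvm/provision_vm_example.conf
cd /tmp/confs
cp cso3.3/$ENV/$HA/provisionvm/provision_vm_example.conf provision_vm.conf
```
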
- Open the file provision_vm.conf with a text editor.
- In the [TARGETS] section, specify the following values for the network on which CSO resides.
installer_ip—IP address of the management interface of the host on which you deployed the installer.
ntp_servers—Comma-separated list of fully qualified domain names (FQDN) of Network Time Protocol (NTP) servers. For networks within firewalls, specify NTP servers specific to your network.
physical—Comma-separated list of hostnames of the CSO nodes or servers.
virtual—Comma-separated list of names of the virtual machines (VMs) on the CSO servers.
- Specify the following configuration values for each CSO
node or server that you specified in Step 6.
[hostname]—Hostname of the CSO node or server
management_address—IP address of the Ethernet management (primary) interface in classless Internet domain routing (CIDR) notation
management_interface—Name of the Ethernet management interface, virbr0
gateway—IP address of the gateway for the host
dns_search—Domain for DNS operations
dns_servers—Comma-separated list of DNS name servers, including DNS servers specific to your network
hostname—Hostname of the node
username—Username for logging in to the node
password—Password for logging in to the node
data_interface—Name of the data interface. Leave blank for a centralized deployment and specify the name of the data interface, such as virbr1, that you configured for a distributed deployment.
- Except for the Junos Space Virtual Appliance and VRR VMs,
specify configuration values for each VM that you specified in Step 6.
[VM name]—Name of the VM
management_address—IP address of the Ethernet management interface in CIDR notation
hostname—Fully qualified domain name (FQDN) of the VM
username—Login name of user who can manage all VMs
password—Password for user who can manage all VMs
local_user—Login name of user who can manage this VM
local_password—Password for user who can manage this VM
guest_os—Name of the operating system
host_server—Hostname of the CSO node or server
memory—Required amount of RAM in MB (for example, 49152 MB for a VM that requires 48 GB)
vCPU—Required number of virtual central processing units (vCPUs)
enable_data_interface—True enables the VM to transmit data and false prevents the VM from transmitting data. The default is false.
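Before running the provisioning tool, it can pay to confirm that each VM section carries the keys listed above. A hedged sketch; the sample section and /tmp path are illustrative, and the key list is taken from this step:

```shell
#!/bin/sh
# Hedged sketch: verify a VM section of provision_vm.conf has the required keys.
set -e
cat > /tmp/provision_vm.conf <<'EOF'
[csp-central-infravm]
management_address = 192.168.1.4/24
hostname = centralinfravm.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host
memory = 49152
vcpu = 8
enable_data_interface = false
EOF
for key in management_address hostname username password guest_os host_server memory vcpu; do
    grep -q "^$key" /tmp/provision_vm.conf || { echo "missing $key"; exit 1; }
done
echo "all required keys present"
```
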
- For the Junos Space VM, specify configuration values for
each VM that you specified in Step 6.
[VM name]—Name of the VM.
management_address—IP address of the Ethernet management interface in CIDR notation.
web_address—Virtual IP (VIP) address of the primary Junos Space Virtual Appliance. (This setting is required only for the VM on which the primary Junos Space Virtual Appliance resides.)
gateway—IP address of the gateway for the host. If you do not specify a value, the value defaults to the gateway defined for the CSO node or server that hosts the VM.
nameserver_address—IP address of the DNS nameserver.
hostname—FQDN of the VM.
username—Username for logging in to Junos Space.
password—Default password for logging in to Junos Space.
newpassword—Password that you provide when you configure the Junos Space appliance.
guest_os—Name of the operating system.
host_server—Hostname of the CSO node or server.
memory—Required amount of RAM in MB.
vCPU—Required number of virtual central processing units (vCPUs).
- Save the file.
- Run the following command to provision and start the virtual machines.
root@host:~/# ./provision_vm.sh
The following examples show customized configuration files for the different deployments:
Trial environment without HA (see Sample Configuration File for Provisioning VMs in a Trial Environment without HA).
Production environment without HA (see Sample Configuration File for Provisioning VMs in a Production Environment Without HA).
Trial environment with HA (see Sample Configuration File for Provisioning VMs in a Trial Environment with HA).
Production environment with HA (see Sample Configuration File for Provisioning VMs in a Production Environment with HA).
Sample Configuration File for Provisioning VMs in a Trial Environment without HA
# This config file is used to provision KVM-based virtual machines using libvirt manager.
[TARGETS]
# Mention primary host (installer host) management_ip
installer_ip =
ntp_servers = ntp.juniper.net
# The physical server where the Virtual Machines should be provisioned
# There can be one or more physical servers on
# which virtual machines can be provisioned
physical = cso-host
# The list of virtual servers to be provisioned.
server = csp-central-infravm, csp-central-msvm, csp-central-k8mastervm, csp-regional-infravm, csp-regional-msvm, csp-regional-k8mastervm, csp-installer-vm, csp-contrailanalytics-1, csp-vrr-vm, csp-regional-sblb

# Physical Server Details
[cso-host]
management_address = 192.168.1.2/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-host
username = root
password = passw0rd
data_interface =

# VM Details
[csp-central-infravm]
management_address = 192.168.1.4/24
hostname = centralinfravm.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host
memory = 49152
vcpu = 8
enable_data_interface = false

[csp-central-msvm]
management_address = 192.168.1.5/24
hostname = centralmsvm.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host
memory = 49152
vcpu = 8
enable_data_interface = false

[csp-central-k8mastervm]
management_address = 192.168.1.14/24
hostname = centralk8mastervm.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host
memory = 8192
vcpu = 4
enable_data_interface = false

[csp-regional-infravm]
management_address = 192.168.1.6/24
hostname = regionalinfravm.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host
memory = 24576
vcpu = 4
enable_data_interface = false

[csp-regional-msvm]
management_address = 192.168.1.7/24
hostname = regionalmsvm.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host
memory = 24576
vcpu = 4
enable_data_interface = false

[csp-regional-k8mastervm]
management_address = 192.168.1.15/24
hostname = regionalk8mastervm.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host
memory = 8192
vcpu = 4
enable_data_interface = false

[csp-installer-vm]
management_address = 192.168.1.10/24
hostname = installervm.example.net
username = root
password = passw0rd
local_user = installervm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host
memory = 24576
vcpu = 4
enable_data_interface = false

[csp-contrailanalytics-1]
management_address = 192.168.1.11/24
hostname = canvm.example.net
username = root
password = passw0rd
local_user = canvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host
memory = 49152
vcpu = 8
enable_data_interface = false

[csp-regional-sblb]
management_address = 192.168.1.12/24
hostname = regional-sblb.example.net
username = root
password = passw0rd
local_user = sblb
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host
memory = 8192
vcpu = 4
enable_data_interface = true

[csp-vrr-vm]
management_address = 192.168.1.13/24
hostname = vrr.example.net
gateway = 192.168.1.1
newpassword = passw0rd
guest_os = vrr
host_server = cso-host
memory = 8192
vcpu = 4

[csp-space-vm]
management_address = 192.168.1.14/24
web_address = 192.168.1.15/24
gateway = 192.168.1.1
nameserver_address = 192.168.1.254
hostname = spacevm.example.net
username = admin
password = abc123
newpassword = jnpr123!
guest_os = space
host_server = cso-host
memory = 16384
vcpu = 4
Sample Configuration File for Provisioning VMs in a Production Environment Without HA
# This config file is used to provision KVM-based virtual machines using libvirt manager.
[TARGETS]
# Mention primary host (installer host) management_ip
installer_ip =
ntp_servers = ntp.juniper.net
# The physical server where the Virtual Machines should be provisioned
# There can be one or more physical servers on
# which virtual machines can be provisioned
physical = cso-central-host, cso-regional-host
# Note: Central and Regional physical servers are used as "csp-central-ms" and "csp-regional-ms" servers.
# The list of servers to be provisioned and mention the contrail analytics servers also in "server" list.
server = csp-central-infravm, csp-regional-infravm, csp-installer-vm, csp-space-vm, csp-contrailanalytics-1, csp-central-elkvm, csp-regional-elkvm, csp-central-msvm, csp-regional-msvm, csp-vrr-vm, csp-regional-sblb

# Physical Server Details
[cso-central-host]
management_address = 192.168.1.2/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-central-host
username = root
password = passw0rd
data_interface =

[cso-regional-host]
management_address = 192.168.1.3/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-regional-host
username = root
password = passw0rd
data_interface =

[csp-contrailanalytics-1]
management_address = 192.168.1.9/24
management_interface =
hostname = canvm.example.net
username = root
password = passw0rd
vm = false

# VM Details
[csp-central-infravm]
management_address = 192.168.1.4/24
hostname = centralinfravm.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host
memory = 65536
vcpu = 16
enable_data_interface = false

[csp-regional-infravm]
management_address = 192.168.1.5/24
hostname = regionalinfravm.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host
memory = 65536
vcpu = 16
enable_data_interface = false

[csp-space-vm]
management_address = 192.168.1.6/24
web_address = 192.168.1.7/24
gateway = 192.168.1.1
nameserver_address = 192.168.1.254
hostname = spacevm.example.net
username = admin
password = abc123
newpassword = jnpr123!
guest_os = space
host_server = cso-regional-host
memory = 32768
vcpu = 4

[csp-installer-vm]
management_address = 192.168.1.8/24
hostname = installer.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host
memory = 65536
vcpu = 4
enable_data_interface = false

[csp-central-elkvm]
management_address = 192.168.1.10/24
hostname = centralelkvm.example.net
username = root
password = passw0rd
local_user = elkvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host
memory = 32768
vcpu = 4
enable_data_interface = false

[csp-regional-elkvm]
management_address = 192.168.1.11/24
hostname = regionalelkvm.example.net
username = root
password = passw0rd
local_user = elkvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host
memory = 32768
vcpu = 4
enable_data_interface = false

[csp-central-msvm]
management_address = 192.168.1.12/24
hostname = centralmsvm.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host
memory = 65536
vcpu = 16
enable_data_interface = false

[csp-regional-msvm]
management_address = 192.168.1.13/24
hostname = regionalmsvm.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host
memory = 65536
vcpu = 16
enable_data_interface = false

[csp-regional-sblb]
management_address = 192.168.1.14/24
hostname = regional-sblb.example.net
username = root
password = passw0rd
local_user = sblb
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host
memory = 32768
vcpu = 4
enable_data_interface = true

[csp-vrr-vm]
management_address = 192.168.1.15/24
hostname = vrr.example.net
gateway = 192.168.1.1
newpassword = passw0rd
guest_os = vrr
host_server = cso-regional-host
memory = 8192
vcpu = 4
Sample Configuration File for Provisioning VMs in a Trial Environment with HA
# This config file is used to provision KVM-based virtual machines using libvirt manager.
[TARGETS]
# Mention primary host (installer host) management_ip
installer_ip =
ntp_servers = ntp.juniper.net
# The physical server where the Virtual Machines should be provisioned
# There can be one or more physical servers on
# which virtual machines can be provisioned
physical = cso-host1, cso-host2, cso-host3
# The list of virtual servers to be provisioned.
server = csp-central-infravm1, csp-central-infravm2, csp-central-infravm3, csp-central-msvm1, csp-central-msvm2, csp-central-msvm3, csp-regional-infravm1, csp-regional-infravm2, csp-regional-infravm3, csp-regional-msvm1, csp-regional-msvm2, csp-regional-msvm3, csp-contrailanalytics-1, csp-central-lbvm1, csp-central-lbvm2, csp-central-lbvm3, csp-regional-lbvm1, csp-regional-lbvm2, csp-regional-lbvm3, csp-space-vm, csp-installer-vm, csp-vrr-vm1, csp-vrr-vm2, csp-regional-sblb1, csp-regional-sblb2

# Physical Server Details
[cso-host1]
management_address = 192.168.1.2/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-host1
username = root
password = passw0rd
data_interface =

[cso-host2]
management_address = 192.168.1.3/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-host2
username = root
password = passw0rd
data_interface =

[cso-host3]
management_address = 192.168.1.4/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-host3
username = root
password = passw0rd
data_interface =

# VM Details
[csp-central-infravm1]
management_address = 192.168.1.5/24
hostname = centralinfravm1.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host1
memory = 32768
vcpu = 4
enable_data_interface = false

[csp-central-infravm2]
management_address = 192.168.1.6/24
hostname = centralinfravm2.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host2
memory = 32768
vcpu = 4
enable_data_interface = false

[csp-central-infravm3]
management_address = 192.168.1.7/24
hostname = centralinfravm3.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host3
memory = 32768
vcpu = 4
enable_data_interface = false

[csp-central-msvm1]
management_address = 192.168.1.8/24
hostname = centralmsvm1.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host1
memory = 65536
vcpu = 8
enable_data_interface = false

[csp-central-msvm2]
management_address = 192.168.1.9/24
hostname = centralmsvm2.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host2
memory = 65536
vcpu = 8
enable_data_interface = false

[csp-central-msvm3]
management_address = 192.168.1.9/24
hostname = centralmsvm3.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host3
memory = 65536
vcpu = 8
enable_data_interface = false

[csp-regional-infravm1]
management_address = 192.168.1.10/24
hostname = regionalinfravm1.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host1
memory = 32768
vcpu = 4
enable_data_interface = false

[csp-regional-infravm2]
management_address = 192.168.1.11/24
hostname = regionalinfravm2.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host2
memory = 32768
vcpu = 4
enable_data_interface = false

[csp-regional-infravm3]
management_address = 192.168.1.12/24
hostname = regionalinfravm3.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host3
memory = 32768
vcpu = 4
enable_data_interface = false

[csp-regional-msvm1]
management_address = 192.168.1.13/24
hostname = regionalmsvm1.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host1
memory = 32768
vcpu = 8
enable_data_interface = false

[csp-regional-msvm2]
management_address = 192.168.1.14/24
hostname = regionalmsvm2.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host2
memory = 32768
vcpu = 8
enable_data_interface = false

[csp-regional-msvm3]
management_address = 192.168.1.14/24
hostname = regionalmsvm3.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host3
memory = 32768
vcpu = 8
enable_data_interface = false

[csp-space-vm]
management_address = 192.168.1.15/24
web_address = 192.168.1.16/24
gateway = 192.168.1.1
nameserver_address = 192.168.1.254
hostname = spacevm.example.net
username = admin
password = abc123
newpassword = jnpr123!
guest_os = space
host_server = cso-host3
memory = 16384
vcpu = 4

[csp-installer-vm]
management_address = 192.168.1.17/24
hostname = installervm.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host1
memory = 49152
vcpu = 4
enable_data_interface = false

[csp-contrailanalytics-1]
management_address = 192.168.1.18/24
hostname = can1.example.net
username = root
password = passw0rd
local_user = installervm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host2
memory = 49152
vcpu = 16
enable_data_interface = false

[csp-central-lbvm1]
management_address = 192.168.1.19/24
hostname = centrallbvm1.example.net
username = root
password = passw0rd
local_user = lbvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host1
memory = 16384
vcpu = 4
enable_data_interface = false

[csp-central-lbvm2]
management_address = 192.168.1.20/24
hostname = centrallbvm2.example.net
username = root
password = passw0rd
local_user = lbvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host2
memory = 16384
vcpu = 4
enable_data_interface = false

[csp-central-lbvm3]
management_address = 192.168.1.20/24
hostname = centrallbvm3.example.net
username = root
password = passw0rd
local_user = lbvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host3
memory = 16384
vcpu = 4
enable_data_interface = false

[csp-regional-lbvm1]
management_address = 192.168.1.21/24
hostname = regionallbvm1.example.net
username = root
password = passw0rd
local_user = lbvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host1
memory = 16384
vcpu = 4
enable_data_interface = false

[csp-regional-lbvm2]
management_address = 192.168.1.22/24
hostname = regionallbvm2.example.net
username = root
password = passw0rd
local_user = lbvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host2
memory = 16384
vcpu = 4
enable_data_interface = false

[csp-regional-lbvm3]
management_address = 192.168.1.22/24
hostname = regionallbvm3.example.net
username = root
password = passw0rd
local_user = lbvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host3
memory = 16384
vcpu = 4
enable_data_interface = false

[csp-vrr-vm1]
management_address = 192.168.1.23/24
hostname = vrr1.example.net
gateway = 192.168.1.1
newpassword = passw0rd
guest_os = vrr
host_server = cso-host3
memory = 8192
vcpu = 4

[csp-vrr-vm2]
management_address = 192.168.1.24/24
hostname = vrr2.example.net
gateway = 192.168.1.1
newpassword = passw0rd
guest_os = vrr
host_server = cso-host3
memory = 8192
vcpu = 4

[csp-regional-sblb1]
management_address = 192.168.1.25/24
hostname = regional-sblb1.example.net
username = root
password = passw0rd
local_user = sblb
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host1
memory = 24576
vcpu = 4
enable_data_interface = true

[csp-regional-sblb2]
management_address = 192.168.1.26/24
hostname = regional-sblb2.example.net
username = root
password = passw0rd
local_user = sblb
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host2
memory = 24576
vcpu = 4
enable_data_interface = true
Sample Configuration File for Provisioning VMs in a Production Environment with HA
# This config file is used to provision KVM-based virtual machines using libvirt manager.
[TARGETS]
# Mention primary host (installer host) management_ip
installer_ip =
ntp_servers = ntp.juniper.net
# The physical server where the Virtual Machines should be provisioned
# There can be one or more physical servers on
# which virtual machines can be provisioned
physical = cso-central-host1, cso-central-host2, cso-central-host3, cso-regional-host1, cso-regional-host2, cso-regional-host3
# The list of servers to be provisioned and mention the contrail analytics servers also in "server" list.
server = csp-central-infravm1, csp-central-infravm2, csp-central-infravm3, csp-regional-infravm1, csp-regional-infravm2, csp-regional-infravm3, csp-central-lbvm1, csp-central-lbvm2, csp-central-lbvm3, csp-regional-lbvm1, csp-regional-lbvm2, csp-regional-lbvm3, csp-space-vm, csp-installer-vm, csp-contrailanalytics-1, csp-contrailanalytics-2, csp-contrailanalytics-3, csp-central-elkvm1, csp-central-elkvm2, csp-central-elkvm3, csp-regional-elkvm1, csp-regional-elkvm2, csp-regional-elkvm3, csp-central-msvm1, csp-central-msvm2, csp-central-msvm3, csp-regional-msvm1, csp-regional-msvm2, csp-regional-msvm3, csp-vrr-vm1, csp-vrr-vm2, csp-regional-sblb1, csp-regional-sblb2, csp-regional-sblb3

# Physical Server Details
[cso-central-host1]
management_address = 192.168.1.2/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-central-host1
username = root
password = passw0rd
data_interface =

[cso-central-host2]
management_address = 192.168.1.3/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-central-host2
username = root
password = passw0rd
data_interface =

[cso-central-host3]
management_address = 192.168.1.4/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-central-host3
username = root
password = passw0rd
data_interface =

[cso-regional-host1]
management_address = 192.168.1.5/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-regional-host1
username = root
password = passw0rd
data_interface =

[cso-regional-host2]
management_address = 192.168.1.6/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-regional-host2
username = root
password = passw0rd
data_interface =

[cso-regional-host3]
management_address = 192.168.1.7/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-regional-host3
username = root
password = passw0rd
data_interface =

[csp-contrailanalytics-1]
management_address = 192.168.1.17/24
management_interface =
hostname = can1.example.net
username = root
password = passw0rd
vm = false

[csp-contrailanalytics-2]
management_address = 192.168.1.18/24
management_interface =
hostname = can2.example.net
username = root
password = passw0rd
vm = false

[csp-contrailanalytics-3]
management_address = 192.168.1.19/24
management_interface =
hostname = can3.example.net
username = root
password = passw0rd
vm = false

# VM Details
[csp-central-infravm1]
management_address = 192.168.1.8/24
hostname = centralinfravm1.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host1
memory = 65536
vcpu = 16
enable_data_interface = false

[csp-central-infravm2]
management_address = 192.168.1.9/24
hostname = centralinfravm2.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host2
memory = 65536
vcpu = 16
enable_data_interface = false

[csp-central-infravm3]
management_address = 192.168.1.10/24
hostname = centralinfravm3.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host3
memory = 65536
vcpu = 16
enable_data_interface = false

[csp-regional-infravm1]
management_address = 192.168.1.11/24
hostname = regionalinfravm1.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host1
memory = 65536
vcpu = 16
enable_data_interface = false

[csp-regional-infravm2]
management_address = 192.168.1.12/24
hostname = regionalinfravm2.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host2
memory = 65536
vcpu = 16
enable_data_interface = false

[csp-regional-infravm3]
management_address = 192.168.1.13/24
hostname = regionalinfravm3.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host3
memory = 65536
vcpu = 16
enable_data_interface = false

[csp-space-vm]
management_address = 192.168.1.14/24
web_address = 192.168.1.15/24
gateway = 192.168.1.1
nameserver_address = 192.168.1.254
hostname = spacevm.example.net
username = admin
password = abc123
newpassword = jnpr123!
guest_os = space
host_server = cso-central-host2
memory = 32768
vcpu = 4

[csp-installer-vm]
management_address = 192.168.1.16/24
hostname = installervm.example.net
username = root
password = passw0rd
local_user = installervm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host1
memory = 32768
vcpu = 4
enable_data_interface = false

[csp-central-lbvm1]
management_address = 192.168.1.20/24
hostname = centrallbvm1.example.net
username = root
password = passw0rd
local_user = lbvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host1
memory = 32768
vcpu = 4
enable_data_interface = false

[csp-central-lbvm2]
management_address = 192.168.1.21/24
hostname = centrallbvm2.example.net
username = root
password = passw0rd
local_user = lbvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host2
memory = 32768
vcpu = 4
enable_data_interface = false

[csp-central-lbvm3]
management_address = 192.168.1.22/24
hostname = centrallbvm3.example.net
username = root
password = passw0rd
local_user = lbvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host3
memory = 32768
vcpu = 4
enable_data_interface = false

[csp-regional-lbvm1]
management_address = 192.168.1.23/24
hostname = regionallbvm1.example.net
username = root
password = passw0rd
local_user = lbvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host1
memory = 32768
vcpu = 4
enable_data_interface = false

[csp-regional-lbvm2]
management_address = 192.168.1.24/24
hostname = regionallbvm2.example.net
username = root
password = passw0rd
local_user = lbvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host2
memory = 32768
vcpu = 4
enable_data_interface = false

[csp-regional-lbvm3]
management_address = 192.168.1.25/24
hostname = regionallbvm3.example.net
username = root
password = passw0rd
local_user = lbvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host3
memory = 32768
vcpu = 4
enable_data_interface = false [csp-central-elkvm1] management_address = 192.168.1.26/24 hostname = centralelkvm1.example.net username = root password = passw0rd local_user = elkvm local_password = passw0rd guest_os = ubuntu host_server = cso-central-host1 memory = 32768 vcpu = 4 enable_data_interface = false [csp-central-elkvm2] management_address = 192.168.1.27/24 hostname = centralelkvm2.example.net username = root password = passw0rd local_user = elkvm local_password = passw0rd guest_os = ubuntu host_server = cso-central-host2 memory = 32768 vcpu = 4 enable_data_interface = false [csp-central-elkvm3] management_address = 192.168.1.28/24 hostname = centralelkvm3.example.net username = root password = passw0rd local_user = elkvm local_password = passw0rd guest_os = ubuntu host_server = cso-central-host3 memory = 32768 vcpu = 4 enable_data_interface = false [csp-regional-elkvm1] management_address = 192.168.1.29/24 hostname = regionalelkvm1.example.net username = root password = passw0rd local_user = elkvm local_password = passw0rd guest_os = ubuntu host_server = cso-regional-host1 memory = 32768 vcpu = 4 enable_data_interface = false [csp-regional-elkvm2] management_address = 192.168.1.30/24 hostname = regionalelkvm2.example.net username = root password = passw0rd local_user = elkvm local_password = passw0rd guest_os = ubuntu host_server = cso-regional-host2 memory = 32768 vcpu = 4 enable_data_interface = false [csp-regional-elkvm3] management_address = 192.168.1.31/24 hostname = regionalelkvm3.example.net username = root password = passw0rd local_user = elkvm local_password = passw0rd guest_os = ubuntu host_server = cso-regional-host3 memory = 32768 vcpu = 4 enable_data_interface = false [csp-central-msvm1] management_address = 192.168.1.32/24 hostname = centralmsvm1.example.net username = root password = passw0rd local_user = msvm local_password = passw0rd guest_os = ubuntu host_server = cso-central-host1 memory = 65536 vcpu = 16 enable_data_interface = false 
[csp-central-msvm2] management_address = 192.168.1.33/24 hostname = centralmsvm2.example.net username = root password = passw0rd local_user = msvm local_password = passw0rd guest_os = ubuntu host_server = cso-central-host2 memory = 65536 vcpu = 16 enable_data_interface = false [csp-central-msvm3] management_address = 192.168.1.34/24 hostname = centralmsvm3.example.net username = root password = passw0rd local_user = msvm local_password = passw0rd guest_os = ubuntu host_server = cso-central-host3 memory = 65536 vcpu = 16 enable_data_interface = false [csp-regional-msvm1] management_address = 192.168.1.35/24 hostname = regionalmsvm1.example.net username = root password = passw0rd local_user = msvm local_password = passw0rd guest_os = ubuntu host_server = cso-regional-host1 memory = 65536 vcpu = 16 enable_data_interface = false [csp-regional-msvm2] management_address = 192.168.1.36/24 hostname = regionalmsvm2.example.net username = root password = passw0rd local_user = msvm local_password = passw0rd guest_os = ubuntu host_server = cso-regional-host2 memory = 65536 vcpu = 16 enable_data_interface = false [csp-regional-msvm3] management_address = 192.168.1.37/24 hostname = regionalmsvm3.example.net username = root password = passw0rd local_user = msvm local_password = passw0rd guest_os = ubuntu host_server = cso-regional-host3 memory = 65536 vcpu = 16 enable_data_interface = false [csp-regional-sblb1] management_address = 192.168.1.38/24 hostname = regional-sblb1.example.net username = root password = passw0rd local_user = sblb local_password = passw0rd guest_os = ubuntu host_server = cso-regional-host1 memory = 32768 vcpu = 4 enable_data_interface = true [csp-regional-sblb2] management_address = 192.168.1.39/24 hostname = regional-sblb2.example.net username = root password = passw0rd local_user = sblb local_password = passw0rd guest_os = ubuntu host_server = cso-regional-host2 memory = 32768 vcpu = 4 enable_data_interface = true [csp-regional-sblb3] management_address 
= 192.168.1.40/24 hostname = regional-sblb3.example.net username = root password = passw0rd local_user = sblb local_password = passw0rd guest_os = ubuntu host_server = cso-regional-host3 memory = 32768 vcpu = 4 enable_data_interface = true [csp-vrr-vm1] management_address = 192.168.1.41/24 hostname = vrr1.example.net gateway = 192.168.1.1 newpassword = passw0rd guest_os = vrr host_server = cso-regional-host3 memory = 32768 vcpu = 4 [csp-vrr-vm2] management_address = 192.168.1.42/24 hostname = vrr2.example.net gateway = 192.168.1.1 newpassword = passw0rd guest_os = vrr host_server = cso-regional-host2 memory = 32768 vcpu = 4
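Because this is a standard INI-style file, you can sanity-check it before running the provisioning tool. The following sketch (not part of the CSO tooling; the file name and the tiny demo config are assumptions for illustration) uses awk and grep to confirm that every VM named in the server list has a matching [section] header:

```shell
# Sketch: verify that every VM in the "server" list has a matching
# [section] in the config file. Not part of the CSO tooling.
conf=provision_vm.conf   # assumed file name

# Tiny demo config so the snippet is self-contained; on a real system,
# point $conf at your actual provision_vm.conf instead.
cat > "$conf" <<'EOF'
[TARGETS]
server = csp-central-infravm1, csp-vrr-vm1
[csp-central-infravm1]
management_address = 192.168.1.8/24
EOF

missing=0
# Extract the comma-separated server list and split it into words.
servers=$(awk -F'=' '/^server *=/ {print $2}' "$conf" | tr -d ' ' | tr ',' ' ')
for s in $servers; do
  if ! grep -q "^\[$s\]" "$conf"; then
    echo "missing section: $s"
    missing=1
  fi
done
[ "$missing" -eq 0 ] && echo "all sections present"
```

In the demo config, csp-vrr-vm1 is listed but has no section, so the script reports it as missing.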
Provisioning VMs with the Provisioning Tool for the KVM Hypervisor
If you use the KVM hypervisor on the CSO node or server, you can use the provisioning tool to:
Create and configure the VMs for the CSO and Junos Space components.
Install the operating system in the VMs:
Ubuntu in the CSO VMs
Junos Space Network Management Platform software in the Junos Space VM
To provision VMs with the provisioning tool:
- Log in as root to the central CSO node or server.
- Access the installer directory. For example, if the name of the installer directory is csoVersion:
root@host:~/# cd ~/csoVersion/
- Run the provisioning tool.
root@host:~/csoVersion/# ./provision_vm.sh
The provisioning begins.
- During installation, observe detailed messages about the provisioning of the VMs in the log files:
provision_vm.log—Contains details about the provisioning process
provision_vm_console.log—Contains details about the VMs
provision_vm_error.log—Contains details about errors that occur during provisioning
For example:
root@host:~/csoVersion/# cd logs
root@host:~/csoVersion/logs/# tail -f LOGNAME
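After the tool finishes, you can check whether provisioning completed cleanly by scanning the error log. A minimal sketch follows; the log lines below are fabricated for illustration, and on a real system you would point the script at logs/provision_vm_error.log instead:

```shell
# Sketch: report whether the provisioning error log contains errors.
log=provision_vm_error.log

# Fabricated sample log so the snippet is self-contained.
cat > "$log" <<'EOF'
2018-01-01 10:00:01 INFO starting VM csp-central-infravm1
2018-01-01 10:02:17 ERROR failed to reach host cso-central-host3
EOF

errors=$(grep -c 'ERROR' "$log")
if [ "$errors" -gt 0 ]; then
  echo "provisioning reported $errors error(s):"
  grep 'ERROR' "$log"
else
  echo "no errors logged"
fi
```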
Provisioning VMware ESXi VMs Using the Provisioning Tool
If you use VMware ESXi (version 6.0) VMs on the CSO node or server, you can use the provisioning tool—that is, provision_vm_ESXI.sh—to create and configure VMs for CSO.
You cannot provision a Virtual Route Reflector (VRR) VM using the provisioning tool. You must provision the VRR VM manually.
Before you begin, ensure that the maximum supported file size for the datastore in VMware ESXi is greater than 512 MB. To view the maximum supported file size of a datastore, establish an SSH session to the ESXi host and run the vmkfstools -P datastorePath command.
To provision VMware ESXi VMs using the provisioning tool:
- Download the CSO Release 3.3 installer package from the Software Downloads page to the local drive.
- Log in as root to an Ubuntu VM with kernel version 4.4.0-31-generic and access to the Internet. The VM must have the following specifications:
8 GB RAM
2 vCPUs
- Copy the installer package from your local drive to the VM.
root@host:~/# scp Contrail_Service_Orchestration_3.3.tar.gz root@VM:/root
- On the VM, extract the installer package.
For example, if the name of the installer package is Contrail_Service_Orchestration_3.3.tar.gz,
root@host:~/# tar -xvzf Contrail_Service_Orchestration_3.3.tar.gz
The contents of the installer package are extracted in a directory with the same name as the installer package.
- Navigate to the confs directory in the VM. For example:
root@host:~/# cd Contrail_Service_Orchestration_3.3/confs
root@host:~/Contrail_Service_Orchestration_3.3/confs#
- Make a copy of the example configuration file, provision_vm_example_ESXI.conf, that is available in the confs directory, and rename the copy provision_vm.conf. For example:
root@host:~/Contrail_Service_Orchestration_3.3/confs# cp /cso3.3/trial/nonha/provisionvm/provision_vm_example_ESXI.conf provision_vm.conf
- Open the provision_vm.conf file with a text editor.
- In the [TARGETS] section, specify the following values for the network on which CSO resides.
installer_ip—IP address of the management interface of the VM on which you are running the provisioning script.
ntp_servers—Comma-separated list of fully qualified domain names (FQDN) of Network Time Protocol (NTP) servers. For networks within firewalls, specify NTP servers specific to your network.
You need not edit the following values:
physical—Comma-separated list of hostnames of the CSO nodes or servers
virtual—Comma-separated list of names of the virtual machines (VMs) on the CSO servers
- Specify the following configuration values for each ESXi host on the CSO node or server.
management_address—IP address of the Ethernet management (primary) interface, in classless interdomain routing (CIDR) notation, on the VM network. For example, 192.168.1.2/24.
gateway—Gateway IP address of the VM network
dns_search—Domain for DNS operations
dns_servers—Comma-separated list of DNS name servers, including DNS servers specific to your network
hostname—Hostname of the VMware ESXi host
username—Username for logging in to the VMware ESXi host
password—Password for logging in to the VMware ESXi host
vmnetwork—Label for each virtual network adapter. This label identifies the physical network associated with a virtual network adapter.
The vmnetwork value for each VM is available in the Summary tab of the VM in the vSphere Client. Do not enclose the vmnetwork value in double quotation marks.
datastore—Datastore in which all VM files are saved.
The datastore value for each VM is available in the Summary tab of the VM in the vSphere Client. Do not enclose the datastore value in double quotation marks.
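Putting the values above together, a single ESXi host entry in provision_vm.conf looks like the following sketch. All addresses, the vmnetwork label, and the datastore name are placeholders; substitute the values shown in your vSphere Client.

```
[cso-central-host1]
management_address = 192.168.1.2/24
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = esxi-host1.example.net
username = root
password = passw0rd
vmnetwork = VM Network
datastore = datastore1
```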
- Save the provision_vm.conf file.
- Run the provision_vm_ESXI.sh script to create the VMs.
root@host:~/Contrail_Service_Orchestration_3.3/# ./provision_vm_ESXI.sh
- Copy the provision_vm.conf file to the installer VM. For example:
root@host:~/Contrail_Service_Orchestration_3.3/# scp confs/provision_vm.conf root@installer_VM_IP:/root/Contrail_Service_Orchestration_3.3/confs
This action brings up the VMware ESXi VMs with the configuration provided in the file.
Manually Provisioning VRR VMs on the Contrail Service Orchestration Node or Server
You cannot use the provisioning tool—provision_vm_ESXI.sh—to provision the Virtual Route Reflector (VRR) VM. You must provision the VRR VM manually.
To manually provision the VRR VM:
- Download the VRR Release 15.1F6-S7 software package (.ova format) for VMware from the Virtual Route Reflector page, to a location accessible to the server.
- Launch the VRR using vSphere or vCenter Client for your ESXi server and log in to the server with your credentials.
- Set up an SSH session to the VRR VM.
- Execute the following commands:
root@host:~/# configure
root@host:~/# delete groups global system services ssh root-login deny-password
root@host:~/# set system root-authentication plain-text-password
root@host:~/# set system services ssh
root@host:~/# set system services netconf ssh
root@host:~/# set routing-options rib inet.3 static route 0.0.0.0/0 discard
root@host:~/# commit
root@host:~/# exit
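For reference, after the commit the set commands above should leave configuration equivalent to the following sketch, shown in Junos curly-brace form (the root-authentication hash is a placeholder produced when you enter the plain-text password):

```
system {
    root-authentication {
        encrypted-password "<hash>"; ## set via plain-text-password
    }
    services {
        ssh;
        netconf {
            ssh;
        }
    }
}
routing-options {
    rib inet.3 {
        static {
            route 0.0.0.0/0 discard;
        }
    }
}
```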
Verifying Connectivity of the VMs
From each VM, verify that you can ping the IP addresses and hostnames of all the other servers, nodes, and VMs in the CSO deployment.
If the VMs cannot communicate with all the other hosts in the deployment, the installation can fail.
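This check can be scripted. The sketch below pings each address in a list once and prints an OK/FAIL line per host; the host list is a placeholder (127.0.0.1 stands in for the management addresses from your provision_vm.conf):

```shell
# Sketch: one ping per host, with a status line for each.
# Replace the placeholder list with your real management addresses/hostnames.
hosts="127.0.0.1"
for h in $hosts; do
  if ping -c 1 -W 2 "$h" > /dev/null 2>&1; then
    echo "OK   $h"
  else
    echo "FAIL $h"
  fi
done
```

Run the same loop from every VM in the deployment; any FAIL line indicates a connectivity gap that should be fixed before installation.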