Provisioning VMs on Contrail Service Orchestration Nodes or Servers
Virtual Machines (VMs) on the central and regional Contrail Service Orchestration (CSO) nodes or servers host the infrastructure services and some components. All servers and VMs for the solution should be in the same subnet.
Use the provisioning tool to create and configure the VMs if you use the KVM hypervisor or VMware ESXi on a CSO node or server.
The VMs created by the provisioning tool have Ubuntu preinstalled.
If you use the KVM hypervisor when installing a Distributed CPE (Hybrid WAN) or an SD-WAN solution, you must create a bridge interface on the physical server before you create VMs. The bridge interface maps the primary network interface (the Ethernet management interface) on each CSO node or server to a virtual interface, which enables the VMs to communicate with the network. This step is not required for a centralized solution.
The VMs required on a CSO node or server depend on the size of the deployment that you configure:
A small deployment (see Sample Configuration File for Provisioning VMs in a Small Deployment)
A medium deployment (see Sample Configuration File for Provisioning VMs in a Medium Deployment)
A large deployment (see Sample Configuration File for Provisioning VMs in a Large Deployment)
The small and medium deployments are always region-less, whereas the large deployment is always region-based.
See Minimum Requirements for Servers and VMs for details of the VMs and associated resources required for each deployment.
The following sections describe the procedures for provisioning the VMs:
Before You Begin
Before you begin, you must:
Configure the CSO nodes or servers.
Install Ubuntu 14.04.5 LTS as the operating system for the physical servers.
Configure the Contrail Cloud Platform and install Contrail OpenStack if you are performing a centralized CPE deployment.
Downloading the Installer
To download the installer package:
- Log in as root to the central CSO node or server.
The current directory is the home directory.
- Download the appropriate installer package from the CSO Download page.
Use the Contrail Service Orchestration installer if you have purchased licenses for a centralized deployment or both Network Service Orchestrator and Network Service Controller licenses for a distributed deployment.
This installer includes all the Contrail Service Orchestration graphical user interfaces (GUIs).
Use the Network Service Controller installer if you have purchased only Network Service Controller licenses for a distributed deployment or SD-WAN implementation.
This installer includes Administration Portal and Service and Infrastructure Monitor, but not the Designer Tools.
- Expand the installer package, which has a name specific to its contents and the release. For example, if the name of the installer package is csoVersion.tar.gz:
root@host:~/# tar -xvzf csoVersion.tar.gz
The expanded package is a directory that has the same name as the installer package and contains the installation files.
Creating a Bridge Interface for KVM
If you use the KVM hypervisor, you must create a bridge interface on the physical server that maps the primary network interface (Ethernet management interface) on each CSO server node or server to a virtual interface before you create VMs. This action enables the VMs to communicate with the network.
A physical server or node needs Internet access to install the libvirt-bin package.
To create the bridge interface:
- Log in as root on the central CSO node or server.
- Update the index files of the software packages installed
on the server to reference the latest versions.
root@host:~/# apt-get update
- View the network interfaces configured on the server to
obtain the name of the primary interface on the server.
root@host:~/# ifconfig
- Install the libvirt software.
root@host:~/# apt-get install libvirt-bin
- View the list of network interfaces, which now includes
the virtual interface virbr0.
root@host:~/# ifconfig
- Open the /etc/network/interfaces file and modify it to map the primary network interface to the virtual interface virbr0. For example, use the following configuration to map the primary interface eth0 to the virtual interface virbr0:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet manual
up ifconfig eth0 0.0.0.0 up

auto virbr0
iface virbr0 inet static
bridge_ports eth0
address 192.168.1.2
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255
gateway 192.168.1.1
dns-nameservers 8.8.8.8
dns-search example.net
- Reboot the physical machine and log in as root again.
- Verify that the primary network interface is mapped to
the virbr0 interface.
root@host:~/# brctl show
bridge name     bridge id           STP enabled     interfaces
virbr0          8000.0cc47a010808   no              em1
                                                    vnet1
                                                    vnet2
Creating a Data Interface for a Distributed Deployment
For a distributed deployment on KVM hypervisor, you create a second bridge interface that the VMs use to send data communications to the CPE device.
A physical server or node needs Internet access to install the libvirt-bin package.
To create a data interface:
- Log in to the central CSO server as root.
- Configure the new virtual interface and map it to a physical interface.
For example:
root@host:~/# brctl addbr virbr1
root@host:~/# brctl addif virbr1 eth1
- Create a file with the name virbr1.xml in the /var/lib/libvirt/network directory.
- Paste the following content into the virbr1.xml file, and edit the file to match the actual settings for your interface. For example:
<network>
    <name>default</name>
    <uuid>0f04ffd0-a27c-4120-8873-854bbfb02074</uuid>
    <bridge name='virbr1' stp='off' delay='0'/>
    <ip address='192.0.2.1' netmask='255.255.255.0'>
    </ip>
</network>
- Open the /etc/network/interfaces file and add the details for the second interface. For example:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet manual
up ifconfig eth0 0.0.0.0 up

auto eth1
iface eth1 inet manual
up ifconfig eth1 0.0.0.0 up

auto virbr0
iface virbr0 inet static
bridge_ports eth0
address 192.168.1.2
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255
gateway 192.168.1.1
dns-nameservers 8.8.8.8
dns-search example.net

auto virbr1
iface virbr1 inet static
bridge_ports eth1
address 192.0.2.1
netmask 255.255.255.0
- Reboot the server.
- Verify that the secondary network interface, eth1, is
mapped to the second interface.
root@host:~/# brctl show
bridge name     bridge id           STP enabled     interfaces
virbr0          8000.0cc47a010808   no              em1
                                                    vnet1
                                                    vnet2
virbr1          8000.0cc47a010809   no              em2
                                                    vnet0
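A quick well-formedness check of the network definition can catch editing mistakes before libvirt consumes the file. The sketch below is not part of the CSO tooling; it is an illustrative helper that assumes Python 3 is available on the host and uses the virbr1.xml path from the procedure above:

```python
# Sanity-check a libvirt network definition (for example, the virbr1.xml
# file created above) before defining the network. Illustrative only;
# not part of the CSO installer.
import xml.etree.ElementTree as ET

def check_network_xml(path):
    """Return (bridge name, IP address, netmask) from a libvirt network file."""
    root = ET.parse(path).getroot()
    assert root.tag == "network", "root element must be <network>"
    bridge = root.find("bridge").get("name")   # expected: 'virbr1'
    ip = root.find("ip")
    return bridge, ip.get("address"), ip.get("netmask")

# Example: check_network_xml("/var/lib/libvirt/network/virbr1.xml")
```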
Customizing the Configuration File for the Provisioning Tool
The provisioning tool uses a configuration file, which you must customize for your network. The configuration file is in YAML format.
To customize the configuration file:
- Log in as root to the central CSO node or server.
- Access the confs directory that contains the sample configuration files. For example, if the name of the installer directory is csoVersion:
root@host:~/# cd csoVersion/confs
- Access the directory for the environment that you want
to configure.
Table 1 shows the directories that contain the sample configuration files.
Table 1: Location of Configuration Files for Provisioning VMs

Deployment            Directory for Sample Configuration File
Small deployment      confs/cso4.1.0/production/nonha/provisionvm/provision_vm_collocated_example.conf
Medium deployment     confs/cso4.1.0/production/ha/provisionvm/provision_vm_collocated_example.conf
Large deployment      confs/cso4.1.0/production/ha/provisionvm/provision_vm_example.conf
- Make a copy of the sample configuration file in the confs directory and name it provision_vm.conf. For example:
root@host:~/csoVersion# cp confs/cso4.1.0/production/nonha/provisionvm/provision_vm_collocated_example.conf provision_vm.conf
- Open the provision_vm.conf file with a text editor.
- In the [TARGETS] section, specify the following values for the network on which CSO resides.
installer_ip—IP address of the management interface of the host on which you deployed the installer.
ntp_servers—Comma-separated list of fully qualified domain names (FQDNs) of Network Time Protocol (NTP) servers. For networks within firewalls, specify NTP servers specific to your network.
physical—Comma-separated list of hostnames of the CSO nodes or servers.
virtual—Comma-separated list of names of the virtual machines (VMs) on the CSO servers.
- Specify the following configuration values for each CSO
node or server that you specified in Step 6.
[hostname]—Hostname of the CSO node or server.
management_address—IP address of the Ethernet management (primary) interface in Classless Interdomain Routing (CIDR) notation.
management_interface—Name of the Ethernet management interface, virbr0.
gateway—IP address of the gateway for the host.
dns_search—Domain for DNS operations.
dns_servers—Comma-separated list of DNS name servers, including DNS servers specific to your network.
hostname—Hostname of the node.
username—Username for logging in to the node.
password—Password for logging in to the node.
data_interface—Name of the data interface. Leave blank for a centralized deployment. Specify the name of the data interface, such as virbr1, that you configured for a distributed deployment.
- Specify configuration values for each VM that you specified
in Step 6.
[VM name]—Name of the VM.
management_address—IP address of the Ethernet management interface in CIDR notation.
hostname—Fully qualified domain name (FQDN) of the VM.
username—Login name of user who can manage all VMs.
password—Password for user who can manage all VMs.
local_user—Login name of user who can manage this VM.
local_password—Password for user who can manage this VM.
guest_os—Name of the operating system.
host_server—Hostname of the CSO node or server.
memory—Required amount of RAM in MB (for example, 49152 for 48 GB).
vCPU—Required number of virtual central processing units (vCPUs).
enable_data_interface—True enables the VM to transmit data and false prevents the VM from transmitting data. The default is false.
- For the Junos Space VM, specify configuration values for
each VM that you specified in Step 6.
[VM name]—Name of the VM.
management_address—IP address of the Ethernet management interface in CIDR notation.
web_address—Virtual IP (VIP) address of the primary Junos Space Virtual Appliance. (This setting is required only for the VM on which the primary Junos Space Virtual Appliance resides.)
gateway—IP address of the gateway for the host. If you do not specify a value, the value defaults to the gateway defined for the CSO node or server that hosts the VM.
nameserver_address—IP address of the DNS nameserver.
hostname—FQDN of the VM.
username—Username for logging in to Junos Space.
password—Default password for logging in to Junos Space.
newpassword—Password that you provide when you configure the Junos Space appliance.
guest_os—Name of the operating system.
host_server—Hostname of the CSO node or server.
memory—Required amount of RAM in MB.
vCPU—Required number of virtual central processing units (vCPUs).
vm_type—Type of the preinstalled VM: baseInfra or baseMS.
volumes—Data partitions of CSO (for example, /mnt/data:400G).
base_disk_size—Size of the OS partition of CSO (for example, 100G).
- Save the file.
- Download the ESXi-4.1.0.tar.gz file to the /root/Contrail_Service_Orchestration_4.1.0/artifacts folder on the Ubuntu VM.
or
Download the KVM-4.1.0.tar.gz file to the /root/Contrail_Service_Orchestration_4.1.0/artifacts folder on the Ubuntu VM.
- Run the following command to start the virtual machines:
For KVM:
root@host:~/# ./provision_vm.sh
For ESXi:
root@host:~/# ./provision_vm_ESXI.sh
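Because a typo in provision_vm.conf surfaces only partway through provisioning, a quick pre-flight check can save a reprovisioning cycle. The following sketch is not part of the CSO installer; it is an illustrative helper, assuming Python 3, that verifies every VM named in the [TARGETS] server list has its own section with the basic per-VM keys shown in the steps above:

```python
# Pre-flight check for provision_vm.conf: every name in the [TARGETS]
# "server" list must have its own section with the basic per-VM keys.
# Illustrative helper only; not part of the CSO installer.
import configparser

REQUIRED_VM_KEYS = {"management_address", "hostname", "host_server", "memory", "vcpu"}

def validate(path):
    """Return a list of problem descriptions; an empty list means the file looks sane."""
    cfg = configparser.ConfigParser()
    cfg.read(path)
    problems = []
    vms = [v.strip() for v in cfg["TARGETS"]["server"].split(",")]
    for vm in vms:
        if vm not in cfg:
            problems.append(f"{vm}: section missing")
            continue
        missing = REQUIRED_VM_KEYS - set(cfg[vm])
        if missing:
            problems.append(f"{vm}: missing keys {sorted(missing)}")
    return problems
```

Run against a copy of the file before invoking provision_vm.sh; any output indicates a section to fix first.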
The following sections show examples of customized configuration files for a small, a medium, and a large deployment.
Sample Configuration File for Provisioning VMs in a Small Deployment
# This config file is used to provision KVM-based virtual machines using libvirt manager.

[TARGETS]

# Mention primary host (installer host) management_ip
installer_ip =
ntp_servers = ntp.juniper.net

# The physical server where the Virtual Machines should be provisioned
# There can be one or more physical servers on
# which virtual machines can be provisioned
physical = cso-host

# The list of virtual servers to be provisioned. Don't change the below server names
server = csp-central-infravm, csp-central-msvm, csp-central-k8mastervm, csp-installer-vm, csp-contrailanalytics-1, csp-vrr-vm, csp-regional-sblb

# Physical Server Details
[cso-host]
management_address = 192.168.1.2/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-host
username = root
password = passw0rd
data_interface =

# VM Details
[csp-central-infravm]
management_address = 192.168.1.4/24
hostname = centralinfravm.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host
memory = 49152
vcpu = 8
enable_data_interface = false
vm_type = baseInfra
volumes = swap:48G,/mnt/data:400G,/data2:1G
base_disk_size = 100G

[csp-central-msvm]
management_address = 192.168.1.5/24
hostname = centralmsvm.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host
memory = 32768
vcpu = 6
enable_data_interface = false
vm_type = baseMS
volumes = swap:32G,/mnt/data:400G,/data2:1G
base_disk_size = 100G

[csp-central-k8mastervm]
management_address = 192.168.1.14/24
hostname = centralk8mastervm.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host
memory = 4096
vcpu = 2
enable_data_interface = false
vm_type =
volumes = swap:4G,/mnt/data:400G,/data2:1G
base_disk_size = 100G

[csp-installer-vm]
management_address = 192.168.1.10/24
hostname = installervm.example.net
username = root
password = passw0rd
local_user = installervm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host
memory = 8192
vcpu = 4
enable_data_interface = false
vm_type =
volumes = swap:24G,/mnt/data:400G,/data2:1G
base_disk_size = 100G

[csp-contrailanalytics-1]
management_address = 192.168.1.11/24
hostname = canvm.example.net
username = root
password = passw0rd
local_user = installervm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host
memory = 49152
vcpu = 8
enable_data_interface = false
vm_type =
volumes = swap:48G,/data1:1G,/data2:1G
base_disk_size = 500G

[csp-regional-sblb]
management_address = 192.168.1.12/24
hostname = regional-sblb.example.net
username = root
password = passw0rd
local_user = sblb
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host
memory = 4096
vcpu = 2
enable_data_interface = true
vm_type =
volumes = swap:4G,/mnt/data:400G,/data2:1G
base_disk_size = 500G

[csp-vrr-vm]
management_address = 192.168.1.13/24
hostname = vrr.example.net
gateway = 192.168.1.1
newpassword = passw0rd
guest_os = vrr
host_server = cso-host
memory = 8192
vcpu = 4

[csp-space-vm]
management_address = 192.168.1.14/24
web_address = 192.168.1.15/24
gateway = 192.168.1.1
nameserver_address = 192.168.1.254
hostname = spacevm.example.net
username = admin
password = abc123
newpassword = jnpr123!
guest_os = space
host_server = cso-host
memory = 16384
vcpu = 4
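When placing VMs on physical hosts, the per-host totals implied by a file like the sample above determine whether each server is large enough (the memory values are in MB, so 49152 is 48 GB). The following sketch is an illustrative helper, not part of the CSO tooling, that sums the requested memory and vCPUs per host_server:

```python
# Sum requested memory (MB) and vCPUs per physical host from a
# provision_vm.conf-style file. Illustrative helper; not CSO tooling.
import configparser
from collections import defaultdict

def totals_per_host(path):
    """Map each host_server to the total memory (MB) and vCPUs requested on it."""
    cfg = configparser.ConfigParser()
    cfg.read(path)
    totals = defaultdict(lambda: {"memory_mb": 0, "vcpu": 0})
    for section in cfg.sections():
        s = cfg[section]
        # Only VM sections carry both host_server and memory keys.
        if "host_server" in s and "memory" in s:
            host = s["host_server"]
            totals[host]["memory_mb"] += int(s["memory"])
            totals[host]["vcpu"] += int(s.get("vcpu", 0))
    return dict(totals)
```

Comparing the result against the physical server's installed RAM and cores shows at a glance whether a layout oversubscribes a host.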
Sample Configuration File for Provisioning VMs in a Medium Deployment
# This config file is used to provision KVM-based virtual machines using libvirt manager.

[TARGETS]

# Mention primary host (installer host) management_ip
installer_ip =
ntp_servers = ntp.juniper.net

# The physical server where the Virtual Machines should be provisioned
# There can be one or more physical servers on
# which virtual machines can be provisioned
physical = cso-host1, cso-host2, cso-host3

# The list of servers to be provisioned and mention the contrail analytics servers also in "server" list.
server = csp-central-infravm1, csp-central-infravm2, csp-central-infravm3, csp-central-lbvm1, csp-central-lbvm2, csp-central-lbvm3, csp-installer-vm, csp-contrailanalytics-1, csp-contrailanalytics-2, csp-contrailanalytics-3, csp-central-elkvm1, csp-central-elkvm2, csp-central-elkvm3, csp-central-msvm1, csp-central-msvm2, csp-central-msvm3, csp-regional-sblb1, csp-regional-sblb2, csp-vrr-vm1, csp-vrr-vm2

# Physical Server Details
[cso-host1]
management_address = 192.168.1.2/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-host1
username = root
password = passw0rd
data_interface =

[cso-host2]
management_address = 192.168.1.3/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-host2
username = root
password = passw0rd
data_interface =

[cso-host3]
management_address = 192.168.1.4/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-host3
username = root
password = passw0rd
data_interface =

# VM Details
[csp-central-infravm1]
management_address = 192.168.1.6/24
hostname = centralinfravm1.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host1
memory = 49152
vcpu = 12
enable_data_interface = false
vm_type = baseInfra
volumes = swap:48G,/mnt/data:400G,/data2:1G
base_disk_size = 100G

[csp-central-infravm2]
management_address = 192.168.1.7/24
hostname = centralinfravm2.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host2
memory = 49152
vcpu = 12
enable_data_interface = false
vm_type = baseInfra
volumes = swap:48G,/mnt/data:400G,/data2:1G
base_disk_size = 100G

[csp-central-infravm3]
management_address = 192.168.1.8/24
hostname = centralinfravm3.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host3
memory = 49152
vcpu = 12
enable_data_interface = false
vm_type = baseInfra
volumes = swap:48G,/mnt/data:400G,/data2:1G
base_disk_size = 100G

[csp-installer-vm]
management_address = 192.168.1.9/24
hostname = installervm.example.net
username = root
password = passw0rd
local_user = installervm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host1
memory = 32768
vcpu = 4
enable_data_interface = false
vm_type =
volumes = swap:24G,/mnt/data:400G,/data2:1G
base_disk_size = 100G

[csp-central-lbvm1]
management_address = 192.168.1.10/24
hostname = centrallbvm1.example.net
username = root
password = passw0rd
local_user = lbvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host1
memory = 6144
vcpu = 2
enable_data_interface = false
vm_type =
volumes = swap:6G,/mnt/data:400G,/data2:1G
base_disk_size = 100G

[csp-central-lbvm2]
management_address = 192.168.1.11/24
hostname = centrallbvm2.example.net
username = root
password = passw0rd
local_user = lbvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host2
memory = 6144
vcpu = 2
enable_data_interface = false
vm_type =
volumes = swap:6G,/mnt/data:400G,/data2:1G
base_disk_size = 100G

# Lbvm3 is running only k8master
[csp-central-lbvm3]
management_address = 192.168.1.12/24
hostname = centrallbvm3.example.net
username = root
password = passw0rd
local_user = lbvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host3
memory = 4096
vcpu = 2
enable_data_interface = false
vm_type =
volumes = swap:4G,/mnt/data:400G,/data2:1G
base_disk_size = 100G

[csp-central-elkvm1]
management_address = 192.168.1.13/24
hostname = centralelkvm1.example.net
username = root
password = passw0rd
local_user = elkvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host1
memory = 8192
vcpu = 4
enable_data_interface = false
vm_type =
volumes = swap:8G,/mnt/data:400G,/data2:1G
base_disk_size = 100G

[csp-central-elkvm2]
management_address = 192.168.1.14/24
hostname = centralelkvm2.example.net
username = root
password = passw0rd
local_user = elkvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host2
memory = 8192
vcpu = 4
enable_data_interface = false
vm_type =
volumes = swap:8G,/mnt/data:400G,/data2:1G
base_disk_size = 100G

[csp-central-elkvm3]
management_address = 192.168.1.15/24
hostname = centralelkvm3.example.net
username = root
password = passw0rd
local_user = elkvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host3
memory = 8192
vcpu = 4
enable_data_interface = false
vm_type =
volumes = swap:8G,/mnt/data:400G,/data2:1G
base_disk_size = 100G

[csp-central-msvm1]
management_address = 192.168.1.16/24
hostname = centralmsvm1.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host1
memory = 38912
vcpu = 10
enable_data_interface = false
vm_type = baseMS
volumes = swap:38G,/mnt/data:400G,/data2:1G
base_disk_size = 100G

[csp-central-msvm2]
management_address = 192.168.1.17/24
hostname = centralmsvm2.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host2
memory = 38912
vcpu = 10
enable_data_interface = false
vm_type = baseMS
volumes = swap:38G,/mnt/data:400G,/data2:1G
base_disk_size = 100G

[csp-central-msvm3]
management_address = 192.168.1.18/24
hostname = centralmsvm3.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host3
memory = 38912
vcpu = 10
enable_data_interface = false
vm_type = baseMS
volumes = swap:38G,/mnt/data:400G,/data2:1G
base_disk_size = 100G

[csp-regional-sblb1]
management_address = 192.168.1.19/24
hostname = sblb1.example.net
username = root
password = passw0rd
local_user = sblb
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host2
memory = 4096
vcpu = 2
enable_data_interface = true
vm_type =
volumes = swap:4G,/mnt/data:400G,/data2:1G
base_disk_size = 100G

[csp-regional-sblb2]
management_address = 192.168.1.20/24
hostname = sblb2.example.net
username = root
password = passw0rd
local_user = sblb
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host3
memory = 4096
vcpu = 2
enable_data_interface = true
vm_type =
volumes = swap:4G,/data1:1G,/data2:1G
base_disk_size = 500G

[csp-contrailanalytics-1]
management_address = 192.168.1.21/24
hostname = can1.example.net
username = root
password = passw0rd
local_user = canvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host1
memory = 65536
vcpu = 14
enable_data_interface = false
vm_type =
volumes = swap:64G,/data1:1G,/data2:1G
base_disk_size = 500G

[csp-contrailanalytics-2]
management_address = 192.168.1.22/24
hostname = can2.example.net
username = root
password = passw0rd
local_user = canvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host2
memory = 65536
vcpu = 14
enable_data_interface = false
vm_type =
volumes = swap:64G,/data1:1G,/data2:1G
base_disk_size = 500G

[csp-contrailanalytics-3]
management_address = 192.168.1.23/24
hostname = can3.example.net
username = root
password = passw0rd
local_user = canvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host3
memory = 65536
vcpu = 14
enable_data_interface = false
vm_type =
volumes = swap:64G,/data1:1G,/data2:1G
base_disk_size = 500G

[csp-vrr-vm1]
management_address = 192.168.1.24/24
hostname = vrr1.example.net
gateway = 192.168.1.1
newpassword = passw0rd
guest_os = vrr
host_server = cso-host2
memory = 16384
vcpu = 4

[csp-vrr-vm2]
management_address = 192.168.1.25/24
hostname = vrr2.example.net
gateway = 192.168.1.1
newpassword = passw0rd
guest_os = vrr
host_server = cso-host3
memory = 16384
vcpu = 4

[csp-space-vm]
management_address = 192.168.1.28/24
web_address = 192.168.1.29/24
gateway = 192.168.1.1
nameserver_address = 192.168.1.254
hostname = spacevm.example.net
username = admin
password = abc123
newpassword = jnpr123!
guest_os = space
host_server = cso-host2
memory = 32768
vcpu = 4
Sample Configuration File for Provisioning VMs in a Large Deployment
# This config file is used to provision KVM-based virtual machines using libvirt manager.

[TARGETS]

# Mention primary host (installer host) management_ip
installer_ip =
ntp_servers = ntp.juniper.net

# The physical server where the Virtual Machines should be provisioned
# There can be one or more physical servers on
# which virtual machines can be provisioned
physical = cso-central-host1, cso-central-host2, cso-central-host3, cso-central-host4, cso-regional-host1, cso-regional-host2, cso-regional-host3

# The list of servers to be provisioned and mention the contrail analytics servers also in "server" list.
server = csp-central-infravm1, csp-central-infravm2, csp-central-infravm3, csp-regional-infravm1, csp-regional-infravm2, csp-regional-infravm3, csp-central-lbvm1, csp-central-lbvm2, csp-central-lbvm3, csp-regional-lbvm1, csp-regional-lbvm2, csp-regional-lbvm3, csp-installer-vm, csp-contrailanalytics-1, csp-contrailanalytics-2, csp-contrailanalytics-3, csp-central-elkvm1, csp-central-elkvm2, csp-central-elkvm3, csp-regional-elkvm1, csp-regional-elkvm2, csp-regional-elkvm3, csp-central-msvm1, csp-central-msvm2, csp-central-msvm3, csp-regional-msvm1, csp-regional-msvm2, csp-regional-msvm3, csp-regional-sblb1, csp-regional-sblb2, csp-vrr-vm1, csp-vrr-vm2, csp-vrr-vm3, csp-vrr-vm4, csp-vrr-vm5, csp-vrr-vm6

# Physical Server Details
[cso-central-host1]
management_address = 192.168.1.2/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-central-host1
username = root
password = passw0rd
data_interface =

[cso-central-host2]
management_address = 192.168.1.3/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-central-host2
username = root
password = passw0rd
data_interface =

[cso-central-host3]
management_address = 192.168.1.4/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-central-host3
username = root
password = passw0rd
data_interface =

[cso-central-host4]
management_address = 192.168.1.5/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-central-host4
username = root
password = passw0rd
data_interface =

[cso-regional-host1]
management_address = 192.168.1.6/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-regional-host1
username = root
password = passw0rd
data_interface =

[cso-regional-host2]
management_address = 192.168.1.7/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-regional-host2
username = root
password = passw0rd
data_interface =

[cso-regional-host3]
management_address = 192.168.1.8/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-regional-host3
username = root
password = passw0rd
data_interface =

# VM Details
[csp-central-infravm1]
management_address = 192.168.1.9/24
hostname = centralinfravm1.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host2
memory = 49152
vcpu = 12
enable_data_interface = false
vm_type = baseInfra
volumes = swap:48G,/mnt/data:400G,/data2:1G
base_disk_size = 100G

[csp-central-infravm2]
management_address = 192.168.1.10/24
hostname = centralinfravm2.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host4
memory = 49152
vcpu = 12
enable_data_interface = false
vm_type = baseInfra
volumes = swap:48G,/mnt/data:400G,/data2:1G
base_disk_size = 100G

[csp-central-infravm3]
management_address = 192.168.1.11/24
hostname = centralinfravm3.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host3
memory = 49152
vcpu = 12
enable_data_interface = false
vm_type = baseInfra
volumes = swap:48G,/mnt/data:400G,/data2:1G
base_disk_size = 100G

[csp-regional-infravm1]
management_address = 192.168.1.12/24
hostname = regionalinfravm1.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host1
memory = 49152
vcpu = 12
enable_data_interface = false
vm_type = baseInfra
volumes = swap:48G,/mnt/data:400G,/data2:1G
base_disk_size = 100G

[csp-regional-infravm2]
management_address = 192.168.1.13/24
hostname = regionalinfravm2.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host2
memory = 49152
vcpu = 12
enable_data_interface = false
vm_type = baseInfra
volumes = swap:48G,/mnt/data:400G,/data2:1G
base_disk_size = 100G

[csp-regional-infravm3]
management_address = 192.168.1.14/24
hostname = regionalinfravm3.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host3
memory = 49152
vcpu = 12
enable_data_interface = false
vm_type = baseInfra
volumes = swap:48G,/mnt/data:400G,/data2:1G
base_disk_size = 100G

[csp-space-vm]
management_address = 192.168.1.15/24
web_address = 192.168.1.15/24
gateway = 192.168.1.1
nameserver_address = 192.168.1.254
hostname = spacevm.example.net
username = admin
password = abc123
newpassword = jnpr123!
guest_os = space
host_server = cso-central-host2
memory = 32768
vcpu = 4

[csp-installer-vm]
management_address = 192.168.1.16/24
hostname = installervm.example.net
username = root
password = passw0rd
local_user = installervm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host1
memory = 24576
vcpu = 4
enable_data_interface = false
vm_type =
volumes = swap:32G,/mnt/data:400G,/data2:1G
base_disk_size = 100G

[csp-contrailanalytics-1]
management_address = 192.168.1.17/24
hostname = can1.example.net
username = root
password = passw0rd
local_user = canvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host1
memory = 65536
vcpu = 24
enable_data_interface = false
vm_type =
volumes = swap:64G,/data1:1G,/data2:1G
base_disk_size = 500G

[csp-contrailanalytics-2]
management_address = 192.168.1.18/24
hostname = can2.example.net
username = root
password = passw0rd
local_user = canvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host2
memory = 65536
vcpu = 24
enable_data_interface = false
vm_type =
volumes = swap:64G,/data1:1G,/data2:1G
base_disk_size = 500G

[csp-contrailanalytics-3]
management_address = 192.168.1.19/24
hostname = can3.example.net
username = root
password = passw0rd
local_user = canvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host4
memory = 65536
vcpu = 24
enable_data_interface = false
vm_type =
volumes = swap:64G,/data1:1G,/data2:1G
base_disk_size = 500G

[csp-central-lbvm1]
management_address = 192.168.1.20/24
hostname = centrallbvm1.example.net
username = root
password = passw0rd
local_user = lbvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host1
memory = 6144
vcpu = 2
enable_data_interface = false
vm_type =
volumes = swap:6G,/mnt/data:400G,/data2:1G
base_disk_size = 100G

[csp-central-lbvm2]
management_address = 192.168.1.21/24
hostname = centrallbvm2.example.net
username = root
password = passw0rd
local_user = lbvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host2
memory = 6144
vcpu = 2
enable_data_interface = false
vm_type =
volumes = swap:6G,/mnt/data:400G,/data2:1G
base_disk_size = 100G

[csp-central-lbvm3]
management_address = 192.168.1.22/24
hostname = centrallbvm3.example.net
username = root
password = passw0rd
local_user = lbvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host3
memory = 4096
vcpu = 2
enable_data_interface = false
vm_type =
volumes = swap:4G,/mnt/data:400G,/data2:1G
base_disk_size = 100G

[csp-regional-lbvm1]
management_address = 192.168.1.23/24
hostname = regionallbvm1.example.net
username = root
password = passw0rd
local_user = lbvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host1
memory = 6144
vcpu = 2
enable_data_interface = false
vm_type =
volumes = swap:6G,/mnt/data:400G,/data2:1G
base_disk_size = 100G

[csp-regional-lbvm2]
management_address = 192.168.1.24/24
hostname = regionallbvm2.example.net
username = root
password = passw0rd
local_user = lbvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host2
memory = 6144
vcpu = 2
enable_data_interface = false
vm_type =
volumes = swap:6G,/mnt/data:400G,/data2:1G
base_disk_size = 100G

[csp-regional-lbvm3]
management_address = 192.168.1.25/24
hostname = regionallbvm3.example.net
username = root
password = passw0rd
local_user = lbvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host3
memory = 4096
vcpu = 2
enable_data_interface = false
vm_type =
volumes = swap:4G,/mnt/data:400G,/data2:1G
base_disk_size = 100G

[csp-central-elkvm1]
management_address = 192.168.1.26/24
hostname = centralelkvm1.example.net
username = root
password = passw0rd
local_user = elkvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host1
memory = 8192
vcpu = 2
enable_data_interface = false
vm_type =
volumes = swap:8G,/mnt/data:400G,/data2:1G
base_disk_size = 100G

[csp-central-elkvm2]
management_address = 192.168.1.27/24
hostname = centralelkvm2.example.net
username = root
password = passw0rd
local_user = elkvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host2
memory = 8192
vcpu = 2
enable_data_interface = false
vm_type =
volumes = swap:8G,/mnt/data:400G,/data2:1G
base_disk_size = 100G

[csp-central-elkvm3]
management_address = 192.168.1.28/24
hostname = centralelkvm3.example.net
username = root
password = passw0rd
local_user = elkvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host3
memory = 8192
vcpu = 2
enable_data_interface = false
vm_type =
volumes = swap:8G,/mnt/data:400G,/data2:1G
base_disk_size = 100G

[csp-regional-elkvm1]
management_address = 192.168.1.29/24
hostname = regionalelkvm1.example.net
username = root
password = passw0rd
local_user = elkvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host1
memory = 8192
vcpu = 2
enable_data_interface = false
vm_type =
volumes = swap:8G,/mnt/data:400G,/data2:1G
base_disk_size = 100G

[csp-regional-elkvm2]
management_address = 192.168.1.30/24
hostname = regionalelkvm2.example.net
username = root
password = passw0rd
local_user = elkvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host2
memory = 8192
vcpu = 2
enable_data_interface = false
vm_type =
volumes = swap:8G,/mnt/data:400G,/data2:1G
base_disk_size = 100G

[csp-regional-elkvm3]
management_address = 192.168.1.31/24
hostname = regionalelkvm3.example.net
username = root
password = passw0rd
local_user = elkvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host3
memory = 8192
vcpu = 2
enable_data_interface = false
vm_type =
volumes = swap:8G,/mnt/data:400G,/data2:1G
base_disk_size = 100G

[csp-central-msvm1]
management_address = 192.168.1.32/24
hostname = centralmsvm1.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host1
memory = 38912
vcpu = 10
enable_data_interface = false vm_type = baseMS volumes = swap:38G,/mnt/data:400G,/data2:1G base_disk_size = 100G [csp-central-msvm2] management_address = 192.168.1.33/24 hostname = centralmsvm2.example.net username = root password = passw0rd local_user = msvm local_password = passw0rd guest_os = ubuntu host_server = cso-central-host4 memory = 38912 vcpu = 10 enable_data_interface = false vm_type = baseMS volumes = swap:38G,/mnt/data:400G,/data2:1G base_disk_size = 100G [csp-central-msvm3] management_address = 192.168.1.34/24 hostname = centralmsvm3.example.net username = root password = passw0rd local_user = msvm local_password = passw0rd guest_os = ubuntu host_server = cso-central-host3 memory = 38912 vcpu = 10 enable_data_interface = false vm_type = baseMS volumes = swap:38G,/mnt/data:400G,/data2:1G base_disk_size = 100G [csp-regional-msvm1] management_address = 192.168.1.35/24 hostname = regionalmsvm1.example.net username = root password = passw0rd local_user = msvm local_password = passw0rd guest_os = ubuntu host_server = cso-regional-host1 memory = 38912 vcpu = 10 enable_data_interface = false vm_type = baseMS volumes = swap:38G,/mnt/data:400G,/data2:1G base_disk_size = 100G [csp-regional-msvm2] management_address = 192.168.1.36/24 hostname = regionalmsvm2.example.net username = root password = passw0rd local_user = msvm local_password = passw0rd guest_os = ubuntu host_server = cso-regional-host2 memory = 38912 vcpu = 10 enable_data_interface = false vm_type = baseMS volumes = swap:38G,/mnt/data:400G,/data2:1G base_disk_size = 100G [csp-regional-msvm3] management_address = 192.168.1.37/24 hostname = regionalmsvm3.example.net username = root password = passw0rd local_user = msvm local_password = passw0rd guest_os = ubuntu host_server = cso-regional-host3 memory = 38912 vcpu = 10 enable_data_interface = false vm_type = baseMS volumes = swap:38G,/mnt/data:400G,/data2:1G base_disk_size = 100G [csp-regional-sblb1] management_address = 192.168.1.38/24 hostname = 
regional-sblb1.example.net username = root password = passw0rd local_user = sblb local_password = passw0rd guest_os = ubuntu host_server = cso-regional-host1 memory = 4096 vcpu = 2 enable_data_interface = true vm_type = volumes = swap:4G,/mnt/data:400G,/data2:1G base_disk_size = 100G [csp-regional-sblb2] management_address = 192.168.1.39/24 hostname = regional-sblb2.example.net username = root password = passw0rd local_user = sblb local_password = passw0rd guest_os = ubuntu host_server = cso-regional-host2 memory = 4096 vcpu = 2 enable_data_interface = true vm_type = volumes = swap:4G,/mnt/data:400G,/data2:1G base_disk_size = 100G [csp-vrr-vm1] management_address = 192.168.1.40/24 hostname = vrr1.example.net gateway = 192.168.1.1 newpassword = passw0rd guest_os = vrr host_server = cso-central-host1 memory = 16384 vcpu = 4 [csp-vrr-vm2] management_address = 192.168.1.41/24 hostname = vrr2.example.net gateway = 192.168.1.1 newpassword = passw0rd guest_os = vrr host_server = cso-central-host2 memory = 16384 vcpu = 4 [csp-vrr-vm3] management_address = 192.168.1.42/24 hostname = vrr3.example.net gateway = 192.168.1.1 newpassword = passw0rd guest_os = vrr host_server = cso-central-host3 memory = 16384 vcpu = 4 [csp-vrr-vm4] management_address = 192.168.1.43/24 hostname = vrr4.example.net gateway = 192.168.1.1 newpassword = passw0rd guest_os = vrr host_server = cso-regional-host1 memory = 16384 vcpu = 4 [csp-vrr-vm5] management_address = 192.168.1.44/24 hostname = vrr4.example.net gateway = 192.168.1.1 newpassword = passw0rd guest_os = vrr host_server = cso-regional-host2 memory = 16384 vcpu = 4 [csp-vrr-vm6] management_address = 192.168.1.45/24 hostname = vrr4.example.net gateway = 192.168.1.1 newpassword = passw0rd guest_os = vrr host_server = cso-regional-host3 memory = 16384 vcpu = 4
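Because the configuration file is standard INI syntax, you can sanity-check it programmatically before running the provisioning tool. The following Python sketch (a hypothetical helper, not part of the CSO tooling; the inline sample is illustrative) sums the memory and vCPUs requested on each host server so you can compare the totals against the capacities listed in Minimum Requirements for Servers and VMs:

```python
# Sketch: sum the memory and vCPUs requested on each host server in a
# provision_vm.conf-style file. Section and key names follow the sample
# configuration above; the inline SAMPLE content is illustrative.
from collections import defaultdict
from configparser import ConfigParser

SAMPLE = """
[csp-vrr-vm1]
host_server = cso-central-host1
memory = 16384
vcpu = 4

[csp-vrr-vm2]
host_server = cso-central-host1
memory = 16384
vcpu = 4
"""

def totals_per_host(conf_text):
    """Return {host_server: {"memory_mb": ..., "vcpu": ...}} for all VM sections."""
    cp = ConfigParser()
    cp.read_string(conf_text)
    usage = defaultdict(lambda: {"memory_mb": 0, "vcpu": 0})
    for section in cp.sections():
        if not cp.has_option(section, "host_server"):
            continue  # skip non-VM sections such as [TARGETS]
        host = cp.get(section, "host_server")
        usage[host]["memory_mb"] += cp.getint(section, "memory")
        usage[host]["vcpu"] += cp.getint(section, "vcpu")
    return dict(usage)

print(totals_per_host(SAMPLE))
```

Run it against your real provision_vm.conf (for example, `totals_per_host(open("provision_vm.conf").read())`) to catch an oversubscribed host before provisioning starts.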
Provisioning VMs with the Provisioning Tool for the KVM Hypervisor
If you use the KVM hypervisor for the CSO node or server, you can use the provisioning tool to create and configure the VMs for the CSO and Junos Space components.
The VMs provisioned by the tool have Ubuntu preinstalled; the Junos Space VM additionally has the Junos Space Network Management Platform software preinstalled.
To provision VMs with the provisioning tool:
- Log in as root to the central CSO node or server.
- Access the installer directory. For example, if the name of the installer directory is csoVersion:
root@host:~/# cd ~/csoVersion/
- Run the provisioning tool.
root@host:~/csoVersion/# ./provision_vm.sh
The provisioning begins.
- During installation, observe detailed messages about the provisioning of the VMs in the following log files:
provision_vm.log—Contains details about the provisioning process
provision_vm_console.log—Contains details about the VMs
provision_vm_error.log—Contains details about errors that occur during provisioning
For example:
root@host:~/csoVersion/# cd logs
root@host:~/csoVersion/logs/# tail -f LOGNAME
Provisioning VMware ESXi VMs Using the Provisioning Tool
If you use VMware ESXi (version 6.0) VMs on the CSO node or server, you can use the provisioning tool, provision_vm_ESXI.sh, to create and configure VMs for CSO.
You cannot provision a Virtual Route Reflector (VRR) VM by using the provisioning tool. You must provision the VRR VM manually.
Before you begin, ensure that the maximum supported file size for a datastore on the VMware ESXi host is greater than 512 MB. To view the maximum supported file size of a datastore, establish an SSH session with the ESXi host and run the vmkfstools -P datastorePath command.
To provision VMware ESXi VMs using the provisioning tool:
- Download the CSO Release 4.1.0 installer package from the CSO Download page to the local drive.
- Log in as root to the Ubuntu VM with Internet access and kernel version 4.4.0-31-generic.
The VM must have the following specifications:
8 GB RAM
2 vCPUs
- Copy the installer package from your local drive to the
VM.
root@host:~/# scp Contrail_Service_Orchestration_4.1.0.tar.gz root@VM:/root
- On the VM, extract the installer package.
For example, if the name of the installer package is Contrail_Service_Orchestration_4.1.0.tar.gz:
root@host:~/# tar -xvzf Contrail_Service_Orchestration_4.1.0.tar.gz
The contents of the installer package are extracted in a directory with the same name as the installer package.
- Navigate to the confs directory in the VM. For example:
root@host:~/# cd Contrail_Service_Orchestration_4.1.0/confs
root@host:~/Contrail_Service_Orchestration_4.1.0/confs#
- Make a copy of the sample ESXi configuration file that is available in the confs directory, and rename the copy provision_vm.conf. For example:
root@host:~/Contrail_Service_Orchestration_4.1.0# cp confs/cso4.1.0/production/nonha/provisionvm/provision_vm_collocated_ESXI.conf provision_vm.conf
- Open the provision_vm.conf file with a text editor.
- In the [TARGETS] section, specify the following values for the network on which CSO resides.
installer_ip—IP address of the management interface of the VM on which you are running the provisioning script.
ntp_servers—Comma-separated list of fully qualified domain names (FQDNs) of Network Time Protocol (NTP) servers. For networks within firewalls, specify NTP servers specific to your network.
You need not edit the following values:
physical—Comma-separated list of hostnames of the CSO nodes or servers.
virtual—Comma-separated list of names of the virtual machines (VMs) on the CSO servers.
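Because the [TARGETS] section is plain INI, you can inspect these values with standard tools before editing. A minimal Python sketch (the inline values are illustrative, not defaults):

```python
# Sketch: read the [TARGETS] section and split the comma-separated
# "physical" and "virtual" lists. The SAMPLE content is illustrative.
from configparser import ConfigParser

SAMPLE = """
[TARGETS]
installer_ip = 192.168.1.16
ntp_servers = ntp.example.net
physical = cso-central-host1, cso-central-host2
virtual = csp-installer-vm, csp-central-lbvm1
"""

cp = ConfigParser()
cp.read_string(SAMPLE)
physical = [h.strip() for h in cp.get("TARGETS", "physical").split(",")]
virtual = [v.strip() for v in cp.get("TARGETS", "virtual").split(",")]
print(physical, virtual)
```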
- Specify the following configuration values for each ESXi
host on the CSO node or server.
management_address—IP address of the Ethernet management (primary) interface in Classless Interdomain Routing (CIDR) notation of the VM network. For example, 192.0.2.0/24.
gateway—Gateway IP address of the VM network.
dns_search—Domain for DNS operations.
dns_servers—Comma-separated list of DNS name servers, including DNS servers specific to your network.
hostname—Hostname of the VMware ESXi host.
username—Username for logging in to the VMware ESXi host.
password—Password for logging in to the VMware ESXi host.
vmnetwork—Label for each virtual network adapter. This label is used to identify the physical network that is associated to a virtual network adapter.
The vmnetwork data for each VM is available in the Summary tab of a VM in the vSphere Client. Do not enclose the vmnetwork data in double quotation marks.
datastore—Datastore value to save all VM files.
The datastore data for each VM is available in the Summary tab of a VM in the vSphere Client. Do not enclose the datastore data in double quotation marks.
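Putting the fields above together, an ESXi host entry might look like the following. This is only a sketch: the section name and every value are illustrative placeholders, not defaults, and must be replaced with the details of your own ESXi host.

```ini
[esxi-host1]
management_address = 192.0.2.10/24
gateway = 192.0.2.1
dns_search = example.net
dns_servers = 192.0.2.53
hostname = esxi-host1.example.net
username = root
password = passw0rd
vmnetwork = VM Network
datastore = datastore1
```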
- Save the provision_vm.conf file.
- Run the provision_vm_ESXI.sh script to create the VMs.
root@host:~/Contrail_Service_Orchestration_4.1.0/# ./provision_vm_ESXI.sh
- Copy the provision_vm.conf file to the confs directory on the installer VM. For example:
root@host:~/Contrail_Service_Orchestration_4.1.0/# scp confs/provision_vm.conf root@installer_VM_IP:/root/Contrail_Service_Orchestration_4.1.0/confs
The provisioning tool brings up the VMware ESXi VMs with the configuration specified in the provision_vm.conf file.
Manually Provisioning VRR VMs on the Contrail Service Orchestration Node or Server
To manually provision the VRR VM:
- Download the VRR Release 15.1R6.7 software package (.ova format) for VMware from the Virtual Route Reflector page to a location accessible to the server.
- Launch the VRR by using the vSphere or vCenter Client for your ESXi server and log in to the server with your credentials.
- Set up an SSH session to the VRR VM.
- Execute the following commands:
root@host:~/# configure
root@host:~/# delete groups global system services ssh root-login deny-password
root@host:~/# set system root-authentication plain-text-password
New Password: <password>
Retype New Password: <password>
root@host:~/# set system services ssh
root@host:~/# set system services netconf ssh
root@host:~/# set routing-options rib inet.3 static route 0.0.0.0/0 discard
root@host:~/# commit
root@host:~/# exit
Verifying Connectivity of the VMs
From each VM, verify that you can ping the IP addresses and hostnames of all the other servers, nodes, and VMs in the CSO deployment.
If the VMs cannot communicate with all the other hosts in the deployment, the installation will fail.
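Part of this check can be automated. The following Python sketch (a hypothetical helper, not part of the CSO tooling) verifies that each hostname resolves from the local VM via DNS or /etc/hosts, which complements the ping test; pass it the hostnames from your provision_vm.conf:

```python
# Sketch: report which hostnames in the deployment fail to resolve from
# this VM. The host list you pass in comes from your own configuration.
import socket

def unresolvable(hostnames):
    """Return the subset of hostnames that cannot be resolved."""
    failed = []
    for name in hostnames:
        try:
            socket.gethostbyname(name)
        except socket.gaierror:
            failed.append(name)
    return failed

# Example with an illustrative host list; an empty result means all resolved.
print(unresolvable(["localhost"]))
```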