Provisioning VMs on Contrail Service Orchestration Nodes
VMs on a Contrail Service Orchestration node host the Contrail Service Orchestration and Junos Space components. You can:
- Use the provisioning tool to create and configure the
VMs if you use the KVM hypervisor on a Contrail Service Orchestration
node.
The tool also installs Ubuntu in the VMs.
- Create and configure the VMs manually if you use a supported hypervisor other than KVM on the Contrail Service Orchestration node.
- Manually configure VMs that you already created on a Contrail Service Orchestration node.
The VMs required on a Contrail Service Orchestration node depend on whether you configure Contrail Service Orchestration redundancy. In a non-redundant configuration, you configure one Contrail Service Orchestration node (called cso-host1 in this documentation) on server 1. In a redundant Contrail Service Orchestration configuration, you configure one Contrail Service Orchestration node on server 1 and one Contrail Service Orchestration node (called cso-host2 in this documentation) on server 2.
Table 1 shows complete details about the VMs required on the Contrail Service Orchestration node for a non-redundant Contrail Service Orchestration configuration.
Table 1: Details of VMs on the Contrail Service Orchestration Node for a Non-Redundant Configuration
Name of VM | Components That Installer Places in VM | Resources Required | Ports to Open |
---|---|---|---|
csp-ui-vm | | | Note: Open these ports for all VMs on the Contrail Service Orchestration node. |
csp-ms-vm | | | Open the ports listed for the csp-ui-vm. |
csp-elk-vm | | | Open the ports listed for the csp-ui-vm. |
csp-infra1-vm | | | Open the ports listed for the csp-ui-vm. |
csp-infra2-vm | | | Open the ports listed for the csp-ui-vm. |
csp-simcontroller-vm | | | Open the ports listed for the csp-ui-vm. |
csp-sim-vm | Service and Infrastructure Monitor. Caution: Make sure that MySQL software is not installed in the VM for Service and Infrastructure Monitor. When you install the Cloud CPE Centralized Deployment Model, the installer deploys and configures MySQL servers in this VM. If the VM already contains MySQL software, the installer may not set up the VM correctly. | | Open the ports listed for the csp-ui-vm. |
csp-space-vm | Junos Space Virtual Appliance and database | | Open the ports listed for the csp-ui-vm. |
Table 2 shows complete details about the VMs required on the Contrail Service Orchestration nodes on server 1 and server 2 for a redundant Contrail Service Orchestration configuration.
Table 2: Details of VMs on Contrail Service Orchestration Nodes for a Redundant Configuration
Components That Installer Places in VM | Distribution of VMs | Resources Required for Each VM | Ports to Open |
---|---|---|---|
 | 1 per Contrail Service Orchestration node | | Note: Open these ports for all VMs on the Contrail Service Orchestration node. |
 | 1 per Contrail Service Orchestration node | | Open the ports listed for the first VM in this table. |
 | | | Open the ports listed for the first VM in this table. |
 | | | Open the ports listed for the first VM in this table. |
Service and Infrastructure Monitor. Caution: Make sure that MySQL software is not installed in the VM for Service and Infrastructure Monitor. When you install the Cloud CPE Centralized Deployment Model, the installer deploys and configures MySQL servers in this VM. If the VM already contains MySQL software, the installer may not set up the VM correctly. | 1 on cso-host1 | | Open the ports listed for the first VM in this table. |
Load Balancer | csp-ha-vm on cso-host1 | | Open the ports listed for the first VM in this table. |
 | 1 per Contrail Service Orchestration node | | Open the ports listed for the first VM in this table. |
 | 1 per Contrail Service Orchestration node | | Open the ports listed for the first VM in this table. |
Junos Space Virtual Appliance | 1 per Contrail Service Orchestration node | | Open the ports listed for the first VM in this table. |
Junos Space database | 1 per Contrail Service Orchestration node | | Open the ports listed for the first VM in this table. |
The following sections describe the procedures for provisioning the VMs:
Before You Begin
Before you begin, you must:
- Configure the servers and nodes in Contrail Cloud Reference Architecture (CCRA).
- Install Contrail OpenStack.
- Download third-party software and deploy the installer.
Creating a Bridge Interface to Support VMs
If you use the KVM hypervisor, before you create VMs, you must create a bridge interface on the physical server that maps the primary network interface (Ethernet management interface) on each Contrail Service Orchestration node to a virtual interface.
To create the bridge interface:
- On the Contrail Service Orchestration node, log in as root.
- Update the index files of the software packages installed
on the server to reference the latest versions.
root@host:~/# apt-get update
- View the network interfaces configured on the server to
obtain the name of the primary interface on the server.
root@host:~/# ifconfig
- Install the libvirt software.
root@host:~/# apt-get install libvirt-bin
- View the list of network interfaces, which now includes
the virtual interface virbr0.
root@host:~/# ifconfig
- Modify the file /etc/network/interfaces to map the primary network interface to the virtual interface virbr0. For example, use the following configuration to map the primary interface eth0 to the virtual interface virbr0:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet manual
up ifconfig eth0 0.0.0.0 up

auto virbr0
iface virbr0 inet static
bridge_ports eth0
address 192.168.1.2
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255
gateway 192.168.1.1
dns-nameservers 8.8.8.8
dns-search example.net
- Modify the default virtual network by customizing the file default.xml:
  - Customize the IP address and subnet mask to match the values for the virbr0 interface in the file /etc/network/interfaces.
  - Turn off the Spanning Tree Protocol (STP) option.
  - Remove the NAT and DHCP configurations.
For example:
root@host:~/# virsh net-edit default
<network>
  <name>default</name>
  <uuid>0f04ffd0-a27c-4120-8873-854bbfb02074</uuid>
  <bridge name='virbr0' stp='off' delay='0'/>
  <ip address='192.168.1.2' netmask='255.255.255.0'>
  </ip>
</network>
- Reboot the node and log in as root again.
- Verify that the primary network interface is mapped to
the virbr0 interface.
root@host:~/# brctl show
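The same mapping can be confirmed through sysfs, which is useful for scripted checks. This is an illustrative sketch, not part of the provisioning tool; the bridge and port names are the examples used in this procedure, so substitute your own interface names:

```shell
# Check whether the primary interface is enslaved to the bridge.
# BRIDGE and PORT reuse the example names from this procedure.
BRIDGE=virbr0
PORT=eth0
if [ -d "/sys/class/net/$BRIDGE/brif" ]; then
    if [ -d "/sys/class/net/$BRIDGE/brif/$PORT" ]; then
        echo "$PORT is enslaved to bridge $BRIDGE"
    else
        echo "WARNING: $PORT is not a port of bridge $BRIDGE"
    fi
else
    echo "WARNING: bridge $BRIDGE not found on this host"
fi
```

If the bridge does not appear, recheck /etc/network/interfaces and reboot the node before continuing.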
Customizing the Configuration File for the Provisioning Tool
The provisioning tool uses a configuration file, which you must customize for your network. An example configuration file, named provision_vm_example.conf, is located in the /provision_vm subdirectory of the installer directory. The configuration file uses an INI-style format: bracketed section headers that contain key = value pairs.
To customize the configuration file:
- Log in as root to the host on which you deployed the installer.
- Access the directory that contains the example configuration file. For example, if the name of the installer directory is cspVersion:
root@host:~/# cd cspVersion/provision_vm
- Make a copy of the example configuration file and name it provision_vm.conf.
root@host:~/cspVersion/provision_vm# cp provision_vm_example.conf provision_vm.conf
- Open the file provision_vm.conf with a text editor.
- In the [TARGETS] section, specify the following values for the network on which the Cloud CPE Centralized Deployment Model resides.
- installer_ip—IP address of the management interface of the host on which you deployed the installer.
- ntp_servers—Comma-separated list of Network Time Protocol (NTP) servers. For networks within firewalls, specify NTP servers specific to your network.
- physical—Comma-separated list of hostnames of the Contrail Service Orchestration nodes.
- virtual—Comma-separated list of names of the virtual machines on which you install Contrail Service Orchestration components.
- Specify the following configuration values for each Contrail
Service Orchestration node that you specified in Step 5.
- [hostname]—Hostname of the Contrail Service Orchestration node
- management_address—IP address of the Ethernet management (primary) interface
- management_interface—Name of the Ethernet management interface, virbr0
- gateway—IP address of the gateway for the host
- dns_search—Domain for DNS operations
- dns_servers—Comma-separated list of DNS name servers, including DNS servers specific to your network
- hostname—Hostname of the node
- username—Username for logging in to the node
- password—Password for logging in to the node
- Except for the Junos Space Virtual Appliance and Junos
Space database VMs, specify configuration values for each VM that
you specified in Step 5.
- [VM name]—Name of the VM
- management_address—IP address of the Ethernet management interface
- hostname—Fully qualified domain name (FQDN) of the VM
- username—Login name of user who can manage all VMs
- password—Password for user who can manage all VMs
- local_user—Login name of user who can manage this VM
- local_password—Password for user who can manage this VM
- guest_os—Name of the operating system
- host_server—Hostname of the Contrail Service Orchestration node
- memory—Required amount of RAM in GB
- vCPU—Required number of virtual central processing units (vCPUs)
- For the Junos Space Virtual Appliance and Junos Space
database VMs, specify configuration values for each VM that you specified
in Step 5.
- [VM name]—Name of the VM.
- management_address—IP address of the Ethernet management interface.
- web_address—Virtual IP (VIP) address of the primary Junos Space Virtual Appliance. (Setting only required for the VM on which the primary Junos Space Virtual Appliance resides.)
- gateway—IP address of the gateway for the host. If you do not specify a value, the value defaults to the gateway defined for the Contrail Service Orchestration node that hosts the VM.
- nameserver_address—IP address of the DNS nameserver.
- hostname—FQDN of the VM.
- username—Username for logging in to Junos Space.
- password—Default password for logging in to Junos Space.
- newpassword—Password that you provide when you configure the Junos Space appliance.
- guest_os—Name of the operating system.
- host_server—Hostname of the Contrail Service Orchestration node.
- memory—Amount of RAM in GB required.
- vCPU—Number of virtual central processing units (vCPUs) required.
- spacedb—(Only for Junos Space database VMs) true.
- In the [MYSQL] section, specify the following configuration
settings:
- remote_user—Username for logging in to the Junos Space database
- remote_password—Password for logging in to the Junos Space database
- Save the file.
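Before running the provisioning tool, a quick consistency check can catch typos in the file. The following is a hypothetical sketch, not part of the tool: it verifies that every host named in the physical list of the [TARGETS] section has a matching section later in the file. The stub configuration is generated only so the example is self-contained; run the awk/grep pair against your real provision_vm.conf.

```shell
# Hypothetical sanity check: every host in "physical" needs a section.
# A stub provision_vm.conf is written here so the example runs as-is.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
[TARGETS]
installer_ip = 192.168.1.100
ntp_servers = ntp.juniper.net
physical = cso-host
virtual = csp-ms-vm

[cso-host]
management_address = 192.168.1.2/24
management_interface = virbr0
EOF

# Extract the comma-separated host list and confirm each section exists.
for host in $(awk -F'= *' '/^physical/ {gsub(/,/, " ", $2); print $2}' "$CONF"); do
    if grep -q "^\[$host\]" "$CONF"; then
        echo "OK: section found for $host"
    else
        echo "MISSING: no section for $host"
    fi
done
```

The same loop applied to the virtual list catches VM sections that were renamed in one place but not the other.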
The following examples show customized configuration files for non-redundant and redundant Contrail Service Orchestration installations.
Sample Configuration File for Provisioning VMs in a Non-Redundant Contrail Service Orchestration Installation
# This config file is used to provision KVM-based virtual machines using libvirt.
[TARGETS]
# Mention primary host (installer host) management_ip
installer_ip =
ntp_servers = ntp.juniper.net

# The physical server where the Virtual Machines should be provisioned
# There can be one or more physical servers on
# which virtual machines can be provisioned
physical = cso-host

# The list of virtual servers to be provisioned.
virtual = csp-ms-vm, csp-infra1-vm, csp-infra2-vm, csp-ui-vm, csp-sim-vm, csp-simcontroller-vm, csp-elk-vm, csp-space-vm

# Physical Server Details
[cso-host]
management_address = 192.168.1.2/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-host
username = root
password = passw0rd

[csp-ms-vm]
management_address = 192.168.1.3/24
hostname = ms-vm.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host
memory = 32768
vcpu = 8

[csp-infra1-vm]
management_address = 192.168.1.4/24
hostname = infra1-vm.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host
memory = 32768
vcpu = 4

[csp-infra2-vm]
management_address = 192.168.1.5/24
hostname = infra2-vm.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host
memory = 32768
vcpu = 4

[csp-ui-vm]
management_address = 192.168.1.6/24
hostname = ui-vm.example.net
username = root
password = passw0rd
local_user = uivm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host
memory = 16384
vcpu = 4

[csp-sim-vm]
management_address = 192.168.1.7/24
hostname = sim-vm.example.net
username = root
password = passw0rd
local_user = simvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host
memory = 16384
vcpu = 4

[csp-simcontroller-vm]
management_address = 192.168.1.8/24
hostname = simcontroller-vm.example.net
username = root
password = passw0rd
local_user = simcontrollervm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host
memory = 32768
vcpu = 4

[csp-elk-vm]
management_address = 192.168.1.9/24
hostname = elk-vm.example.net
username = root
password = passw0rd
local_user = elkvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host
memory = 24576
vcpu = 4

# Space cluster details. Cluster has two web nodes and two db nodes
[csp-space-vm]
management_address = 192.168.1.10/24
web_address = 192.168.1.11/24
gateway = 192.168.1.1
nameserver_address = 192.168.1.254
hostname = space-vm.example.net
username = admin
password = abc123
newpassword = jnpr123!
guest_os = space
host_server = cso-host
memory = 32768
vcpu = 4

# Desired mysql credentials for space
[MYSQL]
remote_user = cspadmin
remote_password = passw0rd
Sample Configuration File for Provisioning VMs in a Redundant Contrail Service Orchestration Installation
# This config file is used to provision KVM-based virtual machines using libvirt.
[TARGETS]
# Mention primary host (installer host) management_ip
installer_ip =
ntp_servers = ntp.juniper.net

# The physical server where the Virtual Machines should be provisioned
# There can be one or more physical servers on
# which virtual machines can be provisioned
physical = cso-host1, cso-host2

# The list of virtual servers to be provisioned.
virtual = csp-ui-vm1, csp-ui-vm2, csp-ms-vm1, csp-ms-vm2, csp-infra-vm1, csp-infra-vm2, csp-infra-vm3, csp-infra-vm4, csp-infra-vm5, csp-infra-vm6, csp-sim-vm1, csp-ha-vm, csp-elk-vm1, csp-elk-vm2, csp-space-vm1, csp-space-vm2, csp-mysql-vm1, csp-mysql-vm2, csp-simcontroller-vm1, csp-simcontroller-vm2

# Physical Server Details
[cso-host1]
management_address = 192.168.1.2/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-host1
username = root
password = passw0rd

[cso-host2]
management_address = 192.168.1.3/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-host2
username = root
password = passw0rd

# VM Details
[csp-ui-vm1]
management_address = 192.168.1.4/24
hostname = ui1-vm.example.net
username = root
password = passw0rd
local_user = uivm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host1
memory = 16384
vcpu = 4

[csp-ui-vm2]
management_address = 192.168.1.5/24
hostname = ui2-vm.example.net
username = root
password = passw0rd
local_user = uivm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host2
memory = 16384
vcpu = 4

[csp-ms-vm1]
management_address = 192.168.1.6/24
hostname = ms1-vm.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host1
memory = 32768
vcpu = 4

[csp-ms-vm2]
management_address = 192.168.1.7/24
hostname = ms2-vm.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host2
memory = 32768
vcpu = 4

[csp-infra-vm1]
management_address = 192.168.1.8/24
hostname = infra1-vm.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host1
memory = 32768
vcpu = 3

[csp-infra-vm2]
management_address = 192.168.1.9/24
hostname = infra2-vm.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host2
memory = 32768
vcpu = 3

[csp-infra-vm3]
management_address = 192.168.1.10/24
hostname = infra3-vm.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host1
memory = 32768
vcpu = 3

[csp-infra-vm4]
management_address = 192.168.1.11/24
hostname = infra4-vm.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host2
memory = 32768
vcpu = 3

[csp-infra-vm5]
management_address = 192.168.1.12/24
hostname = infra5-vm.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host1
memory = 32768
vcpu = 3

[csp-infra-vm6]
management_address = 192.168.1.13/24
hostname = infra6-vm.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host2
memory = 32768
vcpu = 3

[csp-sim-vm1]
management_address = 192.168.1.14/24
hostname = sim1-vm.example.net
username = root
password = passw0rd
local_user = simvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host1
memory = 16384
vcpu = 4

[csp-ha-vm]
management_address = 192.168.1.15/24
hostname = ha-vm.example.net
username = root
password = passw0rd
local_user = havm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host1
memory = 12288
vcpu = 4

[csp-elk-vm1]
management_address = 192.168.1.16/24
hostname = elk1-vm.example.net
username = root
password = passw0rd
local_user = elkvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host1
memory = 12288
vcpu = 3

[csp-elk-vm2]
management_address = 192.168.1.17/24
hostname = elk2-vm.example.net
username = root
password = passw0rd
local_user = elkvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host2
memory = 12288
vcpu = 3

[csp-simcontroller-vm1]
management_address = 192.168.1.18/24
hostname = simcontroller1-vm.example.net
username = root
password = passw0rd
local_user = simcontrollervm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host1
memory = 32768
vcpu = 4

[csp-simcontroller-vm2]
management_address = 192.168.1.19/24
hostname = simcontroller2-vm.example.net
username = root
password = passw0rd
local_user = simcontrollervm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host2
memory = 32768
vcpu = 4

# Space cluster details. Cluster has two web nodes and two db nodes
[csp-space-vm1]
management_address = 192.168.1.20/24
web_address = 192.168.1.21/24
gateway = 192.168.1.1
nameserver_address = 192.168.1.254
hostname = space-vm1.example.net
username = admin
password = abc123
newpassword = jnpr123!
guest_os = space
host_server = cso-host1
memory = 12288
vcpu = 4

[csp-space-vm2]
management_address = 192.168.1.22/24
gateway = 192.168.1.1
nameserver_address = 192.168.1.254
hostname = space-vm2.example.net
username = admin
password = abc123
newpassword = jnpr123!
guest_os = space
host_server = cso-host2
memory = 12288
vcpu = 4

[csp-mysql-vm1]
management_address = 192.168.1.23/24
gateway = 192.168.1.1
nameserver_address = 192.168.1.254
hostname = mysql-vm1.example.net
username = admin
password = abc123
newpassword = jnpr123!
guest_os = space
host_server = cso-host1
memory = 12288
vcpu = 4
spacedb = true

[csp-mysql-vm2]
management_address = 192.168.1.24/24
gateway = 192.168.1.1
nameserver_address = 192.168.1.254
hostname = mysql-vm2.example.net
username = admin
password = abc123
newpassword = jnpr123!
guest_os = space
host_server = cso-host2
memory = 12288
vcpu = 4
spacedb = true

# Desired mysql credentials for space
[MYSQL]
remote_user = cspadmin
remote_password = passw0rd
Provisioning VMs with the Provisioning Tool
If you use the KVM hypervisor on the server that supports the Contrail Service Orchestration node, you can use the provisioning tool to:
- Create and configure the VMs for the Contrail Service Orchestration and Junos Space components.
- Install the operating system in the VMs:
- Ubuntu in the Contrail Service Orchestration VMs
- Junos Space Network Management Platform software in the Junos Space VMs
If you use another supported hypervisor or already created VMs that you want to use, provision the VMs manually.
- Log in as root to the host on which you deployed the installer.
- Access the directory for the installer. For example, if the name of the installer directory is cspVersion:
root@host:~/# cd ~/cspVersion
- Run the provisioning tool.
root@host:~/cspVersion/# ./provision_vm.sh
The provisioning begins.
- During installation, observe detailed messages about the provisioning of the VMs in the log files:
  - provision_vm.log—Contains details about the provisioning process
  - provision_vm_console.log—Contains details about the VMs
  - provision_vm_error.log—Contains details about errors that occur during provisioning
For example:
root@host:~/cspVersion/# cd logs
root@host:~/cspVersion/logs/# tailf provision_vm.log
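After the run completes, the error log can be triaged the same way. The snippet below is illustrative only: the log file names come from this section, but the message format is an assumption, so a stub log is created to keep the example runnable; point the commands at the installer's logs directory on your host.

```shell
# Illustrative log triage (stub data; real logs live in the installer's
# logs directory). "tailf" is deprecated on modern Ubuntu releases;
# "tail -f" is the equivalent for following a live log.
LOGDIR=$(mktemp -d)
printf '%s\n' 'INFO  creating VM csp-ms-vm' \
              'ERROR unable to allocate disk for csp-elk-vm' \
              > "$LOGDIR/provision_vm_error.log"

# Show the last few messages, then count errors.
tail -n 5 "$LOGDIR/provision_vm_error.log"
grep -c '^ERROR' "$LOGDIR/provision_vm_error.log"
```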
Manually Provisioning VMs on the Contrail Service Orchestration Node
To manually provision VMs on each Contrail Service Orchestration node:
- On each Contrail Service Orchestration node, create VMs or reconfigure existing VMs:
- Configure hostnames and specify IP addresses for the Ethernet Management interfaces on each VM.
- Configure read, write, and execute permissions for the users of the VMs, so that the installer can access the VMs when you deploy the Cloud CPE Centralized Deployment Model.
- Configure DNS and Internet access for the VMs.
- If MySQL software is installed in the VMs for Service and Infrastructure Monitor, remove it. When you install the Cloud CPE Centralized Deployment Model, the installer deploys and configures MySQL servers in this VM. If the VM already contains MySQL software, the installer may not set up the VM correctly.
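A removal sketch for an Ubuntu guest, assuming the Debian package tooling is in use; the mysql-server* package glob is an assumption, so review the dpkg -l output on your VM before purging anything:

```shell
# Hypothetical cleanup for the Service and Infrastructure Monitor VM:
# purge any preinstalled MySQL server packages so the installer can
# deploy its own. The "mysql-server*" glob is an assumption; verify
# with "dpkg -l" on your system first.
if dpkg -l 'mysql-server*' 2>/dev/null | grep -q '^ii'; then
    echo "MySQL server packages found; removing them"
    apt-get remove --purge -y 'mysql-server*'
    apt-get autoremove -y
else
    echo "no MySQL server packages installed"
fi
```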
Verifying Connectivity of the VMs
From each VM, verify that you can ping the IP addresses and hostnames of all the other servers, nodes, and VMs in the Cloud CPE Centralized Deployment Model.
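This sweep can be scripted; here is a minimal sketch, assuming ICMP is permitted on the management network. The host list contains only 127.0.0.1 so the loop is demonstrable on any machine; replace it with the addresses and hostnames of your servers, nodes, and VMs.

```shell
# Minimal reachability sweep (the host list is a placeholder; substitute
# the hosts of your Cloud CPE Centralized Deployment Model).
HOSTS="127.0.0.1"
FAILED=0
for h in $HOSTS; do
    if ping -c 1 -W 2 "$h" > /dev/null 2>&1; then
        echo "$h reachable"
    else
        echo "$h UNREACHABLE"
        FAILED=1
    fi
done
[ "$FAILED" -eq 0 ] && echo "all hosts reachable" || echo "some hosts unreachable"
```

Run the same loop from every VM, not just one, since routing or firewall problems can be asymmetric.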
Caution: If the VMs cannot communicate with all the other hosts in the Cloud CPE Centralized Deployment Model, the installation can fail.