Provisioning VMs on Contrail Service Orchestration Nodes or Servers
Virtual Machines (VMs) on the central and regional Contrail Service Orchestration nodes or servers host the infrastructure services and some other components. All servers and VMs for the solution should be in the same subnet. To set up the VMs, you can:
- Use the provisioning tool to create and configure the VMs if you use the KVM hypervisor on a Contrail Service Orchestration node or server. The tool also installs Ubuntu in the VMs.
- Create and configure the VMs manually if you use a supported hypervisor other than KVM on the Contrail Service Orchestration node or server.
- Manually configure VMs that you already created on a Contrail Service Orchestration node or server.
The VMs required on a Contrail Service Orchestration node or server depend on whether you configure:
- A demo environment, which does not offer HA. See Table 1.
- A production environment without high availability (HA). See Table 2.
- A trial HA environment. See Table 3.
- A production environment with HA. See Table 4.
Table 1 shows complete details about the VMs for a demo environment.
Table 1: Details of VMs for a Demo Environment
Name of VM | Components That Installer Places in VM | Resources Required | Ports to Open |
---|---|---|---|
csp-installer-vm | — | | See Table 5. |
csp-central-infravm | Third-party applications used as infrastructure services | | See Table 5. |
csp-central-msvm | All microservices, including GUI applications | | See Table 5. |
csp-regional-infravm | Third-party applications used as infrastructure services | | See Table 5. |
csp-regional-msvm | All microservices, including GUI applications | | See Table 5. |
csp-space-vm | Junos Space Virtual Appliance and database—required only if you deploy virtualized network functions (VNFs) that use this EMS | | See Table 5. |
csp-contrail-analytics-vm | For a distributed deployment, you install Contrail on this VM to use Contrail Analytics. For a centralized deployment, you can use the Contrail OpenStack in the Contrail Cloud Platform. | | See Table 5. |
Table 2 shows complete details about the VMs required for a production environment without HA.
Table 2: Details of VMs for a Production Environment Without HA
Name of VM or Microservice Collection | Components That Installer Places in VM | Resources Required | Ports to Open |
---|---|---|---|
csp-installer-vm | — | | See Table 5. |
csp-central-infravm | Third-party applications used as infrastructure services | | See Table 5. |
csp-central-msvm | All microservices, including GUI applications | | See Table 5. |
csp-regional-infravm | Third-party applications used as infrastructure services | | See Table 5. |
csp-regional-msvm | All microservices, including GUI applications | | See Table 5. |
csp-space-vm | Junos Space Virtual Appliance and database—required only if you deploy VNFs that use this EMS | | See Table 5. |
csp-contrail-analytics-vm | For a distributed deployment, you install Contrail OpenStack on this VM. For a centralized deployment, you can use Contrail OpenStack in the Contrail Cloud Platform for Contrail Analytics functionality. | | See Table 5. |
csp-central-elkvm | Logging applications | | See Table 5. |
csp-regional-elkvm | Logging applications | | See Table 5. |
Table 3 shows complete details about the VMs for a trial HA environment.
Table 3: Details of VMs for a Trial HA Environment
Name of VM or Microservice Collection | Components That Installer Places in VM | Resources Required | Ports to Open |
---|---|---|---|
csp-installer-vm | — | | See Table 5. |
csp-central-infravm1 | Third-party applications used as infrastructure services | | See Table 5. |
csp-central-infravm2 | Third-party applications used as infrastructure services | | See Table 5. |
csp-central-infravm3 | Third-party applications used as infrastructure services | | See Table 5. |
csp-central-lbvm1 | Load-balancing applications | | See Table 5. |
csp-central-lbvm2 | Load-balancing applications | | See Table 5. |
csp-central-lbvm3 | Load-balancing applications | | See Table 5. |
csp-central-msvm1 | All microservices, including GUI applications | | See Table 5. |
csp-central-msvm2 | All microservices, including GUI applications | | See Table 5. |
csp-regional-infravm1 | Third-party applications used as infrastructure services | | See Table 5. |
csp-regional-infravm2 | Third-party applications used as infrastructure services | | See Table 5. |
csp-regional-infravm3 | Third-party applications used as infrastructure services | | See Table 5. |
csp-regional-msvm1 | All microservices, including GUI applications | | See Table 5. |
csp-regional-msvm2 | All microservices, including GUI applications | | See Table 5. |
csp-regional-lbvm1 | Load-balancing applications | | See Table 5. |
csp-regional-lbvm2 | Load-balancing applications | | See Table 5. |
csp-regional-lbvm3 | Load-balancing applications | | See Table 5. |
csp-space-vm | Junos Space Virtual Appliance and database—required only if you deploy VNFs that use this EMS | | See Table 5. |
csp-contrail-analytics-vm | For a distributed deployment, the administrators install Contrail Analytics on this VM. For a centralized or combined deployment, you can use the Contrail OpenStack in the Contrail Cloud Platform. | | See Table 5. |
Table 4 shows complete details about the VMs for a production environment with HA.
Table 4: Details of VMs for a Production Environment with HA
Name of VM or Microservice Collection | Components That Installer Places in VM | Resources Required | Ports to Open |
---|---|---|---|
csp-installer-vm | — | | See Table 5. |
csp-central-infravm1 | Third-party applications used as infrastructure services | | See Table 5. |
csp-central-infravm2 | Third-party applications used as infrastructure services | | See Table 5. |
csp-central-infravm3 | Third-party applications used as infrastructure services | | See Table 5. |
csp-central-lbvm1 | Load-balancing applications | | See Table 5. |
csp-central-lbvm2 | Load-balancing applications | | See Table 5. |
csp-central-lbvm3 | Load-balancing applications | | See Table 5. |
csp-central-msvm1 | All microservices, including GUI applications | | See Table 5. |
csp-central-msvm2 | All microservices, including GUI applications | | See Table 5. |
csp-central-msvm3 | All microservices, including GUI applications | | See Table 5. |
csp-regional-infravm1 | Third-party applications used as infrastructure services | | See Table 5. |
csp-regional-infravm2 | Third-party applications used as infrastructure services | | See Table 5. |
csp-regional-infravm3 | Third-party applications used as infrastructure services | | See Table 5. |
csp-regional-msvm1 | All microservices, including GUI applications | | See Table 5. |
csp-regional-msvm2 | All microservices, including GUI applications | | See Table 5. |
csp-regional-msvm3 | All microservices, including GUI applications | | See Table 5. |
csp-regional-lbvm1 | Load-balancing applications | | See Table 5. |
csp-regional-lbvm2 | Load-balancing applications | | See Table 5. |
csp-regional-lbvm3 | Load-balancing applications | | See Table 5. |
csp-space-vm | Junos Space Virtual Appliance and database—required only if you deploy VNFs that use this EMS | | See Table 5. |
csp-contrail-analytics-vm | For a distributed deployment, the administrators install Contrail Analytics on this VM. For a centralized or combined deployment, you can use the Contrail OpenStack in the Contrail Cloud Platform. | | See Table 5. |
csp-central-elkvm1 | Logging applications | | See Table 5. |
csp-central-elkvm2 | Logging applications | | See Table 5. |
csp-central-elkvm3 | Logging applications | | See Table 5. |
csp-regional-elkvm1 | Logging applications | | See Table 5. |
csp-regional-elkvm2 | Logging applications | | See Table 5. |
csp-regional-elkvm3 | Logging applications | | See Table 5. |
Table 5 lists the ports that you need to open for each VM.
Table 5: Ports to Open on VMs in the Cloud CPE Solution
22 | 2379 | 5000 | 5671 | 7070 | 8085 | 9101 | 9210 | 35357 |
80 through 84 | 2380 | 5044 | 5672 | 7804 | 8086 | 9102 | 10000 | — |
91 | 2888 | 5543 | 6000 | 8006 | 8090 | 9104 | 10248 | — |
443 | 3000 | 5601 | 6001 | 8016 | 8091 | 9108 | 10255 | — |
1414 | 3306 | 5664 | 6002 | 8080 | 9042 | 9141 | 15100 | — |
1947 | 3888 | 5665 | 6379 | 8082 | 9090 | 9160 | 15672 | — |
2181 | 4001 | 5666 | 6543 | 8083 | 9091 | 9200 | 30000 through 32767 | — |
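After provisioning, you can confirm that a VM actually exposes the required ports with a simple TCP connect probe. A minimal sketch (the host address and the sample port list are illustrative placeholders, not values mandated by this document):

```python
import socket

def is_port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

def check_ports(host, ports):
    """Map each port number to its reachability from this machine."""
    return {port: is_port_open(host, port) for port in ports}

# Example: probe a few of the ports from Table 5 on a hypothetical infra VM.
# status = check_ports("192.168.1.4", [22, 443, 5672, 9200, 35357])
```

Note that this checks only TCP reachability from the probing host; a firewall between hosts can still block a port that is open locally.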
The following sections describe the procedures for provisioning the VMs:
- Before You Begin
- Creating a Bridge Interface to Enable VMs to Communicate With the Network
- Creating a Data Interface for a Distributed Deployment
- Downloading the Installer
- Customizing the Configuration File for the Provisioning Tool
- Provisioning VMs with the Provisioning Tool
- Manually Provisioning VMs on the Contrail Service Orchestration Node or Server
- Verifying Connectivity of the VMs
- Copying the Installer Package to the Installer VM
Before You Begin
Before you begin, you must:
- Configure the physical servers or nodes.
- For a centralized deployment, configure the Contrail Cloud Platform and install Contrail OpenStack.
Creating a Bridge Interface to Enable VMs to Communicate With the Network
If you use the KVM hypervisor, before you create VMs, you must create a bridge interface on the physical server that maps the primary network interface (Ethernet management interface) on each Contrail Service Orchestration node or server to a virtual interface. This action enables the VMs to communicate with the network.
To create the bridge interface:
- On the Contrail Service Orchestration node or server, log in as root.
- Update the index files of the software packages installed on the server to reference the latest versions.
root@host:~/# apt-get update
- View the network interfaces configured on the server to obtain the name of the primary interface on the server.
root@host:~/# ifconfig
- Install the libvirt software.
root@host:~/# apt-get install libvirt-bin
- View the list of network interfaces, which now includes the virtual interface virbr0.
root@host:~/# ifconfig
- Open the file /etc/network/interfaces and modify it to map the primary network interface to the virtual interface virbr0. For example, use the following configuration to map the primary interface eth0 to the virtual interface virbr0:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet manual
up ifconfig eth0 0.0.0.0 up

auto virbr0
iface virbr0 inet static
bridge_ports eth0
address 192.168.1.2
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255
gateway 192.168.1.1
dns-nameservers 8.8.8.8
dns-search example.net
- Modify the default virtual network by customizing the file default.xml:
  - Customize the IP address and subnet mask to match the values for the virbr0 interface in the file /etc/network/interfaces.
  - Turn off the Spanning Tree Protocol (STP) option.
  - Remove the NAT and DHCP configurations.
For example:
root@host:~/# virsh net-edit default
Before modification:
<network>
  <name>default</name>
  <uuid>0f04ffd0-a27c-4120-8873-854bbfb02074</uuid>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <ip address='192.168.1.2' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.1.1' end='192.168.1.254'/>
    </dhcp>
  </ip>
</network>
After modification:
<network>
  <name>default</name>
  <uuid>0f04ffd0-a27c-4120-8873-854bbfb02074</uuid>
  <bridge name='virbr0' stp='off' delay='0'/>
  <ip address='192.168.1.2' netmask='255.255.255.0'>
  </ip>
</network>
- Reboot the node and log in as root again.
- Verify that the primary network interface is mapped to the virbr0 interface.
root@host:~/# brctl show
bridge name     bridge id           STP enabled     interfaces
virbr0          8000.0cc47a010808   no              em1
                                                    vnet1
                                                    vnet2
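If you apply the same network definition changes on many hosts, the edit can be scripted instead of performed interactively in virsh net-edit. A minimal sketch using Python's xml.etree (an illustrative assumption, not part of the official procedure; you would apply it to XML dumped from libvirt and then redefine the network):

```python
import xml.etree.ElementTree as ET

def strip_nat_and_dhcp(network_xml):
    """Apply the three edits from the steps above to a libvirt network XML:
    remove <forward>, set stp='off' on <bridge>, and remove <dhcp> from <ip>."""
    root = ET.fromstring(network_xml)
    # Remove the NAT configuration.
    for fwd in root.findall("forward"):
        root.remove(fwd)
    # Turn off Spanning Tree Protocol on the bridge.
    bridge = root.find("bridge")
    if bridge is not None:
        bridge.set("stp", "off")
    # Remove the DHCP range from the <ip> element.
    ip = root.find("ip")
    if ip is not None:
        for dhcp in ip.findall("dhcp"):
            ip.remove(dhcp)
    return ET.tostring(root, encoding="unicode")
```

The IP address and netmask customization is deliberately left manual here, since those values differ per host.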
Creating a Data Interface for a Distributed Deployment
For a distributed deployment, you create a second bridge interface that the VMs use to send data communications to the CPE device.
To create a data interface:
- Log in to the server as root.
- Configure the new virtual interface and map it to a physical interface. For example:
root@host:~/# brctl addbr virbr1
root@host:~/# brctl addif virbr1 eth1
- Create an XML file with the name virbr1.xml in the directory /var/lib/libvirt/network.
- Paste the following content into the virbr1.xml file, and edit the file to match the actual settings for your interface. For example:
<network>
  <name>default</name>
  <uuid>0f04ffd0-a27c-4120-8873-854bbfb02074</uuid>
  <bridge name='virbr1' stp='off' delay='0'/>
  <ip address='192.0.2.1' netmask='255.255.255.0'>
  </ip>
</network>
- Open the /etc/network/interfaces file and add the details for the second interface. For example:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet manual
up ifconfig eth0 0.0.0.0 up

auto eth1
iface eth1 inet manual
up ifconfig eth1 0.0.0.0 up

auto virbr0
iface virbr0 inet static
bridge_ports eth0
address 192.168.1.2
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255
gateway 192.168.1.1
dns-nameservers 8.8.8.8
dns-search example.net

auto virbr1
iface virbr1 inet static
bridge_ports eth1
- Reboot the server.
- Verify that the secondary network interface, eth1, is mapped to the virbr1 interface.
root@host:~/# brctl show
bridge name     bridge id           STP enabled     interfaces
virbr0          8000.0cc47a010808   no              em1
                                                    vnet1
                                                    vnet2
virbr1          8000.0cc47a010809   no              em2
                                                    vnet0
- Configure the IP address for the interface. You do not specify an IP address for the data interface when you create it.
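When you verify bridge mappings across several Contrail Service Orchestration nodes, parsing the brctl show output programmatically is less error-prone than reading it by eye. A minimal sketch, assuming the column layout shown above (indented continuation lines list additional interfaces for the previous bridge):

```python
def parse_brctl_show(output):
    """Parse `brctl show` output into {bridge_name: [interfaces]}."""
    bridges = {}
    current = None
    for line in output.splitlines()[1:]:  # skip the header row
        if not line.strip():
            continue
        if not line[0].isspace():
            # New bridge row: name, bridge id, STP flag, optional first interface.
            fields = line.split()
            current = fields[0]
            bridges[current] = fields[3:]
        elif current:
            # Continuation line: an additional interface on the current bridge.
            bridges[current].extend(line.split())
    return bridges

# Example check that eth1/em2 landed on virbr1:
# assert "em2" in parse_brctl_show(output)["virbr1"]
```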
Downloading the Installer
To download third-party software and deploy the installer:
- Log in as root to the central server.
The current directory is the home directory.
- Copy the installer package to the home directory.
  - Use the Contrail Service Orchestration installer if you purchased licenses for a centralized deployment or both Network Service Orchestrator and Network Service Controller licenses for a distributed deployment. This option includes all the Contrail Service Orchestration graphical user interfaces (GUIs).
  - Use the Network Service Controller installer if you purchased only Network Service Controller licenses for a distributed deployment. This option includes Administration Portal and Service and Infrastructure Monitor, but not the Designer Tools and Customer Portal.
- Expand the installer package, which has a name specific to the release. For example, if the name of the installer package is cspVersion.tar.gz:
root@host:~/# tar -xvf cspVersion.tar.gz
The expanded package is a directory that has the same name as the installer package and contains the installation files.
Customizing the Configuration File for the Provisioning Tool
The provisioning tool uses a configuration file, which you must customize for your network. The configuration file consists of sections of key-value pairs in an INI-style format, as shown in the sample configuration files later in this topic.
To customize the configuration file:
- Log in as root to the host on which you deployed the installer.
- Access the confs directory that contains the example configuration files. For example, if the name of the installer directory is cspVersion:
root@host:~/# cd cspVersion/confs
- Access the directory for the environment that you want
to configure.
Table 6 shows the directories that contain the files and the names of the example configuration files.
Table 6: Location of Example Configuration Files for Provisioning VMs
Environment | Directory for Example Configuration File
---|---
Demo environment (without HA) | cso/demo/nonha/provisionvm
Production environment without HA | cso/production/nonha/provisionvm
Trial HA environment | cso/production/trial/ha/provisionvm
Production environment with HA | cso/production/ha/provisionvm
- Make a copy of the example configuration file in the directory and name it provision_vm.conf. For example:
root@host:~/cspVersion/confs/cso/demo/nonha/provisionvm# cp provision_example.conf provision_vm.conf
- Open the file provision_vm.conf with a text editor.
- In the [TARGETS] section, specify the following values for the network on which the Cloud CPE solution resides.
- installer_ip—IP address of the management interface of the host on which you deployed the installer.
- ntp_servers—Comma-separated list of Network Time Protocol (NTP) servers. For networks within firewalls, specify NTP servers specific to your network.
- physical—Comma-separated list of hostnames of the Contrail Service Orchestration nodes or servers.
- virtual—Comma-separated list of names of the virtual machines (VMs) on the Contrail Service Orchestration servers.
- Specify the following configuration values for each Contrail Service Orchestration node or server that you specified in Step 6.
- [hostname]—Hostname of the Contrail Service Orchestration node or server
- management_address—IP address of the Ethernet management (primary) interface
- management_interface—Name of the Ethernet management interface, virbr0
- gateway—IP address of the gateway for the host
- dns_search—Domain for DNS operations
- dns_servers—Comma-separated list of DNS name servers, including DNS servers specific to your network
- hostname—Hostname of the node
- username—Username for logging in to the node
- password—Password for logging in to the node
- data_interface—Name of the data interface. Leave blank for a centralized deployment and specify the name of the data interface, such as virbr1, that you configured for a distributed deployment.
- Except for the Junos Space Virtual Appliance and Junos Space database VMs, specify configuration values for each VM that you specified in Step 6.
- [VM name]—Name of the VM
- management_address—IP address of the Ethernet management interface
- hostname—Fully qualified domain name (FQDN) of the VM
- username—Login name of user who can manage all VMs
- password—Password for user who can manage all VMs
- local_user—Login name of user who can manage this VM
- local_password—Password for user who can manage this VM
- guest_os—Name of the operating system
- host_server—Hostname of the Contrail Service Orchestration node or server
- memory—Required amount of RAM in GB
- vCPU—Required number of virtual central processing units (vCPUs)
- enable_data_interface—true enables the VM to transmit data and false prevents the VM from transmitting data. The default is true.
- For the Junos Space Virtual Appliance and Junos Space database VMs, specify configuration values for each VM that you specified in Step 6.
- [VM name]—Name of the VM.
- management_address—IP address of the Ethernet management interface.
- web_address—Virtual IP (VIP) address of the primary Junos Space Virtual Appliance. (Setting only required for the VM on which the primary Junos Space Virtual Appliance resides.)
- gateway—IP address of the gateway for the host. If you do not specify a value, the value defaults to the gateway defined for the Contrail Service Orchestration node or server that hosts the VM.
- nameserver_address—IP address of the DNS nameserver.
- hostname—FQDN of the VM.
- username—Username for logging in to Junos Space.
- password—Default password for logging in to Junos Space.
- newpassword—Password that you provide when you configure the Junos Space appliance.
- guest_os—Name of the operating system.
- host_server—Hostname of the Contrail Service Orchestration node or server.
- memory—Required amount of RAM in GB.
- vCPU—Required number of virtual central processing units (vCPUs).
- spacedb—(Only for Junos Space database VMs) true.
- In the [MYSQL] section, specify the following configuration settings:
  - remote_user—Username for logging in to the Junos Space database
  - remote_password—Password for logging in to the Junos Space database
- Save the file.
- Run the following command to start the virtual machines:
root@host:~/# ./provision_vm.sh
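Because provision_vm.conf uses [section] key = value syntax, Python's configparser can sanity-check the file before you run the tool. A minimal sketch (the required-key list is an illustrative subset chosen for this example, not the provisioning tool's own validation logic):

```python
import configparser

# Illustrative subset of keys every VM section is expected to carry.
REQUIRED_VM_KEYS = {"management_address", "hostname", "host_server"}

def validate_provision_conf(text):
    """Return a list of (section, problem) tuples found in the config text."""
    cfg = configparser.ConfigParser()
    cfg.read_string(text)
    problems = []
    vms = [v.strip() for v in cfg["TARGETS"]["virtual"].split(",")]
    for vm in vms:
        if vm not in cfg:
            # VM listed in TARGETS but has no [section] of its own.
            problems.append((vm, "section missing"))
            continue
        for key in sorted(REQUIRED_VM_KEYS - set(cfg[vm])):
            problems.append((vm, key))
    return problems
```

Running this against the file before invoking provision_vm.sh catches the most common editing mistake: a VM named in the virtual list whose [section] was forgotten or misspelled.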
The following examples show customized configuration files for the different deployments:
- Demo environment (see Sample Configuration File for Provisioning VMs in a Demo Environment).
- Production environment without HA (see Sample Configuration File for Provisioning VMs in a Production Environment Without HA).
- Trial HA environment (see Sample Configuration File for Provisioning VMs in a Trial HA Environment).
- Production environment with HA (see Sample Configuration File for Provisioning VMs in a Production Environment with HA).
Sample Configuration File for Provisioning VMs in a Demo Environment
# This config file is used to provision KVM-based virtual machines using libvirt manager.
[TARGETS]
# Mention primary host (installer host) management_ip
installer_ip =
ntp_servers = ntp.juniper.net

# The physical server where the Virtual Machines should be provisioned
# There can be one or more physical servers on
# which virtual machines can be provisioned
physical = cso-central-host, cso-regional-host

# The list of virtual servers to be provisioned.
virtual = csp-central-infravm, csp-central-msvm, csp-regional-infravm, csp-regional-msvm, csp-space-vm, csp-installer-vm, csp-contrailanalytics-vm

# Physical Server Details
[cso-central-host]
management_address = 192.168.1.2/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-central-host
username = root
password = passw0rd
data_interface =

[cso-regional-host]
management_address = 192.168.1.3/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-regional-host
username = root
password = passw0rd
data_interface =

# VM Details
[csp-central-infravm]
management_address = 192.168.1.4/24
hostname = centralinfravm.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host
memory = 16384
vcpu = 4
enable_data_interface = true

[csp-central-msvm]
management_address = 192.168.1.5/24
hostname = centralmsvm.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host
memory = 16384
vcpu = 4
enable_data_interface = true

[csp-regional-infravm]
management_address = 192.168.1.6/24
hostname = regionalinfravm.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host
memory = 16384
vcpu = 4
enable_data_interface = true

[csp-regional-msvm]
management_address = 192.168.1.7/24
hostname = regionalmsvm.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host
memory = 16384
vcpu = 4
enable_data_interface = true

[csp-space-vm]
management_address = 192.168.1.8/24
web_address = 192.168.1.9/24
gateway = 192.168.1.1
nameserver_address = 192.168.1.254
hostname = spacevm.example.net
username = admin
password = abc123
newpassword = jnpr123!
guest_os = space
host_server = cso-regional-host
memory = 16384
vcpu = 4

[csp-installer-vm]
management_address = 192.168.1.10/24
hostname = installervm.example.net
username = root
password = passw0rd
local_user = installervm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host
memory = 16384
vcpu = 4
enable_data_interface = true

[csp-contrailanalytics-vm]
management_address = 192.168.1.11/24
hostname = canvm.example.net
username = root
password = passw0rd
local_user = installervm
local_password = passw0rd
guest_os = canubuntu
host_server = cso-central-host
memory = 16384
vcpu = 4
enable_data_interface = true
Sample Configuration File for Provisioning VMs in a Production Environment Without HA
# This config file is used to provision KVM-based virtual machines using libvirt manager.
[TARGETS]
# Mention primary host (installer host) management_ip
installer_ip =
ntp_servers = ntp.juniper.net

# The physical server where the Virtual Machines should be provisioned
# There can be one or more physical servers on
# which virtual machines can be provisioned
physical = cso-central-host, cso-regional-host
# Note: Central and Regional physical servers are used as "csp-central-ms" and "csp-regional-ms" servers.

# The list of virtual servers to be provisioned.
virtual = csp-central-infravm, csp-regional-infravm, csp-installer-vm, csp-space-vm, csp-contrailanalytics-vm, csp-central-elkvm, csp-regional-elkvm, csp-central-msvm, csp-regional-msvm

# Physical Server Details
[cso-central-host]
management_address = 192.168.1.2/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-central-host
username = root
password = passw0rd
data_interface =

[cso-regional-host]
management_address = 192.168.1.3/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-regional-host
username = root
password = passw0rd
data_interface =

# VM Details
[csp-central-infravm]
management_address = 192.168.1.4/24
hostname = centralinfravm.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host
memory = 65536
vcpu = 16
enable_data_interface = true

[csp-regional-infravm]
management_address = 192.168.1.5/24
hostname = regionalinfravm.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host
memory = 65536
vcpu = 16
enable_data_interface = true

[csp-space-vm]
management_address = 192.168.1.6/24
web_address = 192.168.1.7/24
gateway = 192.168.1.1
nameserver_address = 192.168.1.254
hostname = spacevm.example.net
username = admin
password = abc123
newpassword = jnpr123!
guest_os = space
host_server = cso-regional-host
memory = 32768
vcpu = 4

[csp-installer-vm]
management_address = 192.168.1.8/24
hostname = installer.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host
memory = 32768
vcpu = 4
enable_data_interface = true

[csp-contrailanalytics-vm]
management_address = 192.168.1.9/24
hostname = canvm.example.net
username = root
password = passw0rd
local_user = installervm
local_password = passw0rd
guest_os = canubuntu
host_server = cso-central-host
memory = 32768
vcpu = 8
enable_data_interface = true

[csp-central-elkvm]
management_address = 192.168.1.10/24
hostname = centralelkvm.example.net
username = root
password = passw0rd
local_user = elkvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host
memory = 32768
vcpu = 4
enable_data_interface = true

[csp-regional-elkvm]
management_address = 192.168.1.11/24
hostname = regionalelkvm.example.net
username = root
password = passw0rd
local_user = elkvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host
memory = 32768
vcpu = 4
enable_data_interface = true

[csp-central-msvm]
management_address = 192.168.1.12/24
hostname = centralmsvm.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host
memory = 65536
vcpu = 16
enable_data_interface = true

[csp-regional-msvm]
management_address = 192.168.1.13/24
hostname = regionalmsvm.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host
memory = 65536
vcpu = 16
enable_data_interface = true
Sample Configuration File for Provisioning VMs in a Trial HA Environment
# This config file is used to provision KVM-based virtual machines using libvirt manager.
[TARGETS]
# Mention primary host (installer host) management_ip
installer_ip =
ntp_servers = ntp.juniper.net

# The physical server where the Virtual Machines should be provisioned
# There can be one or more physical servers on
# which virtual machines can be provisioned
physical = cso-host1, cso-host2, cso-host3

# The list of virtual servers to be provisioned.
virtual = csp-central-infravm1, csp-central-infravm2, csp-central-infravm3, csp-central-msvm1, csp-central-msvm2, csp-regional-infravm1, csp-regional-infravm2, csp-regional-infravm3, csp-regional-msvm1, csp-regional-msvm2, csp-contrailanalytics-vm, csp-central-lb-vm1, csp-central-lb-vm2, csp-central-lb-vm3, csp-regional-lb-vm1, csp-regional-lb-vm2, csp-regional-lb-vm3, csp-space-vm, csp-installer-vm

# Physical Server Details
[cso-host1]
management_address = 192.168.1.2/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-host1
username = root
password = passw0rd
data_interface =

[cso-host2]
management_address = 192.168.1.3/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-host2
username = root
password = passw0rd
data_interface =

[cso-host3]
management_address = 192.168.1.4/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-host3
username = root
password = passw0rd
data_interface =

# VM Details
[csp-central-infravm1]
management_address = 192.168.1.5/24
hostname = centralinfravm1.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host1
memory = 49152
vcpu = 6
enable_data_interface = true

[csp-central-infravm2]
management_address = 192.168.1.6/24
hostname = centralinfravm2.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host2
memory = 49152
vcpu = 6
enable_data_interface = true

[csp-central-infravm3]
management_address = 192.168.1.7/24
hostname = centralinfravm3.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host3
memory = 49152
vcpu = 6
enable_data_interface = true

[csp-central-msvm1]
management_address = 192.168.1.8/24
hostname = centralmsvm1.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host1
memory = 49152
vcpu = 6
enable_data_interface = true

[csp-central-msvm2]
management_address = 192.168.1.9/24
hostname = centralmsvm2.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host2
memory = 49152
vcpu = 6
enable_data_interface = true

[csp-regional-infravm1]
management_address = 192.168.1.10/24
hostname = regionalinfravm1.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host1
memory = 49152
vcpu = 6
enable_data_interface = true

[csp-regional-infravm2]
management_address = 192.168.1.11/24
hostname = regionalinfravm2.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host2
memory = 49152
vcpu = 6
enable_data_interface = true

[csp-regional-infravm3]
management_address = 192.168.1.12/24
hostname = regionalinfravm3.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host3
memory = 49152
vcpu = 6
enable_data_interface = true

[csp-regional-msvm1]
management_address = 192.168.1.13/24
hostname = regionalmsvm1.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host3
memory = 49152
vcpu = 6
enable_data_interface = true

[csp-regional-msvm2]
management_address = 192.168.1.14/24
hostname = regionalmsvm2.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host2
memory = 49152
vcpu = 6
enable_data_interface = true

[csp-space-vm]
management_address = 192.168.1.15/24
web_address = 192.168.1.16/24
gateway = 192.168.1.1
nameserver_address = 192.168.1.254
hostname = spacevm.example.net
username = admin
password = abc123
newpassword = jnpr123!
guest_os = space
host_server = cso-host3
memory = 16384
vcpu = 6

[csp-installer-vm]
management_address = 192.168.1.17/24
hostname = installervm.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host3
memory = 16384
vcpu = 6
enable_data_interface = true

[csp-contrailanalytics-vm]
management_address = 192.168.1.18/24
hostname = canvm.example.net
username = root
password = passw0rd
local_user = installervm
local_password = passw0rd
guest_os = canubuntu
host_server = cso-host3
memory = 49152
vcpu = 6
enable_data_interface = true

[csp-central-lb-vm1]
management_address = 192.168.1.19/24
hostname = centrallbvm1.example.net
username = root
password = passw0rd
local_user = lbvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host1
memory = 16384
vcpu = 4
enable_data_interface = true

[csp-central-lb-vm2]
management_address = 192.168.1.20/24
hostname = centrallbvm2.example.net
username = root
password = passw0rd
local_user = lbvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host2
memory = 16384
vcpu = 4
enable_data_interface = true

[csp-central-lb-vm3]
management_address = 192.168.1.21/24
hostname = centrallbvm3.example.net
username = root
password = passw0rd
local_user = lbvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host3
memory = 16384
vcpu = 4
enable_data_interface = true

[csp-regional-lb-vm1]
management_address = 192.168.1.22/24
hostname = regionallbvm1.example.net
username = root
password = passw0rd
local_user = lbvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host1
memory = 16384
vcpu = 4
enable_data_interface = true

[csp-regional-lb-vm2]
management_address = 192.168.1.23/24
hostname = regionallbvm2.example.net
username = root
password = passw0rd
local_user = lbvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host1
memory = 16384
vcpu = 4
enable_data_interface = true

[csp-regional-lb-vm3]
management_address = 192.168.1.24/24
hostname = regionallbvm3.example.net
username = root
password = passw0rd
local_user = lbvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host3
memory = 16384
vcpu = 4
enable_data_interface = true
Sample Configuration File for Provisioning VMs in a Production Environment with HA
# This config file is used to provision KVM-based virtual machines using libvirt manager.
[TARGETS]
# Mention primary host (installer host) management_ip
installer_ip =
ntp_servers = ntp.juniper.net
# The physical server where the Virtual Machines should be provisioned
# There can be one or more physical servers on
# which virtual machines can be provisioned
physical = cso-central-host1, cso-central-host2, cso-central-host3, cso-regional-host1, cso-regional-host2, cso-regional-host3
# Note: Central and Regional physical servers are used as "csp-central-ms1", "csp-central-ms2", "csp-central-ms3" and "csp-regional-ms1", "csp-regional-ms2", "csp-regional-ms3" servers.
# The list of virtual servers to be provisioned.
virtual = csp-central-infravm1, csp-central-infravm2, csp-central-infravm3, csp-regional-infravm1, csp-regional-infravm2, csp-regional-infravm3, csp-central-lbvm1, csp-central-lbvm2, csp-central-lbvm3, csp-regional-lbvm1, csp-regional-lbvm2, csp-regional-lbvm3, csp-space-vm, csp-installer-vm, csp-contrailanalytics-vm, csp-central-elkvm1, csp-central-elkvm2, csp-central-elkvm3, csp-regional-elkvm1, csp-regional-elkvm2, csp-regional-elkvm3, csp-central-msvm1, csp-central-msvm2, csp-central-msvm3, csp-regional-msvm1, csp-regional-msvm2, csp-regional-msvm3

# Physical Server Details
[cso-central-host1]
management_address = 192.168.1.2/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-central-host1
username = root
password = passw0rd
data_interface =

[cso-central-host2]
management_address = 192.168.1.3/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-central-host2
username = root
password = passw0rd
data_interface =

[cso-central-host3]
management_address = 192.168.1.4/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-central-host3
username = root
password = passw0rd
data_interface =

[cso-regional-host1]
management_address = 192.168.1.5/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-regional-host1
username = root
password = passw0rd
data_interface =

[cso-regional-host2]
management_address = 192.168.1.6/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-regional-host2
username = root
password = passw0rd
data_interface =

[cso-regional-host3]
management_address = 192.168.1.7/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-regional-host3
username = root
password = passw0rd
data_interface =

# VM Details
[csp-central-infravm1]
management_address = 192.168.1.8/24
hostname = centralinfravm1.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host1
memory = 65536
vcpu = 16
enable_data_interface = true

[csp-central-infravm2]
management_address = 192.168.1.9/24
hostname = centralinfravm2.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host2
memory = 65536
vcpu = 16
enable_data_interface = true

[csp-central-infravm3]
management_address = 192.168.1.10/24
hostname = centralinfravm3.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host3
memory = 65536
vcpu = 16
enable_data_interface = true

[csp-regional-infravm1]
management_address = 192.168.1.11/24
hostname = regionalinfravm1.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host1
memory = 65536
vcpu = 16
enable_data_interface = true

[csp-regional-infravm2]
management_address = 192.168.1.12/24
hostname = regionalinfravm2.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host2
memory = 65536
vcpu = 16
enable_data_interface = true

[csp-regional-infravm3]
management_address = 192.168.1.13/24
hostname = regionalinfravm3.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host3
memory = 65536
vcpu = 16
enable_data_interface = true

[csp-space-vm]
management_address = 192.168.1.14/24
web_address = 192.168.1.13/24
gateway = 192.168.1.1
nameserver_address = 192.168.1.254
hostname = spacevm.example.net
username = admin
password = abc123
newpassword = jnpr123!
guest_os = space
host_server = cso-regional-host2
memory = 32768
vcpu = 4

[csp-installer-vm]
management_address = 192.168.1.15/24
hostname = installervm.example.net
username = root
password = passw0rd
local_user = installervm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host2
memory = 32768
vcpu = 4
enable_data_interface = true

[csp-contrailanalytics-vm]
management_address = 192.168.1.16/24
hostname = canvm.example.net
username = root
password = passw0rd
local_user = installervm
local_password = passw0rd
guest_os = canubuntu
host_server = cso-regional-host2
memory = 32768
vcpu = 8

[csp-central-lbvm1]
management_address = 192.168.1.17/24
hostname = centrallbvm1.example.net
username = root
password = passw0rd
local_user = lbvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host1
memory = 32768
vcpu = 4
enable_data_interface = true

[csp-central-lbvm2]
management_address = 192.168.1.18/24
hostname = centrallbvm2.example.net
username = root
password = passw0rd
local_user = lbvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host2
memory = 32768
vcpu = 4
enable_data_interface = true

[csp-central-lbvm3]
management_address = 192.168.1.19/24
hostname = centrallbvm3.example.net
username = root
password = passw0rd
local_user = lbvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host3
memory = 32768
vcpu = 4
enable_data_interface = true

[csp-regional-lbvm1]
management_address = 192.168.1.20/24
hostname = regionallbvm1.example.net
username = root
password = passw0rd
local_user = lbvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host1
memory = 32768
vcpu = 4
enable_data_interface = true

[csp-regional-lbvm2]
management_address = 192.168.1.21/24
hostname = regionallbvm2.example.net
username = root
password = passw0rd
local_user = lbvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host2
memory = 32768
vcpu = 4
enable_data_interface = true

[csp-regional-lbvm3]
management_address = 192.168.1.22/24
hostname = regionallbvm3.example.net
username = root
password = passw0rd
local_user = lbvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host3
memory = 32768
vcpu = 4
enable_data_interface = true

[csp-central-elkvm1]
management_address = 192.168.1.23/24
hostname = centralelkvm1.example.net
username = root
password = passw0rd
local_user = elkvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host1
memory = 32768
vcpu = 4
enable_data_interface = true

[csp-central-elkvm2]
management_address = 192.168.1.24/24
hostname = centralelkvm2.example.net
username = root
password = passw0rd
local_user = elkvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host2
memory = 32768
vcpu = 4
enable_data_interface = true

[csp-central-elkvm3]
management_address = 192.168.1.25/24
hostname = centralelkvm3.example.net
username = root
password = passw0rd
local_user = elkvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host3
memory = 32768
vcpu = 4
enable_data_interface = true

[csp-regional-elkvm1]
management_address = 192.168.1.26/24
hostname = regionalelkvm1.example.net
username = root
password = passw0rd
local_user = elkvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host1
memory = 32768
vcpu = 4
enable_data_interface = true

[csp-regional-elkvm2]
management_address = 192.168.1.27/24
hostname = regionalelkvm2.example.net
username = root
password = passw0rd
local_user = elkvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host2
memory = 32768
vcpu = 4
enable_data_interface = true

[csp-regional-elkvm3]
management_address = 192.168.1.28/24
hostname = regionalelkvm3.example.net
username = root
password = passw0rd
local_user = elkvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host3
memory = 32768
vcpu = 4
enable_data_interface = true

[csp-central-msvm1]
management_address = 192.168.1.29/24
hostname = centralmsvm1.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host1
memory = 65536
vcpu = 16
enable_data_interface = true

[csp-central-msvm2]
management_address = 192.168.1.30/24
hostname = centralmsvm2.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host2
memory = 65536
vcpu = 16
enable_data_interface = true

[csp-central-msvm3]
management_address = 192.168.1.31/24
hostname = centralmsvm3.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host3
memory = 65536
vcpu = 16
enable_data_interface = true

[csp-regional-msvm1]
management_address = 192.168.1.32/24
hostname = regionalmsvm1.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host1
memory = 65536
vcpu = 16
enable_data_interface = true

[csp-regional-msvm2]
management_address = 192.168.1.33/24
hostname = regionalmsvm2.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host2
memory = 65536
vcpu = 16
enable_data_interface = true

[csp-regional-msvm3]
management_address = 192.168.1.34/24
hostname = regionalmsvm3.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host3
memory = 65536
vcpu = 16
enable_data_interface = true
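Because all servers and VMs for the solution must be in the same subnet, it can help to validate a provisioning file before running the tool. The sketch below is illustrative, not part of the product: it parses an INI-style fragment (a two-section inline sample stands in for a full file) and confirms that every management_address falls in one network.

```python
# Sketch: sanity-check that all management addresses in a provisioning
# config share one subnet, as the solution requires. The inline SAMPLE
# is a shortened, illustrative fragment of the files shown above.
import configparser
import ipaddress

SAMPLE = """
[csp-central-infravm1]
management_address = 192.168.1.8/24
hostname = centralinfravm1.example.net

[csp-central-infravm2]
management_address = 192.168.1.9/24
hostname = centralinfravm2.example.net
"""

def same_subnet(text):
    """Return the common management network, or raise if addresses diverge."""
    cfg = configparser.ConfigParser()
    cfg.read_string(text)
    networks = set()
    for section in cfg.sections():
        address = cfg[section].get("management_address")
        if address:
            networks.add(ipaddress.ip_interface(address).network)
    if len(networks) != 1:
        raise ValueError(f"management addresses span {len(networks)} subnets")
    return networks.pop()

print(same_subnet(SAMPLE))  # → 192.168.1.0/24
```

A file whose addresses straddle two networks raises ValueError, which is cheaper to catch here than after the provisioning tool has created half the VMs.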
Provisioning VMs with the Provisioning Tool
If you use the KVM hypervisor on the Contrail Service Orchestration node or server, you can use the provisioning tool to:
- Create and configure the VMs for the Contrail Service Orchestration and Junos Space components.
- Install the operating system in the VMs:
- Ubuntu in the Contrail Service Orchestration VMs
- Junos Space Network Management Platform software in the Junos Space VMs
Note: If you use another supported hypervisor or already created VMs that you want to use, provision the VMs manually.
To provision VMs with the provisioning tool:
- Log in as root to the host on which you deployed the installer.
- Access the directory for the installer. For example, if the name of the installer directory is cspVersion:
root@host:~/# cd /~/cspVersion/
- Run the provisioning tool.
root@host:~/cspVersion/# ./provision_vm.sh
The provisioning begins.
- During installation, observe detailed messages in the log files about the provisioning of the VMs.
- provision_vm.log—Contains details about the provisioning process
- provision_vm_console.log—Contains details about the VMs
- provision_vm_error.log—Contains details about errors that occur during provisioning
For example:
root@host:~/cspVersion/# cd logs
root@host:/cspVersion/logs/# tailf provision_vm.log
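Instead of watching provision_vm_error.log interactively, you can scan it for problem lines once provisioning finishes. This is a minimal sketch under the assumption that failures mention "error" or "fail"; the log names are the ones listed above.

```python
# Sketch: surface likely failures from a provisioning log.
# The substring heuristic ("error"/"fail") is an assumption, not a
# documented log format.
def find_errors(log_text):
    """Return log lines that mention an error or failure."""
    return [line for line in log_text.splitlines()
            if "error" in line.lower() or "fail" in line.lower()]

sample = "created csp-central-infravm1\nERROR: csp-space-vm: image not found\n"
print(find_errors(sample))  # → ['ERROR: csp-space-vm: image not found']
```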
Manually Provisioning VMs on the Contrail Service Orchestration Node or Server
To manually provision VMs on each Contrail Service Orchestration node or server:
- Download and configure the specified Ubuntu images on your servers.
See Software Tested for the COTS Nodes and Servers for the required operating system for each type of VM. You may need to install multiple operating systems.
- Copy the required Ubuntu images from the Ubuntu website to separate directories on your server.
- Create an Ubuntu Cloud virtual machine disk (VMDK) for each of the images that you downloaded.
For example:
root@host:~/# cd ubuntu-version
root@host:~/# qemu-img convert -O vmdk ubuntu-14.04-server-cloudimg-amd64-disk1.img ubuntu-14.04-server-cloudimg-amd64-disk1.vmdk
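If you have several images to convert, you can derive each qemu-img command from the image file name. A small sketch (the file name is the example used above; the helper is illustrative, not part of the installer):

```python
# Sketch: build the qemu-img conversion command from the step above for a
# downloaded cloud image. shlex.join quotes arguments safely if needed.
import shlex

def vmdk_command(image):
    """Return the qemu-img command that converts a .img file to .vmdk."""
    target = image.rsplit(".img", 1)[0] + ".vmdk"
    return shlex.join(["qemu-img", "convert", "-O", "vmdk", image, target])

print(vmdk_command("ubuntu-14.04-server-cloudimg-amd64-disk1.img"))
```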
- Specify the default password for Ubuntu by creating a text file called user-data.txt with the following content in each of the Ubuntu directories.
#cloud-config
password: ubuntu
- Specify the default local host for Ubuntu by creating a text file called meta-data.txt with the following content in each of the Ubuntu directories.
local-hostname: localhost
- Create a file called seed.iso that contains the default password and host.
root@host:~/# genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data
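The three seed-related steps above can be scripted together. This is a sketch, not part of the product tooling: it writes the two cloud-init inputs and returns the genisoimage command shown above. The temporary directory is for illustration only, and the files are written as user-data and meta-data, the names the genisoimage command packs.

```python
# Sketch: generate the cloud-init seed inputs and the genisoimage command
# from the steps above. tempfile stands in for the real Ubuntu image
# directory (an assumption for illustration).
import os
import shlex
import tempfile

def write_seed_inputs(directory):
    """Write user-data and meta-data; return the genisoimage command."""
    with open(os.path.join(directory, "user-data"), "w") as f:
        f.write("#cloud-config\npassword: ubuntu\n")      # default password
    with open(os.path.join(directory, "meta-data"), "w") as f:
        f.write("local-hostname: localhost\n")            # default local host
    return shlex.join(["genisoimage", "-output", "seed.iso", "-volid",
                       "cidata", "-joliet", "-rock", "user-data", "meta-data"])

seed_dir = tempfile.mkdtemp()
command = write_seed_inputs(seed_dir)
print(command)
```

Run the returned command from inside the directory so that genisoimage finds the two files it packs.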
- Create the VMs manually using the appropriate image for the type of VM. See Software Tested for the COTS Nodes and Servers for the required operating system for each VM.
- On each Contrail Service Orchestration node or server, create VMs or reconfigure existing VMs:
- If you use a demo environment, create the VMs with the resources listed in Table 1.
- If you use a production environment without HA, create the VMs with the resources listed in Table 2.
- If you use a trial HA environment, create the VMs with the resources listed in Table 3.
- If you use a production environment with HA, create the VMs with the resources listed in Table 4.
- Configure hostnames and specify IP addresses for the Ethernet Management interfaces on each VM.
- Configure read, write, and execute permissions for the users of the VMs, so that the installer can access the VMs when you deploy the Cloud CPE solution.
- Configure DNS and Internet access for the VMs.
- If MySQL software is installed in the VMs for Service and Infrastructure Monitor, remove it.
When you install the Cloud CPE solution, the installer deploys and configures MySQL servers in these VMs. If a VM already contains MySQL software, the installer may not set up the VM correctly.
- Install OpenSSH on the VMs.
- Issue the following commands to install the OpenSSH server and client tools.
root@host:~/# apt-get install openssh-server
root@host:~/# apt-get install openssh-client
- Set the PermitRootLogin value in the /etc/ssh/sshd_config file to yes. This action enables root login through Secure Shell (SSH).
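After editing /etc/ssh/sshd_config, you can sanity-check the PermitRootLogin setting before relying on root SSH access. A minimal sketch with simplified parsing (sshd's own parser, which takes the first matching keyword, is authoritative):

```python
# Sketch: read the effective PermitRootLogin value from sshd_config text.
# Parsing is simplified: first match wins, as in sshd itself.
def permit_root_login(sshd_config_text):
    """Return the PermitRootLogin argument, or the usual default if absent."""
    for line in sshd_config_text.splitlines():
        parts = line.strip().split(None, 1)
        if len(parts) == 2 and parts[0].lower() == "permitrootlogin":
            return parts[1]
    return "prohibit-password"  # common sshd default when the keyword is absent

print(permit_root_login("Port 22\nPermitRootLogin yes\n"))  # → yes
```

In practice, read the text with open("/etc/ssh/sshd_config").read() on the VM.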
Verifying Connectivity of the VMs
From each VM, verify that you can ping the IP addresses and hostnames of all the other servers, nodes, and VMs in the Cloud CPE solution.
Caution: If the VMs cannot communicate with all the other hosts in the deployment, the installation can fail.
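The verification above can be scripted from each VM. This sketch pings every host once and reports the ones that do not answer; the probe is injectable so the logic can be exercised without network access, and the host names in the example are placeholders.

```python
# Sketch: ping each host in the deployment once and collect failures.
# The default probe shells out to ping; pass a custom probe for testing.
import subprocess

def unreachable(hosts, probe=None):
    """Return the hosts that failed a single ping."""
    if probe is None:
        probe = lambda h: subprocess.run(
            ["ping", "-c", "1", "-W", "2", h],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
        ).returncode == 0
    return [h for h in hosts if not probe(h)]

# Example with a fake probe standing in for real pings:
print(unreachable(["centralinfravm1", "centralmsvm1"],
                  probe=lambda h: h == "centralinfravm1"))
```

Run it twice per VM, once with IP addresses and once with hostnames, so that DNS resolution is verified as well as reachability.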
Copying the Installer Package to the Installer VM
After you have provisioned the VMs, move the uncompressed installer package from the central server to the installer VM.