
    Provisioning VMs on Contrail Service Orchestration Nodes or Servers

    Virtual machines (VMs) on the central and regional Contrail Service Orchestration (CSO) nodes or servers host the infrastructure services and some other components. All servers and VMs for the solution should be in the same subnet. To set up the VMs, you can:

    • Use the provisioning tool to create and configure the VMs if you use the KVM hypervisor on a CSO node or server.

      The tool also installs Ubuntu in the VMs.

    • Create and configure the VMs manually if you use a supported hypervisor other than KVM on the CSO node or server.

    • Manually configure VMs that you already created on a CSO node or server.

    The VMs required on a CSO node or server depend on whether you configure:

    • A demo environment without high availability (HA).

    • A production environment without high availability.

    • A demo environment with HA.

    • A production environment with HA.

    See Minimum Requirements for Servers and VMs for details of the VMs and associated resources required for each environment.

    The following sections describe the procedures for provisioning the VMs:

    Before You Begin

    Before you begin, you must:

    • Configure the physical servers or nodes.

    • Install Ubuntu 14.04.5 LTS as the operating system on the physical servers.

    • For a centralized deployment, configure the Contrail Cloud Platform and install Contrail OpenStack.
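    The Ubuntu release requirement can be checked on each physical server before you proceed. This is a small sketch, not part of the CSO tooling; it assumes the release string appears in /etc/os-release or /etc/lsb-release, which is standard on Ubuntu:

```shell
# Verify that the physical server runs the required Ubuntu release.
# The expected string is an assumption; adjust it to your image.
required="14.04.5"
if grep -qs "$required" /etc/os-release /etc/lsb-release; then
    os_check="passed"
else
    os_check="failed"
fi
echo "OS check $os_check (Ubuntu $required LTS required)"
```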

    Downloading the Installer

    To download the installer package:

    1. Log in as root to the central CSO node or server.

      The current directory is the home directory.

    2. Download the appropriate installer package from https://www.juniper.net/support/downloads/?p=cso#sw.
      • Use the Contrail Service Orchestration installer if you purchased licenses for a centralized deployment or both Network Service Orchestrator and Network Service Controller licenses for a distributed deployment.

        This option includes all the Contrail Service Orchestration graphical user interfaces (GUIs).

      • Use the Network Service Controller installer if you purchased only Network Service Controller licenses for a distributed deployment or SD-WAN implementation.

        This option includes Administration Portal and Service and Infrastructure Monitor, but not the Designer Tools.

    3. Expand the installer package, which has a name specific to its contents and the release. For example, if the name of the installer package is csoVersion.tar.gz:
      root@host:~/# tar -xvzf csoVersion.tar.gz

      The expanded package is a directory that has the same name as the installer package and contains the installation files.
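    The download-and-expand flow can be tried end to end with a placeholder archive; csoVersion.tar.gz below stands in for the real release-specific package name:

```shell
# Build a placeholder package purely for illustration; on a real
# installation you download the package from the Juniper site instead.
mkdir -p csoVersion/confs
echo "placeholder" > csoVersion/confs/provision_vm_example.conf
tar -czf csoVersion.tar.gz csoVersion
rm -r csoVersion

# Step 3: expanding the package recreates the directory of the
# same name, containing the installation files.
tar -xvzf csoVersion.tar.gz
ls csoVersion/confs
```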

    Creating a Bridge Interface for KVM

    If you use the KVM hypervisor, before you create VMs, you must create a bridge interface on the physical server that maps the primary network interface (Ethernet management interface) on each CSO node or server to a virtual interface. This action enables the VMs to communicate with the network.

    To create the bridge interface:

    1. Log in as root on the central CSO node or server.
    2. Update the index files of the software packages installed on the server to reference the latest versions.
      root@host:~/# apt-get update
    3. View the network interfaces configured on the server to obtain the name of the primary interface on the server.
      root@host:~/# ifconfig
    4. Install the libvirt software.
      root@host:~/# apt-get install libvirt-bin
    5. View the list of network interfaces, which now includes the virtual interface virbr0.
      root@host:~/# ifconfig

    6. Open the file /etc/network/interfaces and modify it to map the primary network interface to the virtual interface virbr0.

      For example, use the following configuration to map the primary interface eth0 to the virtual interface virbr0:

      # This file describes the network interfaces available on your system
      # and how to activate them. For more information, see interfaces(5).
      # The loopback network interface
      auto lo
      iface lo inet loopback
      
      # The primary network interface
      auto eth0
      iface eth0 inet manual
          up ifconfig eth0 0.0.0.0 up
      
      auto virbr0
      iface virbr0 inet static
               bridge_ports eth0
               address 192.168.1.2
               netmask 255.255.255.0
               network 192.168.1.0
               broadcast 192.168.1.255
               gateway 192.168.1.1
               dns-nameservers 8.8.8.8
               dns-search example.net
      
    7. Modify the default virtual network by customizing the file default.xml:
      1. Customize the IP address and subnet mask to match the values for the virbr0 interface in the file /etc/network/interfaces.
      2. Turn off the Spanning Tree Protocol (STP) option.
      3. Remove the NAT and DHCP configurations.

      For example:

      root@host:~/# virsh net-edit default

      Before modification:

      <network>
           <name>default</name>
           <uuid>0f04ffd0-a27c-4120-8873-854bbfb02074</uuid>
            <forward mode='nat'/>
            <bridge name='virbr0' stp='on' delay='0'/>
            <ip address='192.168.1.2' netmask='255.255.255.0'>
              <dhcp>
                <range start='192.168.1.1' end='192.168.1.254'/>
              </dhcp>
            </ip> 
      </network>
      

      After modification:

      <network>
           <name>default</name>
           <uuid>0f04ffd0-a27c-4120-8873-854bbfb02074</uuid>
           <bridge name='virbr0' stp='off' delay='0'/>
           <ip address='192.168.1.2' netmask='255.255.255.0'>
           </ip>  
      </network>
      
    8. Reboot the physical machine and log in as root again.
    9. Verify that the primary network interface is mapped to the virbr0 interface.
      root@host:~/# brctl show
      bridge name   bridge id       STP enabled interfaces
        virbr0      8000.0cc47a010808   no      em1
                                                vnet1
                                                vnet2
      
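    Before rebooting in step 8, you can sanity-check the stanza added in step 6. The sketch below runs the checks against a local copy of the file; on the server you would point them at /etc/network/interfaces itself:

```shell
# Local copy of the virbr0 stanza from step 6, for illustration only;
# replace interfaces.sample with /etc/network/interfaces on the server.
cat > interfaces.sample <<'EOF'
auto virbr0
iface virbr0 inet static
         bridge_ports eth0
         address 192.168.1.2
         netmask 255.255.255.0
         gateway 192.168.1.1
EOF

# The bridge must enslave the primary interface and carry the
# host's management address.
grep -q 'bridge_ports eth0' interfaces.sample && echo 'bridge_ports OK'
grep -q 'address 192\.168\.1\.2' interfaces.sample && echo 'address OK'
```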
     


    Creating a Data Interface for a Distributed Deployment

    For a distributed deployment, you create a second bridge interface that the VMs use to send data communications to the CPE device.

    To create a data interface:

    1. Log in as root to the central CSO node or server.
    2. Configure the new virtual interface and map it to a physical interface.

      For example:

      root@host:~/# brctl addbr virbr1
      root@host:~/# brctl addif virbr1 eth1
    3. Create an xml file with the name virbr1.xml in the directory /var/lib/libvirt/network.
    4. Paste the following content into the virbr1.xml file, and edit the file to match the actual settings for your interface.

      For example:

       <network>
            <name>virbr1</name>
            <bridge name='virbr1' stp='off' delay='0'/>
            <ip address='192.0.2.1' netmask='255.255.255.0'>
            </ip>
       </network>
    5. Open the /etc/network/interfaces file and add the details for the second interface.

      For example:

      # This file describes the network interfaces available on your system
      # and how to activate them. For more information, see interfaces(5).
      # The loopback network interface
      auto lo
      iface lo inet loopback
      
      # The primary network interface
      auto eth0
      iface eth0 inet manual 
        up ifconfig eth0 0.0.0.0 up
      
      auto eth1
      iface eth1 inet manual 
        up ifconfig eth1 0.0.0.0 up
      
      auto virbr0
      iface virbr0 inet static
               bridge_ports eth0
               address 192.168.1.2
               netmask 255.255.255.0
               network 192.168.1.0
               broadcast 192.168.1.255
               gateway 192.168.1.1
               dns-nameservers 8.8.8.8
               dns-search example.net
      auto virbr1
      iface virbr1 inet static
               bridge_ports eth1
               address 192.0.2.1
               netmask 255.255.255.0
    6. Reboot the server.
    7. Verify that the secondary network interface, eth1, is mapped to the second bridge interface, virbr1.
      root@host:~/# brctl show
      bridge name   bridge id       STP enabled interfaces
        virbr0      8000.0cc47a010808   no      em1
                                                vnet1
                                                vnet2
        virbr1      8000.0cc47a010809   no      em2
                                                vnet0
    8. Configure the IP address for the data interface.

      Because you do not specify an IP address for the data interface when you create it, you must configure one now.
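    Steps 3 and 4 can be scripted. The sketch below writes the bridge definition to a local file (on the server the target is /var/lib/libvirt/network/virbr1.xml). Giving the network its own name rather than reusing default, and omitting the UUID so that libvirt generates a fresh one, are assumptions based on libvirt's requirement that network names and UUIDs be unique:

```shell
# Write the virbr1 network definition to a local file; on the CSO
# server the file belongs in /var/lib/libvirt/network/virbr1.xml.
# The name 'virbr1' (instead of 'default') and the omitted uuid are
# assumptions: libvirt network names and UUIDs must be unique.
cat > virbr1.xml <<'EOF'
<network>
    <name>virbr1</name>
    <bridge name='virbr1' stp='off' delay='0'/>
    <ip address='192.0.2.1' netmask='255.255.255.0'>
    </ip>
</network>
EOF

# On the server, the network would then be registered and started:
#   virsh net-define virbr1.xml
#   virsh net-start virbr1
#   virsh net-autostart virbr1
echo "virbr1.xml written"
```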

    Customizing the Configuration File for the Provisioning Tool

    The provisioning tool uses a configuration file, which you must customize for your network. The configuration file is in YAML format.

    To customize the configuration file:

    1. Log in as root to the central CSO node or server.
    2. Access the confs directory that contains the example configuration files. For example, if the name of the installer directory is csoVersion:
      root@host:~/# cd csoVersion/confs
    3. Access the directory for the environment that you want to configure.

      Table 1 shows the directories that contain the example configuration files.

      Table 1: Location of Configuration Files for Provisioning VMs

      Environment                          Directory for Example Configuration File

      Demo environment without HA          cso3.2/demo/nonha/provisionvm
      Production environment without HA    cso3.2/production/nonha/provisionvm
      Demo environment with HA             cso3.2/demo/ha/provisionvm
      Production environment with HA       cso3.2/production/ha/provisionvm

    4. Make a copy of the example configuration file in the confs directory and name it provision_vm.conf.

      For example:

      root@host:~/csoVersion/confs# cp cso3.2/demo/nonha/provisionvm/provision_vm_example.conf provision_vm.conf
    5. Open the file provision_vm.conf with a text editor.
    6. In the [TARGETS] section, specify the following values for the network on which CSO resides.
      • installer_ip—IP address of the management interface of the host on which you deployed the installer.

      • ntp_servers—Comma-separated list of fully qualified domain names (FQDN) of Network Time Protocol (NTP) servers. For networks within firewalls, specify NTP servers specific to your network.

      • physical—Comma-separated list of hostnames of the CSO nodes or servers.

      • server—Comma-separated list of names of the virtual machines (VMs) on the CSO servers.

    7. Specify the following configuration values for each CSO node or server that you specified in Step 6.
      • [hostname]—Hostname of the CSO node or server

      • management_address—IP address of the Ethernet management (primary) interface in classless Internet domain routing (CIDR) notation

      • management_interface—Name of the Ethernet management interface, virbr0

      • gateway—IP address of the gateway for the host

      • dns_search—Domain for DNS operations

      • dns_servers—Comma-separated list of DNS name servers, including DNS servers specific to your network

      • hostname—Hostname of the node

      • username—Username for logging in to the node

      • password—Password for logging in to the node

      • data_interface—Name of the data interface. Leave blank for a centralized deployment and specify the name of the data interface, such as virbr1, that you configured for a distributed deployment.

    8. Except for the Junos Space Virtual Appliance and VRR VMs, specify configuration values for each VM that you specified in Step 6.
      • [VM name]—Name of the VM

      • management_address—IP address of the Ethernet management interface in CIDR notation

      • hostname—Fully qualified domain name (FQDN) of the VM

      • username—Login name of user who can manage all VMs

      • password—Password for user who can manage all VMs

      • local_user—Login name of user who can manage this VM

      • local_password—Password for user who can manage this VM

      • guest_os—Name of the operating system

      • host_server—Hostname of the CSO node or server

      • memory—Required amount of RAM in GB

      • vCPU—Required number of virtual central processing units (vCPUs)

      • enable_data_interface—True enables the VM to transmit data and false prevents the VM from transmitting data. The default is false.

    9. For the Junos Space VM, specify configuration values for each VM that you specified in Step 6.
      • [VM name]—Name of the VM.

      • management_address—IP address of the Ethernet management interface in CIDR notation.

      • web_address—Virtual IP (VIP) address of the primary Junos Space Virtual Appliance. (This setting is required only for the VM on which the primary Junos Space Virtual Appliance resides.)

      • gateway—IP address of the gateway for the host. If you do not specify a value, the value defaults to the gateway defined for the CSO node or server that hosts the VM.

      • nameserver_address—IP address of the DNS nameserver.

      • hostname—FQDN of the VM.

      • username—Username for logging in to Junos Space.

      • password—Default password for logging in to Junos Space.

      • newpassword—Password that you provide when you configure the Junos Space appliance.

      • guest_os—Name of the operating system.

      • host_server—Hostname of the CSO node or server.

      • memory—Required amount of RAM in GB.

      • vCPU—Required number of virtual central processing units (vCPUs).

    10. Save the file.
    11. Run the following command to create and start the VMs.
      root@host:~/# ./provision_vm.sh
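    Before running provision_vm.sh, a rough consistency check of provision_vm.conf can catch missing sections. The sketch below is not part of the installer; it writes a tiny sample file for illustration, and on a real system you would point conf at your actual provision_vm.conf:

```shell
# Write a miniature provision_vm.conf for illustration; set conf to
# the path of your real file instead.
conf=provision_vm.conf
cat > "$conf" <<'EOF'
[TARGETS]
installer_ip =
physical = cso-host
server = csp-central-infravm

[cso-host]
management_address = 192.168.1.2/24

[csp-central-infravm]
management_address = 192.168.1.4/24
host_server = cso-host
EOF

# Warn if a mandatory value such as installer_ip is still empty.
if grep -Eq '^installer_ip *= *$' "$conf"; then
    echo "warning: installer_ip is not set"
fi

# Check that every VM named in 'server' has a section of its own.
for vm in $(grep '^server' "$conf" | cut -d= -f2 | tr ',' ' '); do
    if grep -q "^\[$vm\]" "$conf"; then
        echo "$vm: section found"
    else
        echo "$vm: section missing"
    fi
done
```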

    The following examples show customized configuration files for the different deployments:

    Sample Configuration File for Provisioning VMs in a Demo Environment without HA

    # This config file is used to provision KVM-based virtual machines using libvirt.
    
    [TARGETS]
    # Specify the management IP address of the primary (installer) host
    
    installer_ip =
    
    ntp_servers = ntp.juniper.net
    
    # The physical server where the Virtual Machines should be provisioned
    # There can be one or more physical servers on
    # which virtual machines can be provisioned
    physical = cso-host
    
    # The list of virtual servers to be provisioned.
    server = csp-central-infravm, csp-central-msvm, csp-central-k8mastervm, csp-regional-infravm, csp-regional-msvm, csp-regional-k8mastervm, csp-installer-vm, csp-contrailanalytics-1, csp-vrr-vm, csp-regional-sblb
    
    
    # Physical Server Details
    [cso-host]
    management_address = 192.168.1.2/24
    management_interface = virbr0
    gateway = 192.168.1.1
    dns_search = example.net
    dns_servers = 192.168.10.1
    hostname = cso-host
    username = root
    password = passw0rd
    data_interface =
    
    # VM Details
    
    [csp-central-infravm]
    management_address = 192.168.1.4/24
    hostname = centralinfravm.example.net
    username = root
    password = passw0rd
    local_user = infravm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-host
    memory = 32768
    vcpu = 4
    enable_data_interface = false
    
    [csp-central-msvm]
    management_address = 192.168.1.5/24
    hostname = centralmsvm.example.net
    username = root
    password = passw0rd
    local_user = msvm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-host
    memory = 32768
    vcpu = 4
    enable_data_interface = false
    
    [csp-central-k8mastervm]
    management_address = 192.168.1.14/24
    hostname = centralk8mastervm.example.net
    username = root
    password = passw0rd
    local_user = msvm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-host
    memory = 8192
    vcpu = 4
    enable_data_interface = false
    
    [csp-regional-infravm]
    management_address = 192.168.1.6/24
    hostname = regionalinfravm.example.net
    username = root
    password = passw0rd
    local_user = infravm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-host
    memory = 24576
    vcpu = 4
    enable_data_interface = false
    
    [csp-regional-msvm]
    management_address = 192.168.1.7/24
    hostname = regionalmsvm.example.net
    username = root
    password = passw0rd
    local_user = msvm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-host
    memory = 24576
    vcpu = 4
    enable_data_interface = false
    
    [csp-regional-k8mastervm]
    management_address = 192.168.1.15/24
    hostname = regionalk8mastervm.example.net
    username = root
    password = passw0rd
    local_user = msvm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-host
    memory = 8192
    vcpu = 4
    enable_data_interface = false
    
    [csp-installer-vm]
    management_address = 192.168.1.10/24
    hostname = installervm.example.net
    username = root
    password = passw0rd
    local_user = installervm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-host
    memory = 32768
    vcpu = 4
    enable_data_interface = false
    
    [csp-contrailanalytics-1]
    management_address = 192.168.1.11/24
    hostname = canvm.example.net
    username = root
    password = passw0rd
    local_user = installervm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-host
    memory = 49152
    vcpu = 8
    enable_data_interface = false
    
    
    [csp-regional-sblb]
    management_address = 192.168.1.12/24
    hostname = regional-sblb.example.net
    username = root
    password = passw0rd
    local_user = sblb
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-host
    memory = 8192
    vcpu = 4
    enable_data_interface = true
    
    
    [csp-vrr-vm]
    management_address = 192.168.1.13/24
    hostname = vrr.example.net
    gateway = 192.168.1.1
    newpassword = passw0rd
    guest_os = vrr
    host_server = cso-host
    memory = 8192
    vcpu = 4
    
    
    [csp-space-vm]
    management_address = 192.168.1.14/24
    web_address = 192.168.1.15/24
    gateway = 192.168.1.1
    nameserver_address = 192.168.1.254
    hostname = spacevm.example.net
    username = admin
    password = abc123
    newpassword = jnpr123!
    guest_os = space
    host_server = cso-host
    memory = 16384
    vcpu = 4
    

    Sample Configuration File for Provisioning VMs in a Production Environment Without HA

    # This config file is used to provision KVM-based virtual machines using libvirt.
    
    [TARGETS]
    # Specify the management IP address of the primary (installer) host
    
    installer_ip =
    
    ntp_servers = ntp.juniper.net
    
    # The physical server where the Virtual Machines should be provisioned
    # There can be one or more physical servers on
    # which virtual machines can be provisioned
    physical = cso-central-host, cso-regional-host
    
    # Note: Central and Regional physical servers are used as "csp-central-ms" and "csp-regional-ms" servers.
    
    # The list of servers to be provisioned; include the Contrail Analytics servers in the "server" list as well.
    server = csp-central-infravm, csp-regional-infravm, csp-installer-vm, csp-space-vm, csp-contrailanalytics-1, csp-central-elkvm, csp-regional-elkvm, csp-central-msvm, csp-regional-msvm, csp-vrr-vm, csp-regional-sblb
    
    
    # Physical Server Details
    [cso-central-host]
    management_address = 192.168.1.2/24
    management_interface = virbr0
    gateway = 192.168.1.1
    dns_search = example.net
    dns_servers = 192.168.10.1
    hostname = cso-central-host
    username = root
    password = passw0rd
    data_interface =
    
    [cso-regional-host]
    management_address = 192.168.1.3/24
    management_interface = virbr0
    gateway = 192.168.1.1
    dns_search = example.net
    dns_servers = 192.168.10.1
    hostname = cso-regional-host
    username = root
    password = passw0rd
    data_interface =
    
    
    [csp-contrailanalytics-1]
    management_address = 192.168.1.9/24
    management_interface =
    hostname = canvm.example.net
    username = root
    password = passw0rd
    vm = false
    
    
    # VM Details
    
    [csp-central-infravm]
    management_address = 192.168.1.4/24
    hostname = centralinfravm.example.net
    username = root
    password = passw0rd
    local_user = infravm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-central-host
    memory = 65536
    vcpu = 16
    enable_data_interface = false
    
    [csp-regional-infravm]
    management_address = 192.168.1.5/24
    hostname = regionalinfravm.example.net
    username = root
    password = passw0rd
    local_user = infravm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-regional-host
    memory = 65536
    vcpu = 16
    enable_data_interface = false
    
    [csp-space-vm]
    management_address = 192.168.1.6/24
    web_address = 192.168.1.7/24
    gateway = 192.168.1.1
    nameserver_address = 192.168.1.254
    hostname = spacevm.example.net
    username = admin
    password = abc123
    newpassword = jnpr123!
    guest_os = space
    host_server = cso-regional-host
    memory = 32768
    vcpu = 4
    
    [csp-installer-vm]
    management_address = 192.168.1.8/24
    hostname = installer.example.net
    username = root
    password = passw0rd
    local_user = infravm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-central-host
    memory = 65536
    vcpu = 4
    enable_data_interface = false
    
    [csp-central-elkvm]
    management_address = 192.168.1.10/24
    hostname = centralelkvm.example.net
    username = root
    password = passw0rd
    local_user = elkvm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-central-host
    memory = 32768
    vcpu = 4
    enable_data_interface = false
    
    
    [csp-regional-elkvm]
    management_address = 192.168.1.11/24
    hostname = regionalelkvm.example.net
    username = root
    password = passw0rd
    local_user = elkvm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-regional-host
    memory = 32768
    vcpu = 4
    enable_data_interface = false
    
    [csp-central-msvm]
    management_address = 192.168.1.12/24
    hostname = centralmsvm.example.net
    username = root
    password = passw0rd
    local_user = msvm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-central-host
    memory = 65536
    vcpu = 16
    enable_data_interface = false
    
    [csp-regional-msvm]
    management_address = 192.168.1.13/24
    hostname = regionalmsvm.example.net
    username = root
    password = passw0rd
    local_user = msvm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-regional-host
    memory = 65536
    vcpu = 16
    enable_data_interface = false
    
    [csp-regional-sblb]
    management_address = 192.168.1.14/24
    hostname = regional-sblb.example.net
    username = root
    password = passw0rd
    local_user = sblb
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-regional-host
    memory = 32768
    vcpu = 4
    enable_data_interface = true
    
    
    [csp-vrr-vm]
    management_address = 192.168.1.15/24
    hostname = vrr.example.net
    gateway = 192.168.1.1
    newpassword = passw0rd
    guest_os = vrr
    host_server = cso-regional-host
    memory = 8192
    vcpu = 4

    Sample Configuration File for Provisioning VMs in a Demo Environment with HA

    # This config file is used to provision KVM-based virtual machines using libvirt.
    
    [TARGETS]
    # Specify the management IP address of the primary (installer) host
    
    installer_ip =
    
    ntp_servers = ntp.juniper.net
    
    # The physical server where the Virtual Machines should be provisioned
    # There can be one or more physical servers on
    # which virtual machines can be provisioned
    physical = cso-host1, cso-host2, cso-host3
    
    
    # The list of virtual servers to be provisioned.
    server = csp-central-infravm1, csp-central-infravm2, csp-central-infravm3, csp-central-msvm1, csp-central-msvm2, csp-regional-infravm1, csp-regional-infravm2, csp-regional-infravm3, csp-regional-msvm1, csp-regional-msvm2, csp-contrailanalytics-1, csp-central-lbvm1, csp-central-lbvm2, csp-regional-lbvm1, csp-regional-lbvm2, csp-space-vm, csp-installer-vm, csp-vrr-vm, csp-regional-sblb1, csp-regional-sblb2
    
    
    # Physical Server Details
    [cso-host1]
    management_address = 192.168.1.2/24
    management_interface = virbr0
    gateway = 192.168.1.1
    dns_search = example.net
    dns_servers = 192.168.10.1
    hostname = cso-host1
    username = root
    password = passw0rd
    data_interface =
    
    [cso-host2]
    management_address = 192.168.1.3/24
    management_interface = virbr0
    gateway = 192.168.1.1
    dns_search = example.net
    dns_servers = 192.168.10.1
    hostname = cso-host2
    username = root
    password = passw0rd
    data_interface =
    
    
    [cso-host3]
    management_address = 192.168.1.4/24
    management_interface = virbr0
    gateway = 192.168.1.1
    dns_search = example.net
    dns_servers = 192.168.10.1
    hostname = cso-host3
    username = root
    password = passw0rd
    data_interface =
    
    # VM Details
    
    [csp-central-infravm1]
    management_address = 192.168.1.5/24
    hostname = centralinfravm1.example.net
    username = root
    password = passw0rd
    local_user = infravm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-host1
    memory = 49152
    vcpu = 8
    enable_data_interface = false
    
    [csp-central-infravm2]
    management_address = 192.168.1.6/24
    hostname = centralinfravm2.example.net
    username = root
    password = passw0rd
    local_user = infravm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-host2
    memory = 49152
    vcpu = 8
    enable_data_interface = false
    
    [csp-central-infravm3]
    management_address = 192.168.1.7/24
    hostname = centralinfravm3.example.net
    username = root
    password = passw0rd
    local_user = infravm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-host3
    memory = 49152
    vcpu = 8
    enable_data_interface = false
    
    [csp-central-msvm1]
    management_address = 192.168.1.8/24
    hostname = centralmsvm1.example.net
    username = root
    password = passw0rd
    local_user = msvm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-host1
    memory = 49152
    vcpu = 8
    enable_data_interface = false
    
    [csp-central-msvm2]
    management_address = 192.168.1.9/24
    hostname = centralmsvm2.example.net
    username = root
    password = passw0rd
    local_user = msvm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-host2
    memory = 49152
    vcpu = 8
    enable_data_interface = false
    
    [csp-regional-infravm1]
    management_address = 192.168.1.10/24
    hostname = regionalinfravm1.example.net
    username = root
    password = passw0rd
    local_user = infravm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-host1
    memory = 49152
    vcpu = 8
    enable_data_interface = false
    
    [csp-regional-infravm2]
    management_address = 192.168.1.11/24
    hostname = regionalinfravm2.example.net
    username = root
    password = passw0rd
    local_user = infravm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-host2
    memory = 49152
    vcpu = 8
    enable_data_interface = false
    
    [csp-regional-infravm3]
    management_address = 192.168.1.12/24
    hostname = regionalinfravm3.example.net
    username = root
    password = passw0rd
    local_user = infravm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-host3
    memory = 49152
    vcpu = 8
    enable_data_interface = false
    
    [csp-regional-msvm1]
    management_address = 192.168.1.13/24
    hostname = regionalmsvm1.example.net
    username = root
    password = passw0rd
    local_user = msvm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-host1
    memory = 49152
    vcpu = 8
    enable_data_interface = false
    
    [csp-regional-msvm2]
    management_address = 192.168.1.14/24
    hostname = regionalmsvm2.example.net
    username = root
    password = passw0rd
    local_user = msvm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-host2
    memory = 49152
    vcpu = 8
    enable_data_interface = false
    
    [csp-space-vm]
    management_address = 192.168.1.15/24
    web_address = 192.168.1.16/24
    gateway = 192.168.1.1
    nameserver_address = 192.168.1.254
    hostname = spacevm.example.net
    username = admin
    password = abc123
    newpassword = jnpr123!
    guest_os = space
    host_server = cso-host3
    memory = 16384
    vcpu = 4
    
    
    [csp-installer-vm]
    management_address = 192.168.1.17/24
    hostname = installervm.example.net
    username = root
    password = passw0rd
    local_user = infravm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-host1
    memory = 32768
    vcpu = 4
    enable_data_interface = false
    
    
    [csp-contrailanalytics-1]
    management_address = 192.168.1.18/24
    hostname = can1.example.net
    username = root
    password = passw0rd
    local_user = installervm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-host3
    memory = 65536
    vcpu = 16
    enable_data_interface = false
    
    [csp-central-lbvm1]
    management_address = 192.168.1.19/24
    hostname = centrallbvm1.example.net
    username = root
    password = passw0rd
    local_user = lbvm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-host1
    memory = 24576
    vcpu = 4
    enable_data_interface = false
    
    
    [csp-central-lbvm2]
    management_address = 192.168.1.20/24
    hostname = centrallbvm2.example.net
    username = root
    password = passw0rd
    local_user = lbvm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-host3
    memory = 24576
    vcpu = 4
    enable_data_interface = false
    
    [csp-regional-lbvm1]
    management_address = 192.168.1.21/24
    hostname = regionallbvm1.example.net
    username = root
    password = passw0rd
    local_user = lbvm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-host2
    memory = 24576
    vcpu = 4
    enable_data_interface = false
    
    [csp-regional-lbvm2]
    management_address = 192.168.1.22/24
    hostname = regionallbvm2.example.net
    username = root
    password = passw0rd
    local_user = lbvm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-host2
    memory = 24576
    vcpu = 4
    enable_data_interface = false
    
    [csp-vrr-vm]
    management_address = 192.168.1.23/24
    hostname = vrr.example.net
    gateway = 192.168.1.1
    newpassword = passw0rd
    guest_os = vrr
    host_server = cso-host2
    memory = 8192
    vcpu = 4
    
    [csp-regional-sblb1]
    management_address = 192.168.1.24/24
    hostname = regional-sblb1.example.net
    username = root
    password = passw0rd
    local_user = sblb
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-host3
    memory = 24576
    vcpu = 4
    enable_data_interface = true
    
    [csp-regional-sblb2]
    management_address = 192.168.1.25/24
    hostname = regional-sblb2.example.net
    username = root
    password = passw0rd
    local_user = sblb
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-host3
    memory = 24576
    vcpu = 4
    enable_data_interface = true

    Sample Configuration File for Provisioning VMs in a Production Environment with HA

    # This config file is used to provision KVM-based virtual machines using the libvirt virtualization manager.
    
    [TARGETS]
    # Specify the management IP address of the primary (installer) host.
    
    installer_ip =
    
    ntp_servers = ntp.juniper.net
    
    # The physical servers on which the virtual machines are provisioned.
    # You can specify one or more physical servers.
    physical = cso-central-host1, cso-central-host2, cso-central-host3, cso-regional-host1, cso-regional-host2, cso-regional-host3
    
    
    # The list of servers to be provisioned. Include the Contrail Analytics servers in the "server" list.
    server = csp-central-infravm1, csp-central-infravm2, csp-central-infravm3, csp-regional-infravm1, csp-regional-infravm2, csp-regional-infravm3, csp-central-lbvm1, csp-central-lbvm2, csp-central-lbvm3, csp-regional-lbvm1, csp-regional-lbvm2, csp-regional-lbvm3, csp-space-vm, csp-installer-vm, csp-contrailanalytics-1, csp-contrailanalytics-2, csp-contrailanalytics-3, csp-central-elkvm1, csp-central-elkvm2, csp-central-elkvm3, csp-regional-elkvm1, csp-regional-elkvm2, csp-regional-elkvm3, csp-central-msvm1, csp-central-msvm2, csp-central-msvm3, csp-regional-msvm1, csp-regional-msvm2, csp-regional-msvm3, csp-vrr-vm, csp-regional-sblb1, csp-regional-sblb2, csp-regional-sblb3
    
    # Physical Server Details
    [cso-central-host1]
    management_address = 192.168.1.2/24
    management_interface = virbr0
    gateway = 192.168.1.1
    dns_search = example.net
    dns_servers = 192.168.10.1
    hostname = cso-central-host1
    username = root
    password = passw0rd
    data_interface =
    
    [cso-central-host2]
    management_address = 192.168.1.3/24
    management_interface = virbr0
    gateway = 192.168.1.1
    dns_search = example.net
    dns_servers = 192.168.10.1
    hostname = cso-central-host2
    username = root
    password = passw0rd
    data_interface =
    
    [cso-central-host3]
    management_address = 192.168.1.4/24
    management_interface = virbr0
    gateway = 192.168.1.1
    dns_search = example.net
    dns_servers = 192.168.10.1
    hostname = cso-central-host3
    username = root
    password = passw0rd
    data_interface =
    
    
    [cso-regional-host1]
    management_address = 192.168.1.5/24
    management_interface = virbr0
    gateway = 192.168.1.1
    dns_search = example.net
    dns_servers = 192.168.10.1
    hostname = cso-regional-host1
    username = root
    password = passw0rd
    data_interface =
    
    [cso-regional-host2]
    management_address = 192.168.1.6/24
    management_interface = virbr0
    gateway = 192.168.1.1
    dns_search = example.net
    dns_servers = 192.168.10.1
    hostname = cso-regional-host2
    username = root
    password = passw0rd
    data_interface =
    
    [cso-regional-host3]
    management_address = 192.168.1.7/24
    management_interface = virbr0
    gateway = 192.168.1.1
    dns_search = example.net
    dns_servers = 192.168.10.1
    hostname = cso-regional-host3
    username = root
    password = passw0rd
    data_interface =
    
    
    [csp-contrailanalytics-1]
    management_address = 192.168.1.17/24
    management_interface =
    hostname = can1.example.net
    username = root
    password = passw0rd
    vm = false
    
    [csp-contrailanalytics-2]
    management_address = 192.168.1.18/24
    management_interface =
    hostname = can2.example.net
    username = root
    password = passw0rd
    vm = false
    
    [csp-contrailanalytics-3]
    management_address = 192.168.1.19/24
    management_interface =
    hostname = can3.example.net
    username = root
    password = passw0rd
    vm = false
    
    # VM Details
    
    [csp-central-infravm1]
    management_address = 192.168.1.8/24
    hostname = centralinfravm1.example.net
    username = root
    password = passw0rd
    local_user = infravm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-central-host1
    memory = 65536
    vcpu = 16
    enable_data_interface = false
    
    [csp-central-infravm2]
    management_address = 192.168.1.9/24
    hostname = centralinfravm2.example.net
    username = root
    password = passw0rd
    local_user = infravm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-central-host2
    memory = 65536
    vcpu = 16
    enable_data_interface = false
    
    [csp-central-infravm3]
    management_address = 192.168.1.10/24
    hostname = centralinfravm3.example.net
    username = root
    password = passw0rd
    local_user = infravm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-central-host3
    memory = 65536
    vcpu = 16
    enable_data_interface = false
    
    [csp-regional-infravm1]
    management_address = 192.168.1.11/24
    hostname = regionalinfravm1.example.net
    username = root
    password = passw0rd
    local_user = infravm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-regional-host1
    memory = 65536
    vcpu = 16
    enable_data_interface = false
    
    [csp-regional-infravm2]
    management_address = 192.168.1.12/24
    hostname = regionalinfravm2.example.net
    username = root
    password = passw0rd
    local_user = infravm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-regional-host2
    memory = 65536
    vcpu = 16
    enable_data_interface = false
    
    [csp-regional-infravm3]
    management_address = 192.168.1.13/24
    hostname = regionalinfravm3.example.net
    username = root
    password = passw0rd
    local_user = infravm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-regional-host3
    memory = 65536
    vcpu = 16
    enable_data_interface = false
    
    
    [csp-space-vm]
    management_address = 192.168.1.14/24
    web_address = 192.168.1.15/24
    gateway = 192.168.1.1
    nameserver_address = 192.168.1.254
    hostname = spacevm.example.net
    username = admin
    password = abc123
    newpassword = jnpr123!
    guest_os = space
    host_server = cso-central-host2
    memory = 32768
    vcpu = 4
    
    [csp-installer-vm]
    management_address = 192.168.1.16/24
    hostname = installervm.example.net
    username = root
    password = passw0rd
    local_user = installervm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-central-host1
    memory = 32768
    vcpu = 4
    enable_data_interface = false
    
    [csp-central-lbvm1]
    management_address = 192.168.1.20/24
    hostname = centrallbvm1.example.net
    username = root
    password = passw0rd
    local_user = lbvm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-central-host1
    memory = 32768
    vcpu = 4
    enable_data_interface = false
    
    
    [csp-central-lbvm2]
    management_address = 192.168.1.21/24
    hostname = centrallbvm2.example.net
    username = root
    password = passw0rd
    local_user = lbvm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-central-host2
    memory = 32768
    vcpu = 4
    enable_data_interface = false
    
    [csp-central-lbvm3]
    management_address = 192.168.1.22/24
    hostname = centrallbvm3.example.net
    username = root
    password = passw0rd
    local_user = lbvm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-central-host3
    memory = 32768
    vcpu = 4
    enable_data_interface = false
    
    [csp-regional-lbvm1]
    management_address = 192.168.1.23/24
    hostname = regionallbvm1.example.net
    username = root
    password = passw0rd
    local_user = lbvm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-regional-host1
    memory = 32768
    vcpu = 4
    enable_data_interface = false
    
    [csp-regional-lbvm2]
    management_address = 192.168.1.24/24
    hostname = regionallbvm2.example.net
    username = root
    password = passw0rd
    local_user = lbvm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-regional-host2
    memory = 32768
    vcpu = 4
    enable_data_interface = false
    
    [csp-regional-lbvm3]
    management_address = 192.168.1.25/24
    hostname = regionallbvm3.example.net
    username = root
    password = passw0rd
    local_user = lbvm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-regional-host3
    memory = 32768
    vcpu = 4
    enable_data_interface = false
    
    [csp-central-elkvm1]
    management_address = 192.168.1.26/24
    hostname = centralelkvm1.example.net
    username = root
    password = passw0rd
    local_user = elkvm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-central-host1
    memory = 32768
    vcpu = 4
    enable_data_interface = false
    
    [csp-central-elkvm2]
    management_address = 192.168.1.27/24
    hostname = centralelkvm2.example.net
    username = root
    password = passw0rd
    local_user = elkvm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-central-host2
    memory = 32768
    vcpu = 4
    enable_data_interface = false
    
    [csp-central-elkvm3]
    management_address = 192.168.1.28/24
    hostname = centralelkvm3.example.net
    username = root
    password = passw0rd
    local_user = elkvm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-central-host3
    memory = 32768
    vcpu = 4
    enable_data_interface = false
    
    
    [csp-regional-elkvm1]
    management_address = 192.168.1.29/24
    hostname = regionalelkvm1.example.net
    username = root
    password = passw0rd
    local_user = elkvm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-regional-host1
    memory = 32768
    vcpu = 4
    enable_data_interface = false
    
    [csp-regional-elkvm2]
    management_address = 192.168.1.30/24
    hostname = regionalelkvm2.example.net
    username = root
    password = passw0rd
    local_user = elkvm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-regional-host2
    memory = 32768
    vcpu = 4
    enable_data_interface = false
    
    [csp-regional-elkvm3]
    management_address = 192.168.1.31/24
    hostname = regionalelkvm3.example.net
    username = root
    password = passw0rd
    local_user = elkvm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-regional-host3
    memory = 32768
    vcpu = 4
    enable_data_interface = false
    
    [csp-central-msvm1]
    management_address = 192.168.1.32/24
    hostname = centralmsvm1.example.net
    username = root
    password = passw0rd
    local_user = msvm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-central-host1
    memory = 65536
    vcpu = 16
    enable_data_interface = false
    
    [csp-central-msvm2]
    management_address = 192.168.1.33/24
    hostname = centralmsvm2.example.net
    username = root
    password = passw0rd
    local_user = msvm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-central-host2
    memory = 65536
    vcpu = 16
    enable_data_interface = false
    
    [csp-central-msvm3]
    management_address = 192.168.1.34/24
    hostname = centralmsvm3.example.net
    username = root
    password = passw0rd
    local_user = msvm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-central-host3
    memory = 65536
    vcpu = 16
    enable_data_interface = false
    
    [csp-regional-msvm1]
    management_address = 192.168.1.35/24
    hostname = regionalmsvm1.example.net
    username = root
    password = passw0rd
    local_user = msvm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-regional-host1
    memory = 65536
    vcpu = 16
    enable_data_interface = false
    
    [csp-regional-msvm2]
    management_address = 192.168.1.36/24
    hostname = regionalmsvm2.example.net
    username = root
    password = passw0rd
    local_user = msvm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-regional-host2
    memory = 65536
    vcpu = 16
    enable_data_interface = false
    
    [csp-regional-msvm3]
    management_address = 192.168.1.37/24
    hostname = regionalmsvm3.example.net
    username = root
    password = passw0rd
    local_user = msvm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-regional-host3
    memory = 65536
    vcpu = 16
    enable_data_interface = false
    
    [csp-regional-sblb1]
    management_address = 192.168.1.38/24
    hostname = regional-sblb1.example.net
    username = root
    password = passw0rd
    local_user = sblb
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-regional-host1
    memory = 32768
    vcpu = 4
    enable_data_interface = true
    
    [csp-regional-sblb2]
    management_address = 192.168.1.39/24
    hostname = regional-sblb2.example.net
    username = root
    password = passw0rd
    local_user = sblb
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-regional-host2
    memory = 32768
    vcpu = 4
    enable_data_interface = true
    
    [csp-regional-sblb3]
    management_address = 192.168.1.40/24
    hostname = regional-sblb3.example.net
    username = root
    password = passw0rd
    local_user = sblb
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-regional-host3
    memory = 32768
    vcpu = 4
    enable_data_interface = true
    
    
    [csp-vrr-vm]
    management_address = 192.168.1.41/24
    hostname = vrr.example.net
    gateway = 192.168.1.1
    newpassword = passw0rd
    guest_os = vrr
    host_server = cso-regional-host3
    memory = 32768
    vcpu = 4
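
    Before running the provisioning tool against a file like the sample above, a quick mechanical check can catch copy-and-paste slips. The following sketch prints any [csp-*] VM section that is missing a host_server entry; the here-doc stands in for your real configuration file, and the section names in it are illustrative only.

    ```shell
    # Flag [csp-*] sections that lack a host_server line. Replace the
    # here-doc with your provisioning configuration file.
    awk '
      /^\[csp-/      { if (sect != "" && !seen) print sect " is missing host_server"
                       sect = $1; seen = 0 }
      /^host_server/ { seen = 1 }
      END            { if (sect != "" && !seen) print sect " is missing host_server" }
    ' <<'EOF'
    [csp-central-infravm1]
    host_server = cso-central-host1
    [csp-vrr-vm]
    EOF
    ```

    Run the same awk program with the filename in place of the here-doc to check the full configuration before provisioning.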

    Provisioning VMs with the Provisioning Tool

    If you use the KVM hypervisor on the CSO node or server, you can use the provisioning tool to:

    • Create and configure the VMs for the CSO and Junos Space components.

    • Install the operating system in the VMs:

      • Ubuntu in the CSO VMs

      • Junos Space Network Management Platform software in the Junos Space VM

    Note: If you use another supported hypervisor or already created VMs that you want to use, provision the VMs manually.

    To provision VMs with the provisioning tool:

    1. Log in as root to the central CSO node or server.
    2. Access the directory for the installer. For example, if the name of the installer directory is csoVersion:
      root@host:~/# cd ~/csoVersion
    3. Run the provisioning tool.
      root@host:~/csoVersion/# ./provision_vm.sh

      The provisioning begins.

    4. During provisioning, monitor the detailed messages that the tool writes to the log files:
      • provision_vm.log—Contains details about the provisioning process

      • provision_vm_console.log—Contains details about the VMs

      • provision_vm_error.log—Contains details about errors that occur during provisioning

      For example:

      root@host:~/csoVersion/# cd logs
      root@host:~/csoVersion/logs/# tailf provision_vm.log

    Manually Provisioning VMs on the Contrail Service Orchestration Node or Server

    If you use the VMware ESXi hypervisor, you must provision the VMs manually. If you use the KVM hypervisor, you can use the provisioning tool to provision the VMs automatically.

    To manually provision VMs on each CSO node or server:

    1. Review Software Tested for the COTS Nodes and Servers for the required operating system for the physical servers and VMs. You may need to use multiple operating systems.
    2. Download and configure the specified Ubuntu images on your servers.
      1. Copy the required Ubuntu images from the Ubuntu website to separate directories on your server.
      2. Create an Ubuntu Cloud virtual machine disk (VMDK) for each of the images that you downloaded.

        For example:

        root@host:~/# cd ubuntu-version
        root@host:~/# qemu-img convert -O vmdk ubuntu-14.04-server-cloudimg-amd64-disk1.img ubuntu-14.04-server-cloudimg-amd64-disk1.vmdk
      3. Specify the default password for Ubuntu by creating a text file called user-data with the following content in each of the Ubuntu directories.
        #cloud-config
        password: ubuntu
      4. Specify the default local hostname for Ubuntu by creating a text file called meta-data with the following content in each of the Ubuntu directories.
        local-hostname: localhost
      5. Create a file called seed.iso that contains the default password and hostname in each of the Ubuntu directories.
        root@host:~/# genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data
      6. Create the VMs manually using the appropriate image for the type of VM. See Software Tested for the COTS Nodes and Servers for the required operating system for each VM.
    3. On each CSO node or server, create VMs or reconfigure existing VMs:

      See Minimum Requirements for Servers and VMs for details of the VMs and associated resources required for each environment.

    4. Configure fully qualified domain names (FQDNs) and specify IP addresses for the Ethernet management interfaces on each VM.
    5. Configure read, write, and execute permissions for the users of the VMs, so that the installer can access the VMs when you deploy CSO.
    6. Configure DNS and Internet access for the VMs.
    7. If MySQL software is installed in the VMs for Service and Infrastructure Monitor, remove it.

      When you install CSO, the installer deploys and configures MySQL servers in these VMs. If a VM already contains MySQL software, the installer might not set up the VM correctly.
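
      A minimal sketch of that cleanup, assuming a stock Ubuntu 14.04 VM (which ships MySQL 5.5 packages); the exact package names are an assumption, so confirm them with dpkg before purging:

      ```shell
      # Remove any preinstalled MySQL server packages so the CSO installer
      # can deploy and configure its own MySQL instance cleanly.
      if dpkg -l 2>/dev/null | grep -q '^ii  mysql-server'; then
          apt-get --yes purge mysql-server mysql-server-core-5.5
          apt-get --yes autoremove
      else
          echo "no mysql-server package installed"
      fi
      ```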

    8. Install OpenSSH on the VMs.
      1. Issue the following commands to install the OpenSSH server and client tools.
        root@host:~/# apt-get install openssh-server
        root@host:~/# apt-get install openssh-client
      2. Set the PermitRootLogin value in the /etc/ssh/sshd_config file to yes.

        This action enables root login through Secure Shell (SSH).
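
      One way to script that change, sketched under the assumption of a stock Ubuntu sshd_config with a single PermitRootLogin directive (commented or not); back up the file first and restart the SSH service so the change takes effect:

      ```shell
      # Enable root login over SSH by rewriting the PermitRootLogin directive.
      cfg=/etc/ssh/sshd_config
      cp "$cfg" "$cfg.bak"
      sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' "$cfg"
      grep '^PermitRootLogin yes' "$cfg" && service ssh restart
      ```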

    Verifying Connectivity of the VMs

    From each VM, verify that you can ping the IP addresses and hostnames of all the other servers, nodes, and VMs in the CSO deployment.

    Caution: If the VMs cannot communicate with all the other hosts in the deployment, the installation can fail.
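
    The check can be scripted. In this sketch the host list is an assumption drawn from the sample configuration; substitute the management addresses and hostnames from your own deployment.

    ```shell
    # Report which hosts answer a single ping; an UNREACHABLE result
    # indicates a network problem to fix before installing CSO.
    for host in 192.168.1.2 192.168.1.5 centralinfravm1.example.net; do
        if ping -c 1 -W 2 "$host" >/dev/null 2>&1; then
            echo "$host reachable"
        else
            echo "$host UNREACHABLE"
        fi
    done
    ```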

    Copying the Installer Package to the Installer VM

    After you have provisioned the VMs, copy the installer package from the central server to the installer VM, and expand the image.
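
    For example, with scp; the package filename and the installer VM address below are placeholders based on the sample configuration, not fixed values.

    ```shell
    # Copy the installer package to the installer VM and expand it there.
    # Filename and address are hypothetical; use your downloaded package
    # and the management address of your installer VM.
    scp Contrail_Service_Orchestration_3.0.tar.gz root@192.168.1.16:/root/
    ssh root@192.168.1.16 'cd /root && tar -xzf Contrail_Service_Orchestration_3.0.tar.gz'
    ```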

    Modified: 2018-05-10