
    Provisioning VMs on Contrail Service Orchestration Nodes or Servers

    Several Virtual Machines (VMs) on the central and regional Contrail Service Orchestration nodes or servers host the infrastructure services and some other components. You can:

    • Use the provisioning tool to create and configure the VMs if you use the KVM hypervisor on a Contrail Service Orchestration node or server.

      The tool also installs Ubuntu in the VMs.

    • Create and configure the VMs manually if you use a supported hypervisor other than KVM on the Contrail Service Orchestration node or server.
    • Manually configure VMs that you already created on a Contrail Service Orchestration node or server.

    The VMs required on a Contrail Service Orchestration node or server depend on whether you configure a demonstration (demo) environment or a production environment.

    Table 1 shows complete details about the VMs required for a production environment. This configuration requires:

    • One central Contrail Service Orchestration node or server, cso-central-host
    • One regional Contrail Service Orchestration node or server, cso-regional-host

    Table 1: Details of VMs and Microservice Collections for a Production Environment

    csp-central-infravm (VM)

    • Contrail Service Orchestration node or server: cso-central-host
    • Components that the installer places in the VM: Kibana, Logstash, Elasticsearch, Cassandra, Zookeeper, MariaDB, Keystone, RabbitMQ, Redis, DMS
    • Resources required: 8 vCPUs, 48 GB RAM, 500 GB hard disk storage
    • Ports to open: See Table 3.

    csp-central-ms (microservice collection)

    • Contrail Service Orchestration node or server: cso-central-host
    • Components: all microservices, including GUI applications, plus HAProxy Configuration, ETCD, Kubemaster, Kubeminion, and the SIM client
    • Resources required: 48 CPUs, 256 GB RAM
    • Ports to open: See Table 3.

    csp-regional-infravm (VM)

    • Contrail Service Orchestration node or server: cso-regional-host
    • Components that the installer places in the VM: Cassandra, Elasticsearch, Zookeeper, RabbitMQ, Kibana, Logstash, Redis
    • Resources required: 8 vCPUs, 48 GB RAM, 500 GB hard disk storage
    • Ports to open: See Table 3.

    csp-regional-ms (microservice collection)

    • Contrail Service Orchestration node or server: cso-regional-host
    • Components: all microservices, including GUI applications, plus HAProxy Configuration, ETCD, Kubemaster, Kubeminion, and the SIM client
    • Resources required: 48 CPUs, 256 GB RAM
    • Ports to open: See Table 3.

    csp-space-vm (VM)

    • Contrail Service Orchestration node or server: cso-regional-host
    • Components that the installer places in the VM: Junos Space Virtual Appliance and database
    • Resources required: 4 vCPUs, 32 GB RAM, 300 GB hard disk storage
    • Ports to open: See Table 3.

    csp-contrail-analytics-vm (VM)

    • Contrail Service Orchestration node or server: cso-regional-host
    • Components: For a distributed deployment, administrators install Contrail Analytics (contrail_analytics) on this VM. For a centralized deployment, you can use Contrail OpenStack in the Contrail Cloud Reference Architecture (CCRA).
    • Resources required: 8 vCPUs, 32 GB RAM, 300 GB hard disk storage
    • Ports to open: See Table 3.

    Table 2 shows complete details about the VMs for a demo environment. This configuration requires one Contrail Service Orchestration node or server; you configure all VMs on that node.

    Table 2: Details of VMs for a Demo Environment

    csp-central-infravm

    • Contrail Service Orchestration node or server: cso-central-host
    • Components that the installer places in the VM: Kibana, Logstash, Elasticsearch, Cassandra, Zookeeper, MariaDB, Keystone, RabbitMQ, Redis, DMS
    • Resources required: 4 vCPUs, 16 GB RAM, 200 GB hard disk storage
    • Ports to open: See Table 3.

    csp-central-msvm

    • Contrail Service Orchestration node or server: cso-central-host
    • Components: all microservices, including GUI applications, plus HAProxy Configuration, ETCD, Kubemaster, Kubeminion, and the SIM client
    • Resources required: 4 vCPUs, 16 GB RAM (minimum requirement), 200 GB hard disk storage
    • Ports to open: See Table 3.

    csp-regional-infravm

    • Contrail Service Orchestration node or server: cso-central-host
    • Components that the installer places in the VM: Cassandra, Elasticsearch, Zookeeper, RabbitMQ, Kibana, Logstash, Redis
    • Resources required: 4 vCPUs, 16 GB RAM, 200 GB hard disk storage
    • Ports to open: See Table 3.

    csp-regional-msvm

    • Contrail Service Orchestration node or server: cso-central-host
    • Components: all microservices, including GUI applications, plus HAProxy Configuration, ETCD, Kubemaster, Kubeminion, and the SIM client
    • Resources required: 4 vCPUs, 16 GB RAM, 200 GB hard disk storage
    • Ports to open: See Table 3.

    csp-space-vm

    • Contrail Service Orchestration node or server: cso-central-host
    • Components that the installer places in the VM: Junos Space Virtual Appliance and database
    • Resources required: 4 vCPUs, 16 GB RAM, 200 GB hard disk storage
    • Ports to open: See Table 3.

    csp-contrail-analytics-vm

    • Contrail Service Orchestration node or server: cso-central-host
    • Components: For a distributed deployment, administrators install Contrail Analytics (contrail_analytics) on this VM. For a centralized deployment, you can use Contrail OpenStack in the Contrail Cloud Reference Architecture (CCRA).
    • Resources required: 4 vCPUs, 16 GB RAM, 200 GB hard disk storage
    • Ports to open: See Table 3.

    Table 3 lists the ports that you need to open for each VM.

    Table 3: Ports to Open on VMs in the Cloud CPE Solution

    80, 81, 82, 83, 2181, 2379, 2380, 2888, 3306, 3888, 5000, 5044, 5601, 5664, 5665, 5671, 5672, 6379, 8082, 8085, 8086, 9090, 9160, 9200, 10000, 15100, 15672, 30000 through 32767, and 35357
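If you script the firewall setup, a helper like the following can generate one rule per port in Table 3. This is a sketch, not part of the product: it assumes an iptables-based firewall and TCP, and it only prints the commands so that you can review them before applying anything.

```shell
# Hypothetical helper: print one iptables rule for each port in Table 3.
# Review the output before applying it; adjust the chain and protocol
# to match your firewall policy.
print_port_rules() {
  for port in 80 81 82 83 2181 2379 2380 2888 3306 3888 \
              5000 5044 5601 5664 5665 5671 5672 6379 \
              8082 8085 8086 9090 9160 9200 10000 15100 15672 35357; do
    echo "iptables -A INPUT -p tcp --dport ${port} -j ACCEPT"
  done
  # The 30000 through 32767 range (commonly Kubernetes NodePorts) as one rule
  echo "iptables -A INPUT -p tcp --dport 30000:32767 -j ACCEPT"
}

print_port_rules    # inspect first; apply with: print_port_rules | sudo sh
```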

    The following sections describe the procedures for provisioning the VMs:

    Before You Begin

    Before you begin, you must:

    • Configure the servers and nodes in Contrail Cloud Reference Architecture (CCRA) for a centralized deployment.
    • Download third-party software and deploy the installer.

    Creating a Bridge Interface to Support VMs

    If you use the KVM hypervisor, before you create VMs, you must create a bridge interface on the physical server that maps the primary network interface (Ethernet management interface) on each Contrail Service Orchestration node or server to a virtual interface.

    To create the bridge interface:

    1. On the Contrail Service Orchestration node or server, log in as root.
    2. Update the index files of the software packages installed on the server to reference the latest versions.
      root@host:~/# apt-get update
                              
    3. View the network interfaces configured on the server to obtain the name of the primary interface on the server.
      root@host:~/# ifconfig
                              
    4. Install the libvirt software.
      root@host:~/# apt-get install libvirt-bin
                              
    5. View the list of network interfaces, which now includes the virtual interface virbr0.
      root@host:~/# ifconfig
                              

    6. Modify the file /etc/network/interfaces to map the primary network interface to the virtual interface virbr0.

      For example, use the following configuration to map the primary interface eth0 to the virtual interface virbr0:

      # This file describes the network interfaces available on your system
      # and how to activate them. For more information, see interfaces(5).
      # The loopback network interface
      auto lo
      iface lo inet loopback
      
      # The primary network interface
      auto eth0
      iface eth0 inet manual
      up ifconfig eth0 0.0.0.0 up
      
      auto virbr0
      iface virbr0 inet static
               bridge_ports eth0
               address 192.168.1.2
               netmask 255.255.255.0
               network 192.168.1.0
               broadcast 192.168.1.255
               gateway 192.168.1.1
               dns-nameservers 8.8.8.8
               dns-search example.net
      
    7. Modify the default virtual network by customizing the file default.xml:

      1. Customize the IP address and subnet mask to match the values for the virbr0 interface in the file /etc/network/interfaces
      2. Turn off the Spanning Tree Protocol (STP) option.
      3. Remove the NAT and DHCP configurations.

      For example:

      root@host:~/# virsh net-edit default
      <network>
           <name>default</name>
           <uuid>0f04ffd0-a27c-4120-8873-854bbfb02074</uuid>
           <bridge name='virbr0' stp='off' delay='0'/>
           <ip address='192.168.1.2' netmask='255.255.255.0'>
           </ip>
       </network>
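If you prepare this step in advance, a small helper like the following (hypothetical; the function name and arguments are illustrative) prints a minimal network definition with STP off and the NAT and DHCP elements removed, which you can paste into virsh net-edit default. libvirt generates a new UUID if the uuid element is omitted.

```shell
# Hypothetical helper: print a minimal libvirt default-network definition
# with STP disabled and no NAT or DHCP configuration. The IP address and
# netmask are passed as arguments so they match /etc/network/interfaces.
make_default_net_xml() {
  cat <<EOF
<network>
  <name>default</name>
  <bridge name='virbr0' stp='off' delay='0'/>
  <ip address='$1' netmask='$2'>
  </ip>
</network>
EOF
}

make_default_net_xml 192.168.1.2 255.255.255.0
```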
      

    8. Reboot the node and log in as root again.
    9. Verify that the primary network interface is mapped to the virbr0 interface.
      root@host:~/# brctl show

    Customizing the Configuration File for the Provisioning Tool

    The provisioning tool uses a configuration file, which you must customize for your network. Example configuration files are located in the confs/provisionvm directory of the installer directory. The configuration file consists of bracketed sections of key-and-value pairs, as shown in the sample files later in this topic.

    To customize the configuration file:

    1. Log in as root to the host on which you deployed the installer.
    2. Access the directory that contains the example configuration files. For example, if the name of the installer directory is cspVersion:
      root@host:~/# cd cspVersion/confs/provisionvm/
                              

      The following example configuration files are available in this directory:

      • provision_vm_prod_nonha_env_example.conf—Use this file for a production environment.
      • provision_vm_demo_nonha_env_example.conf—Use this file for a demo environment.
    3. Make a copy of the example configuration file and name it provision_vm.conf.
      root@host:~/cspVersion/confs/provisionvm# cp provision_vm_demo_nonha_env_example.conf provision_vm.conf
                              
    4. Open the file provision_vm.conf with a text editor.
    5. In the [TARGETS] section, specify the following values for the network on which the Cloud CPE solution resides.
      • installer_ip—IP address of the management interface of the host on which you deployed the installer.
      • ntp_servers—Comma-separated list of Network Time Protocol (NTP) servers. For networks within firewalls, specify NTP servers specific to your network.
      • physical—Comma-separated list of hostnames of the Contrail Service Orchestration nodes or servers.
      • virtual—Comma-separated list of names of the virtual machines (VMs) on the Contrail Service Orchestration servers.
    6. Specify the following configuration values for each Contrail Service Orchestration node or server that you specified in Step 5.
      • [hostname]—Hostname of the Contrail Service Orchestration node or server
      • management_address—IP address of the Ethernet management (primary) interface
      • management_interface—Name of the Ethernet management interface, virbr0
      • gateway—IP address of the gateway for the host
      • dns_search—Domain for DNS operations
      • dns_servers—Comma-separated list of DNS name servers, including DNS servers specific to your network
      • hostname—Hostname of the node
      • username—Username for logging in to the node
      • password—Password for logging in to the node
    7. Except for the Junos Space Virtual Appliance and Junos Space database VMs, specify configuration values for each VM that you specified in Step 5.
      • [VM name]—Name of the VM
      • management_address—IP address of the Ethernet management interface
      • hostname—Fully qualified domain name (FQDN) of the VM
      • username—Login name of user who can manage all VMs
      • password—Password for user who can manage all VMs
      • local_user—Login name of user who can manage this VM
      • local_password—Password for user who can manage this VM
      • guest_os—Name of the operating system
      • host_server—Hostname of the Contrail Service Orchestration node or server
      • memory—Required amount of RAM, in MB (for example, 49152 for 48 GB)
      • vCPU—Required number of virtual central processing units (vCPUs)
    8. For the Junos Space Virtual Appliance and Junos Space database VMs, specify configuration values for each VM that you specified in Step 5.
      • [VM name]—Name of the VM.
      • management_address—IP address of the Ethernet management interface.
      • web_address—Virtual IP (VIP) address of the primary Junos Space Virtual Appliance. (Setting only required for the VM on which the primary Junos Space Virtual Appliance resides.)
      • gateway—IP address of the gateway for the host. If you do not specify a value, the value defaults to the gateway defined for the Contrail Service Orchestration node or server that hosts the VM.
      • nameserver_address—IP address of the DNS nameserver.
      • hostname—FQDN of the VM.
      • username—Username for logging in to Junos Space.
      • password—Default password for logging in to Junos Space.
      • newpassword—Password that you provide when you configure the Junos Space appliance.
      • guest_os—Name of the operating system.
      • host_server—Hostname of the Contrail Service Orchestration node or server.
      • memory—Required amount of RAM, in MB (for example, 32768 for 32 GB).
      • vCPU—Required number of virtual central processing units (vCPUs).
      • spacedb—(Only for Junos Space database VMs) true.
    9. In the [MYSQL] section, specify the following configuration settings:
      • remote_user—Username for logging in to the Junos Space database
      • remote_password—Password for logging in to the Junos Space database
    10. Save the file.
    11. Run the following command to provision and start the virtual machines.
      root@host:~/# ./provision_vm.sh
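Before running the tool, you can sanity-check the customized file. The following sketch is a hypothetical pre-flight helper (not part of the product); it assumes the key names used in the sample files later in this topic and confirms that every VM named in the virtual list has a matching section.

```shell
# Hypothetical check for provision_vm.conf: verify that every VM named in
# the "virtual =" list has its own [section] in the file.
check_conf() {
  conf="$1"
  vms=$(sed -n 's/^virtual *= *//p' "$conf" | tr ',' ' ')
  status=0
  for vm in $vms; do
    grep -q "^\[${vm}\]" "$conf" || { echo "missing section: [${vm}]"; status=1; }
  done
  [ "$status" -eq 0 ] && echo "all VM sections present"
  return "$status"
}

# Usage: check_conf provision_vm.conf
```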

    The following examples show customized configuration files for Contrail Service Orchestration installations in production and demo environments.

    Sample Configuration File for Provisioning VMs in a Production Environment

    # This config file is used to provision KVM-based virtual machines using libvirt.
    
    [TARGETS]
    # Mention primary host (installer host) management_ip
    
    installer_ip =
    
    ntp_servers = ntp.juniper.net
    
    # The physical server where the Virtual Machines should be provisioned
    # There can be one or more physical servers on
    # which virtual machines can be provisioned
    physical = cso-central-host, cso-regional-host
    
    # Note: Central and Regional physical servers are used as "csp-central-ms" and "csp-regional-ms" servers.
    
    # The list of virtual servers to be provisioned.
    virtual = csp-central-infravm, csp-regional-infravm, csp-installer-vm, csp-space-vm, csp-contrail-analytics-vm
    
    
    # Physical Server Details
    [cso-central-host]
    management_address = 192.168.1.2/24
    management_interface = virbr0
    gateway = 192.168.1.1
    dns_search = example.net
    dns_servers = 192.168.10.1
    hostname = cso-central-host
    username = root
    password = passw0rd
    
    [cso-regional-host]
    management_address = 192.168.1.3/24
    management_interface = virbr0
    gateway = 192.168.1.1
    dns_search = example.net
    dns_servers = 192.168.10.1
    hostname = cso-regional-host
    username = root
    password = passw0rd
    
    # VM Details
    
    [csp-central-infravm]
    management_address = 192.168.1.4/24
    hostname = centralinfravm.example.net
    username = root
    password = passw0rd
    local_user = infravm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-central-host
    memory = 49152
    vcpu = 8
    
    
    [csp-regional-infravm]
    management_address = 192.168.1.5/24
    hostname = regionalinfravm.example.net
    username = root
    password = passw0rd
    local_user = infravm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-regional-host
    memory = 49152
    vcpu = 8
    
    
    [csp-space-vm]
    management_address = 192.168.1.6/24
    web_address = 192.168.1.7/24
    gateway = 192.168.1.1
    nameserver_address = 192.168.1.254
    hostname = spacevm.example.net
    username = admin
    password = abc123
    newpassword = jnpr123!
    guest_os = space
    host_server = cso-regional-host
    memory = 32768
    vcpu = 4
    
    [csp-installer-vm]
    management_address = 192.168.1.8/24
    hostname = installer.example.net
    username = root
    password = passw0rd
    local_user = infravm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-regional-host
    memory = 32768
    vcpu = 4
    
    [csp-contrail-analytics-vm]
    management_address = 192.168.1.9/24
    hostname = canvm.example.net
    username = root
    password = passw0rd
    local_user = canvm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-regional-host
    memory = 32768
    vcpu = 8
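Note that the memory values in these sample files are in MB: 49152 MB corresponds to the 48 GB listed for csp-central-infravm in Table 1, and 32768 MB to the 32 GB for csp-space-vm. A quick conversion helper (illustrative, not part of the installer):

```shell
# Convert the GB figures from Table 1 and Table 2 to the MB values that
# the memory key in provision_vm.conf expects.
gb_to_mb() { echo $(( $1 * 1024 )); }

gb_to_mb 48   # 49152, the csp-central-infravm value above
gb_to_mb 32   # 32768, the csp-space-vm value above
```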

    Sample Configuration File for Provisioning VMs in a Demo Environment

    # This config file is used to provision KVM-based virtual machines using libvirt.
    
    [TARGETS]
    # Mention primary host (installer host) management_ip
    
    installer_ip =
    
    ntp_servers = ntp.juniper.net
    
    # The physical server where the Virtual Machines should be provisioned
    # There can be one or more physical servers on
    # which virtual machines can be provisioned
    physical = cso-central-host, cso-regional-host
    
    # The list of virtual servers to be provisioned.
    virtual = csp-central-infravm, csp-central-msvm, csp-regional-infravm, csp-regional-msvm, csp-space-vm, csp-installer-vm, csp-contrail-analytics-vm
    
    
    # Physical Server Details
    [cso-central-host]
    management_address = 192.168.1.2/24
    management_interface = virbr0
    gateway = 192.168.1.1
    dns_search = example.net
    dns_servers = 192.168.10.1
    hostname = cso-central-host
    username = root
    password = passw0rd
    
    [cso-regional-host]
    management_address = 192.168.1.3/24
    management_interface = virbr0
    gateway = 192.168.1.1
    dns_search = example.net
    dns_servers = 192.168.10.1
    hostname = cso-regional-host
    username = root
    password = passw0rd
    
    # VM Details
    
    [csp-central-infravm]
    management_address = 192.168.1.4/24
    hostname = centralinfravm.example.net
    username = root
    password = passw0rd
    local_user = infravm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-central-host
    memory = 16384
    vcpu = 4
    
    [csp-central-msvm]
    management_address = 192.168.1.5/24
    hostname = centralmsvm.example.net
    username = root
    password = passw0rd
    local_user = msvm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-central-host
    memory = 16384
    vcpu = 4
    
    [csp-regional-infravm]
    management_address = 192.168.1.6/24
    hostname = regionalinfravm.example.net
    username = root
    password = passw0rd
    local_user = infravm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-regional-host
    memory = 16384
    vcpu = 4
    
    [csp-regional-msvm]
    management_address = 192.168.1.7/24
    hostname = regionalmsvm.example.net
    username = root
    password = passw0rd
    local_user = msvm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-regional-host
    memory = 16384
    vcpu = 4
    
    
    [csp-space-vm]
    management_address = 192.168.1.8/24
    web_address = 192.168.1.9/24
    gateway = 192.168.1.1
    nameserver_address = 192.168.1.254
    hostname = spacevm.example.net
    username = admin
    password = abc123
    newpassword = jnpr123!
    guest_os = space
    host_server = cso-regional-host
    memory = 16384
    vcpu = 4
    
    [csp-installer-vm]
    management_address = 192.168.1.10/24
    hostname = installervm.example.net
    username = root
    password = passw0rd
    local_user = installervm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-regional-host
    memory = 16384
    vcpu = 4
    
    [csp-contrail-analytics-vm]
    management_address = 192.168.1.11/24
    hostname = canvm.example.net
    username = root
    password = passw0rd
    local_user = canvm
    local_password = passw0rd
    guest_os = ubuntu
    host_server = cso-regional-host
    memory = 16384
    vcpu = 4

    Provisioning VMs with the Provisioning Tool

    If you use the KVM hypervisor on the Contrail Service Orchestration node or server, you can use the provisioning tool to:

    • Create and configure the VMs for the Contrail Service Orchestration and Junos Space components.
    • Install the operating system in the VMs:
      • Ubuntu in the Contrail Service Orchestration VMs
      • Junos Space Network Management Platform software in the Junos Space VMs

    Note: If you use another supported hypervisor or already created VMs that you want to use, provision the VMs manually.

    To provision VMs with the provisioning tool:

    1. Log in as root to the host on which you deployed the installer.
    2. Access the directory for the installer. For example, if the name of the installer directory is cspVersion:
      root@host:~/# cd ~/cspVersion/
                              
    3. Run the provisioning tool.
      root@host:~/cspVersion/# ./provision_vm.sh
                              

      The provisioning begins.

    4. During installation, observe detailed messages in the log files about the provisioning of the VMs.
      • provision_vm.log—Contains details about the provisioning process
      • provision_vm_console.log—Contains details about the VMs
      • provision_vm_error.log—Contains details about errors that occur during provisioning

      For example:

      root@host:~/cspVersion/# cd logs
      root@host:~/cspVersion/logs/# tailf provision_vm.log

    Manually Provisioning VMs on the Contrail Service Orchestration Node or Server

    To manually provision VMs on each Contrail Service Orchestration node or server:

    1. On each Contrail Service Orchestration node or server, create VMs or reconfigure existing VMs:
      • If you use a production environment, create 6 VMs with the resources listed in Table 1.
      • If you use a demo environment, create 6 VMs with the resources listed in Table 2.
    2. Configure hostnames and specify IP addresses for the Ethernet management interfaces on each VM.
    3. Configure read, write, and execute permissions for the users of the VMs, so that the installer can access the VMs when you deploy the Cloud CPE solution.
    4. Configure DNS and Internet access for the VMs.
    5. If MySQL software is installed in the VMs for Service and Infrastructure Monitor, remove it.

      When you install the Cloud CPE solution, the installer deploys and configures MySQL servers in these VMs. If a VM already contains MySQL software, the installer might not set up the VM correctly.
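One way to check for leftover MySQL packages is the following sketch. It assumes an Ubuntu VM with dpkg; the filter reads "dpkg -l" output from standard input, so the check itself is portable.

```shell
# Hypothetical filter: reads "dpkg -l" output on stdin and reports whether
# any MySQL server package is installed in the VM.
check_mysql() {
  awk '$1 == "ii" && $2 ~ /^mysql-server/ { found = 1 }
       END { if (found) print "MySQL found: remove it before installing";
             else print "no MySQL server packages installed" }'
}

# Usage on the VM: dpkg -l | check_mysql
```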

    Verifying Connectivity of the VMs

    From each VM, verify that you can ping the IP addresses and hostnames of all the other servers, nodes, and VMs in the Cloud CPE solution.

    Caution: If the VMs cannot communicate with all the other hosts in the deployment, the installation can fail.
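The check above can be scripted; the following is a sketch, and the host list is illustrative only. Substitute the IP addresses and hostnames of the servers, nodes, and VMs in your own deployment.

```shell
# Hypothetical connectivity check: ping each host once and report OK/FAIL.
check_host() {
  if ping -c 1 -W 2 "$1" >/dev/null 2>&1; then
    echo "OK   $1"
  else
    echo "FAIL $1"
  fi
}

# Example host list; replace with your deployment's hosts.
for h in cso-central-host cso-regional-host centralinfravm.example.net \
         regionalinfravm.example.net spacevm.example.net canvm.example.net; do
  check_host "$h"
done
```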

    Modified: 2016-10-12