
Installing and Configuring Contrail Service Orchestration

You use the same installation process for both Contrail Service Orchestration (CSO) and Network Service Controller and for both KVM and ESXi environments.

Before You Begin

Before you begin:

  • Provision the virtual machines (VMs) for the CSO node or server. (See Provisioning VMs on Contrail Service Orchestration Nodes or Servers).

  • Copy the installer package to the installer VM and expand it. (See Setting Up the Installation Package and Library Access)

  • If you have created an installer VM using the provisioning tool, you must copy the /Contrail_Service_Orchestration_4.0.0/confs/provision_vm.conf file from the Ubuntu VM to the /csoversion/confs/ directory of the installer VM.

  • If you use an external server rather than the installer VM for the private repository that contains the libraries for the installation, create the repository on the server. (See Setting Up the Installation Package and Library Access).

    The installation process uses a private repository so that you do not need Internet access during the installation.

  • Determine the following information:

    • The size of the deployment: small, medium, or large.

    • For a large deployment, two regions are configured by default: central and regional. You can add up to two additional regions with names of your choice (for example, Tokyo or East-Coast).

    • The IP address of the VM that hosts the installer.

    • The time zone for the servers in the deployment, based on the Ubuntu time zone guidelines.

      The default value for this setting is the current time zone of the installer host.

    • The fully qualified domain name (FQDN) of each Network Time Protocol (NTP) server that the solution uses. For networks within firewalls, use NTP servers specific to your network.

      For example: ntp.example.net

    • If you want to access the Administration Portal with the single sign-on method, enter the name of the public domain in which the CSO servers reside. Alternatively, if you want to access the Administration Portal with local authentication, enter a dummy domain name.

    • For a distributed deployment, whether you use transport layer security (TLS) to encrypt data that passes between the CPE device and CSO.

      You must use TLS unless you have an explicit reason for not encrypting data between the CPE device and CSO.

    • Whether you use the CSO Keystone or an external Keystone for authentication of CSO operations.

      • A CSO Keystone is installed with CSO and resides on the central CSO server.

        This default option is recommended for all deployments, and is required for a distributed deployment. Use of a CSO Keystone offers enhanced security because the Keystone is dedicated to CSO and is not shared with any other applications.

      • An external Keystone resides on a server other than the CSO server and is not installed with CSO.

        You specify the IP address and access details for the Keystone during the installation.

        • The Contrail OpenStack Keystone in the Contrail Cloud Platform for a centralized deployment is an example of an external Keystone.

          In this case, customers and Cloud CPE infrastructure components use the same Keystone token.

        • You can also use your own external Keystone that is not part of the CSO or Contrail OpenStack installation.

    • If you use an external Keystone, the username and service token.

    • The IP address of the Contrail controller node for a centralized deployment. For a centralized deployment, you specify this external server for Contrail Analytics.

    • Whether you use a common password for all VMs or a different password for each VM, and the value of each password.

    • The CIDR address of the subnet on which the CSO VMs reside.

    • If you use NAT with your CSO installation, the public IP addresses used for NAT for the central and regional regions.

    • The primary interface for all VMs.

      The default is eth0.

    • The following information for each server and VM in the deployment:

      • Management IP address in CIDR notation

        For example: 192.0.2.1/24

      • FQDN of each host

        For example: central-infravm.example.net

      • Password for the root user

        If you use the same password for all the VMs, you can enter the password once. Otherwise, you must provide the password for each VM.

    • For the microservices in the central and each regional region:

      • The Kubernetes overlay network address, in Classless Interdomain Routing (CIDR) notation.

        The default value is 172.16.0.0/16. If this range overlaps or is close to your network's address range, use a similar address with a /16 subnet. (A quick way to check for overlaps is shown after this list.)

      • The range of the Kubernetes service overlay network addresses, in CIDR notation.

        The default value is 192.168.3.0/24.

      • The IP address of the Kubernetes service API server, which is on the service overlay network.

        This IP address must be in the range you specify for the Kubernetes Service overlay network. The default value is 192.168.3.1.

      • The IP address of the Kubernetes Cluster Domain Name System (DNS) server.

        This IP address must be in the range you specify for the Kubernetes Service overlay network. The default value is 192.168.3.1.

      • The tunnel interface unit range that CSO uses for an SD-WAN implementation with an MX Series hub device.

        You must choose values that are different from those that you configured for the MX Series router. The possible range of values is 0–16,385, and the default range is 4000–6000.

      • The FQDN that the load balancer uses to access the installation.

        • For the small deployment, the IP address and the FQDN of the VM that hosts the HAProxy.

        • For medium or large deployments, the virtual IP address and the associated hostname that you configure for the HAProxy.

      • The replication factor for each microservice is pre-determined based on the size of the deployment.
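
If you are not sure whether the default Kubernetes overlay ranges (172.16.0.0/16 for the overlay network and 192.168.3.0/24 for the service overlay network) overlap your own addressing, one quick check is to look for those prefixes on the installer VM before you run the setup tool described in the next section. The following commands are a minimal sketch that uses only standard Linux utilities; they are not part of the CSO tooling, and you should adjust the prefixes if you plan to override the defaults.

  root@host:~/# ip route | grep -E '172\.16\.|192\.168\.3\.'
  root@host:~/# ip addr | grep 'inet '

If either command shows routes or addresses in those ranges, choose different /16 and /24 subnets when the setup tool prompts you for the Kubernetes overlay networks.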

Creating the Configuration Files

You use an interactive script to create configuration files for the environment topology. The installer uses these configuration files to customize the topology when you deploy the solution.

To run the installation tool:

  1. Log in as root to the host on which you deployed the installer.
  2. Access the directory for the installer. For example, if the name of the installer directory is csoVersion:
    root@host:~/# cd ~/csoVersion/
  3. Run the setup tool:
    root@host:~/csoVersion/# ./setup_assist.sh

    The script starts, sets up the installer, and requests that you enter information about the installation.

  4. Specify the management IP address of the VM that hosts the installer.
  5. Specify the deployment environment:
    • trial—Trial environment

    • production—Production environment

  6. Specify whether CSO is behind Network Address Translation (NAT).
  7. Accept the default time zone or specify the Ubuntu time zone for the servers in the topology.
  8. Specify a comma-separated list of FQDNs of NTP servers.

    For example: ntp.example.net, ntp.example.com

  9. Specify whether the deployment uses high availability (HA).
    • y—Deployment uses HA

    • n—Deployment does not use HA

  10. Specify whether the deployment has multiple regions.
    • y—Deployment uses multiple regions

    • n—Deployment uses a single region

  11. Press Enter if you use only one region, or specify a comma-separated list of regions if you use multiple regions. You can configure a maximum of three regions. The default region is regional.
  12. Specify the CSO certificate validity in days.

    The default value is 365 days.

  13. For a distributed deployment, specify whether you use TLS to enable secure communication between the CPE device and CSO.

    Accept the default unless you have an explicit reason for not using encryption for communications between the CPE device and CSO.

    • n—Specifies that TLS is not used.

    • y—Specifies use of TLS. This is the default setting.

  14. Specify the e-mail address of the admin user.
  15. Specify a domain name to determine how you access the Administration Portal, the main CSO GUI:
    • If you want to access the Administration Portal with the single sign-on method, specify the name of the public domain in which the CSO servers reside.

      For example: organization.com, where organization is the name of your organization.

    • If you want to use local authentication for the Administration Portal, specify a dummy domain name.

      For example: example.net

  16. Specify whether you use an external Keystone to authenticate CSO operations, and if so, specify the OpenStack Keystone service token.
    • n—Specifies use of the CSO Keystone, which is installed with and dedicated to CSO. We recommend this default option unless you have a specific requirement for an external Keystone.

    • y—Specifies use of an external OpenStack Keystone, such as a Keystone specific to your network. Specify the IP address and access details for the Keystone.

  17. Specify whether you use an external Contrail Analytics server:
    • y—Specifies use of Contrail Analytics in Contrail OpenStack for a centralized or combined deployment.

      You must provide the IP address of the Contrail controller node.

    • n—Specifies use of the Contrail Analytics VM for a distributed deployment.

  18. Specify whether you use a common password for all CSO VMs, and if so, specify the password.
  19. Specify the following information for the virtual route reflector (VRR) that you create:
    1. Specify whether the VRR is behind NAT.

      • y—VRR is behind NAT. If you are deploying a VRR in a private network, the NAT instance translates all requests (BGP traffic) to the VRR from a public IP address to a private IP address.

      • n—VRR is not behind NAT (default).

    2. Specify the number of VRR instances that you want to create.

      • For non-HA deployments, you must create at least one VRR.

      • For HA deployments, we recommend that you create an even number of VRRs, and you must create at least two VRRs. Each VRR must be in a different redundancy group. If the primary VRR fails or connectivity is lost, the session remains active because the secondary VRR continues to receive and advertise LAN routes to a site, thereby providing redundancy.

    3. Specify whether you use a common password for all VRRs.

      • y—Specify the common password for all VRRs.

      • n—Specify the password for each VRR.

    4. Specify the public IP address for each VRR that you create. For example, 192.0.20.118/24.

    5. Specify the redundancy group for each VRR that you have created.

      • For non-HA deployments, specify the redundancy group of the VRR as 0.

      • For HA deployments, distribute the VRRs among the redundancy groups. There can be two groups—group 0 and group 1. For example, if you have two VRRs, specify the redundancy group for VRR1 as 0 and for VRR2 as 1.

  20. Starting with the central region, specify the following information for each server in the deployment of each region.

    The script prompts you for each set of information that you must enter.

    • Management IP address with CIDR

      For example: 192.0.2.1/24

    • Password for the root user (only required if you use different passwords for each VM)

    • The Kubernetes overlay network address, in CIDR notation, that the microservices use.

      The default value is 172.16.0.0/16. If this value is close to your network range, use a similar address with a /16 subnet.

    • The range of the Kubernetes service overlay network addresses, in CIDR notation.

      The default value is 192.168.3.0/24. It is unlikely that there will be a conflict between this default and your network, so you can usually accept the default. If, however, there is a conflict with your network, use a similar address with a /24 subnet.

    • The IP address of the Kubernetes service API server, which is on the service overlay network.

      This IP address must be in the range you specify for the Kubernetes Service overlay network. The default value is 192.168.3.1.

    • The IP address of the Kubernetes Cluster DNS server.

      This IP address must be in the range you specify for the Kubernetes Service overlay network. The default value is 192.168.3.1.

    • The range of tunnel interface units that CSO uses for an SD-WAN implementation with an MX Series hub device.

      The default setting is 4000–6000. Specify values in the range 0–16,385 that are different from those that you configured on the MX Series router.

    • The IP address and FQDN of the host for the load balancer:

      • For non-HA deployments, the IP address and FQDN of the VM that hosts the HAProxy.

      • For HA deployments, the virtual IP address and associated FQDN that you configure for the HAProxy.

    • The replication factor for each microservice is pre-determined based on the size of the deployment.

    The tool uses the input data to configure each region and indicates when the configuration stage is complete.

  21. Configure settings for each region in the deployment:
    • Specify the IP address and prefix of the Kubernetes overlay network that the microservices use.

    • Specify the fully qualified domain name of the host for the load balancer:

      • For a non-HA deployment, the IP address or FQDN of the VM that hosts the HAProxy

      • For an HA deployment, the virtual IP address that you configure for the HAProxy.

    • Specify a unique virtual router identifier in the range 0–255 for the HAProxy VM in each region.

      Note

      Use a different number for this setting in each region.

    • Specify the number of instances of microservices:

      • For non-HA installations, specify 1.

      • For HA installations, specify 2.

    The tool uses the input data to configure each region and indicates when the configuration stage is complete.

  22. Specify the subnet in CIDR notation on which the CSO VMs reside.

    The script requires this input, but uses the value only for distributed deployments and not for centralized deployments.

  23. Specify the range of tunnel interface units.
  24. Accept or specify the primary interface for all VMs.

    The default is eth0. Accept this value unless you have explicitly changed the primary interface on your hosts or VMs.

  25. When all regions are configured, the tool displays the deployment commands:
    root@host:~/# DEPLOYMENT_ENV=central ./deploy_infra_services.sh
    root@host:~/# DEPLOYMENT_ENV=regional ./deploy_infra_services.sh
    root@host:~/# DEPLOYMENT_ENV=central ./deploy_micro_services.sh
    root@host:~/# DEPLOYMENT_ENV=regional ./deploy_micro_services.sh
Note

The password for each infrastructure component and the Administration Portal password are displayed on the console after you finish answering the Setup Assistant questions. You must note the passwords displayed on the console because they are not saved in the system. (A simple way to capture the console session is shown below.) To enhance password security, the length and pattern of each password are different, each password is encrypted, and passwords in the log file are masked.
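
Because these generated passwords are displayed only once, you can capture the console session when you run the setup tool. A minimal sketch using the standard Linux script utility follows; the file name setup_session.log is only an example, and this approach is not part of the CSO tooling.

  root@host:~/csoVersion/# script setup_session.log
  root@host:~/csoVersion/# ./setup_assist.sh
  root@host:~/csoVersion/# exit

The exit command stops the recording; the generated passwords, along with the rest of the console output, are then available in setup_session.log. Because the file contains passwords, protect it or delete it after you record the passwords elsewhere.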

Deploying Infrastructure Services

To deploy infrastructure services:

  1. Log in as root to the Installer VM.
  2. Deploy the central infrastructure services.
    root@host:~/# DEPLOYMENT_ENV=central ./deploy_infra_services.sh
    Caution

    Wait at least ten minutes before executing the next command. Otherwise, the microservices might not be deployed correctly.

  3. Deploy the regional infrastructure services and wait for the process to complete.
    root@host:~/# DEPLOYMENT_ENV=regional ./deploy_infra_services.sh

    If you have configured multiple regions, then you can deploy the infrastructure services on the regions in any order after deploying the central infrastructure.

Note

The deploy_infra_services.sh script performs a health check of infrastructure services. If you encounter an error, you must rerun the deploy_infra_services.sh script.
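
If you want additional confirmation that the infrastructure services came up correctly, you can also run the components_health.sh script, which is described in the Performing a Health Check of Infrastructure Components section later in this topic. For example, to check the central environment:

  root@host:~/Contrail_Service_Orchestration_4.0.0# ./components_health.sh central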

Deploying Microservices

To deploy the microservices:

  1. Log in as root to the Installer VM.
  2. Deploy the central microservices.
    root@host:~/# DEPLOYMENT_ENV=central ./deploy_micro_services.sh
    Caution

    Wait at least ten minutes before executing the next command. Otherwise, the microservices might not be deployed correctly.

  3. Deploy the regional microservices and wait for the process to complete:
    root@host:~/# DEPLOYMENT_ENV=regional ./deploy_micro_services.sh

Checking the Status of the Microservices

To check the status of the microservices:

  1. Log in as root into the VM or server that hosts the central microservices.
  2. Run the following command, specifying the required region (central or regional).
    root@host:~/# kubectl get pods -n <region> | grep -v Running

    If the result is an empty display, as shown below, the microservices are running and you can proceed to the next section.

    root@host:~/# kubectl get pods -n <region> | grep -v Running

    If the display contains an item with the status CrashLoopBackOff or Terminating, a microservice is not running.

  3. Delete and restart the pod. (A short sketch that automates this cleanup when several pods are affected follows this procedure.)
    root@host:~/# kubectl get pods

    The first item in the display shows the microservice and the second item shows its pod.

    root@host:~/# kubectl delete pods -l microservice=csp.nso-core -n <region>
  4. Wait a couple of minutes and then check the status of the microservice and its pod.
    root@host:~/# kubectl get pods -n <region>
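
If several pods are stuck in the CrashLoopBackOff or Terminating state, you can combine the commands above into a single cleanup step. The following is a minimal sketch, not part of the CSO tooling; it assumes that kubectl is available on the central microservices host, as in the steps above, and that deleting a pod causes Kubernetes to re-create it from its controller.

  root@host:~/# kubectl get pods -n <region> | awk '/CrashLoopBackOff|Terminating/ {print $1}' | xargs -r kubectl delete pod -n <region>
  root@host:~/# kubectl get pods -n <region> | grep -v Running

Wait a couple of minutes between the two commands; an empty result from the second command indicates that all pods in the region are running again.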

Loading Data

After you check that the microservices are running, you must load data to import plug-ins and data design tools.

To load data:

  1. Ensure that all the microservices are up and running on the central and each regional microservices host.
  2. (Optional) Specify the value of the regional subnet in the /micro_services/data/inputs.yaml file on the installer VM. By default, the subnet address is the management address of the regional microservices host that you specify in the topology.conf file.
  3. Access the home directory of the installer VM.
  4. Execute the ./load_services_data.sh command.
    root@host:~/# ./load_services_data.sh
Note

You must not execute load_services_data.sh more than once after a new deployment.

Performing a Health Check of Infrastructure Components

After you install or upgrade CSO, you can run the components_health.sh script to perform a health check of all infrastructure components. This script detects whether any infrastructure component has failed and displays the health status of the following infrastructure components:

  • Cassandra

  • Elasticsearch

  • Etcd

  • MariaDB

  • RabbitMQ

  • ZooKeeper

  • Redis

  • ArangoDB

  • SimCluster

  • ELK Logstash

  • ELK Kibana

  • Contrail Analytics

  • Keystone

  • Swift

  • Kubernetes

To check the status of infrastructure components:

  1. Log in to the installer VM as root.
  2. Navigate to the CSO directory in the installer VM.

    For example:

    root@host:~/# cd Contrail_Service_Orchestration_4.0.0
  3. Run the components_health.sh script.

    To check the status of infrastructure components of the central environment, run the following command:

    root@host:~/Contrail_Service_Orchestration_4.0.0# ./components_health.sh central

    To check the status of infrastructure components of the regional environment, run the following command:

    root@host:~/Contrail_Service_Orchestration_4.0.0# ./components_health.sh regional

    To check the status of infrastructure components of both the central and regional environments, run the following command:

    root@host:~/Contrail_Service_Orchestration_4.0.0# ./components_health.sh

    After a couple of minutes, the status of each infrastructure component for central and regional environments is displayed.

    For example: