    Installing and Configuring Contrail Service Orchestration

    You use the same installation process for both Contrail Service Orchestration (CSO) and Network Service Controller and for both KVM and ESXi environments.

    Before You Begin

    Before you begin:

    • Provision the virtual machines (VMs) for the CSO node or server. (See Provisioning VMs on Contrail Service Orchestration Nodes or Servers).

    • Copy the installer package to the installer VM and expand it. (See Setting up the Installation Package and Library Access).

    • If you use an external server rather than the installer VM for the private repository that contains the libraries for the installation, create the repository on the server. (See Setting up the Installation Package and Library Access).

      The installation process uses a private repository so that you do not need Internet access during the installation.

    • Determine the following information:

      • The type of deployment environment: Demo or production

      • Whether you use high availability (HA).

      • The IP address of the VM that hosts the installer.

      • The timezone for the servers in the deployment, based on the Ubuntu timezone guidelines.

        The default value for this setting is the current timezone of the installer host.

      • The fully qualified domain name (FQDN) of each Network Time Protocol (NTP) server that the solution uses. For networks within firewalls, use NTP servers specific to your network.

        For example: ntp.example.net

      • The common password for all infrastructure services except the MariaDB administrator and cluster. The default password is passw0rd.

      • If you want to access Administration Portal with the single sign-on method, the name of the public domain in which the CSO servers reside. Alternatively, if you want to access Administration Portal with local authentication, you need a dummy domain name.

      • For a distributed deployment, whether you use transport layer security (TLS) to encrypt data that passes between the CPE device and CSO.

        You should use TLS unless you have an explicit reason for not encrypting data between the CPE device and CSO.

      • Whether you use the CSO Keystone or an external Keystone for authentication of CSO operations.

        • A CSO Keystone is installed with CSO and resides on the central CSO server.

          This default option is recommended for all deployments, and is required for a distributed deployment. Use of a CSO Keystone offers enhanced security because the Keystone is dedicated to CSO and is not shared with any other applications.

        • An external Keystone resides on a different server from the CSO server and is not installed with CSO.

          You specify the IP address and access details for the Keystone during the installation.

          • The Contrail OpenStack Keystone in the Contrail Cloud Platform for a centralized deployment is an example of an external Keystone.

            In this case, customers and Contrail Service Orchestration infrastructure components use the same Keystone token.

          • You can also use your own external Keystone that is not part of the CSO or Contrail OpenStack installation.

      • If you use an external Keystone, the username and service token.

      • The IP address of the Contrail controller node for a centralized deployment. You specify this external server for Contrail Analytics.

      • Whether you use a common password for all VMs or a different password for each VM, and the value of each password.

      • The CIDR address of the subnet on which the CSO VMs reside.

      • If you use NAT with your CSO installation, the public IP addresses used for NAT for the central and regional regions.

      • The primary interface for all VMs.

        The default is eth0.

      • The following information for each server and VM in the deployment:

        • Management IP address in CIDR notation

          For example: 192.0.2.1/24

        • FQDN of each host

          For example: central-infravm.example.net

        • Password for the root user

          If you use the same password for all the VMs, you can enter the password once. Otherwise, you must provide the password for each VM.

      • For the microservices in the central and each regional region:

        • The address of the Kubernetes overlay network, in Classless Interdomain Routing (CIDR) notation.

          The default value is 172.16.0.0/16. If this range conflicts with your network, use a similar address with a /16 subnet.

        • The range of the Kubernetes service overlay network addresses, in CIDR notation.

          The default value is 192.168.3.0/24.

        • The IP address of the Kubernetes service API server, which is on the service overlay network.

          This IP address must be in the range you specify for the Kubernetes service overlay network. The default value is 192.168.3.1.

        • The IP address of the Kubernetes Cluster Domain Name System (DNS) server.

          This IP address must also be in the range you specify for the Kubernetes service overlay network. The default value is 192.168.3.1. (A sketch after this list shows one way to verify that these addresses fall within the service overlay network.)

        • The tunnel interface unit range that CSO uses for an SD-WAN implementation with an MX Series hub device.

          You must choose values that are different from those that you configured for the MX Series router. The possible range of values is 0–16385, and the default range is 4000–6000.

        • The FQDN that the load balancer uses to access the installation.

          • For a non-HA deployment, the IP address and the FQDN of the VM that hosts the HAProxy.

          • For an HA deployment, the virtual IP address and the associated hostname that you configure for the HAProxy.

        • The required number of copies of each microservice.

          • For a deployment without HA—1

          • For a demo deployment with HA—2

          • For a production deployment with HA—3
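
    If you want to sanity-check the Kubernetes service addresses before you run the setup tool, the following is a minimal sketch; it is not part of the CSO installer, the function names are illustrative, and the values shown are the documented defaults, so substitute your own planned addresses.

      #!/bin/bash
      # Illustrative check (not part of CSO): verify that the Kubernetes
      # service API server and cluster DNS addresses fall inside the
      # service overlay subnet.
      ip_to_int() {
        local IFS=. a b c d
        read -r a b c d <<< "$1"
        echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
      }
      in_subnet() {
        local ip=$1 net=${2%/*} bits=${2#*/}
        local mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
        (( ($(ip_to_int "$ip") & mask) == ($(ip_to_int "$net") & mask) ))
      }
      SERVICE_OVERLAY="192.168.3.0/24"   # Kubernetes service overlay network
      API_SERVER="192.168.3.1"           # Kubernetes service API server
      CLUSTER_DNS="192.168.3.1"          # Kubernetes cluster DNS server
      for addr in "$API_SERVER" "$CLUSTER_DNS"; do
        if in_subnet "$addr" "$SERVICE_OVERLAY"; then
          echo "$addr is inside $SERVICE_OVERLAY"
        else
          echo "WARNING: $addr is outside $SERVICE_OVERLAY" >&2
        fi
      done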

    Creating the Configuration Files

    You use an interactive script to create configuration files for the environment topology. The installer uses these configuration files to customize the topology when you deploy the solution.

    To run the setup tool:

    1. Log in as root to the host on which you deployed the installer.
    2. Access the directory for the installer. For example, if the name of the installer directory is csoVersion:
      root@host:~/# cd ~/csoVersion/
    3. Run the setup tool:
      root@host:~/csoVersion/# ./setup_assist.sh

      The script starts, sets up the installer, and requests that you enter information about the installation.

    4. Specify whether you use an external private repository and, if so, specify the IP address of the repository.
    5. Specify the deployment environment:
      • demo—Demonstration environment

      • production—Production environment

    6. Specify whether the deployment uses high availability (HA).
      • y—Deployment uses HA

      • n—Deployment does not use HA

    7. Specify the management IP address of the VM that hosts the installer.
    8. Accept the default timezone or specify the Ubuntu timezone for the servers in the topology.
    9. Specify a comma-separated list of FQDNs of the NTP servers.

      For example: ntp.example.net

    10. Specify a common password for all infrastructure services or accept the default, passw0rd.
    11. Specify a domain name to determine how you access Administration Portal, the main CSO GUI:
      • If you want to access Administration Portal with the single sign-on method, specify the name of the public domain in which the CSO servers reside.

        For example: organization.com, where organization is the name of your organization.

      • If you want to use local authentication for Administration Portal, specify a dummy domain name.

        For example: example.net

    12. For a distributed deployment, specify whether you use TLS to enable secure communication between the CPE device and CSO.

      Accept the default unless you have an explicit reason for not using encryption for communications between the CPE device and CSO.

      • n—Specifies that TLS is not used.

      • y—Specifies use of TLS. This is the default setting.

    13. Specify whether you use an external Keystone to authenticate CSO operations and, if so, specify the OpenStack Keystone service token.
      • n—Specifies use of the CSO Keystone which is installed with and dedicated to CSO. This default option is recommended unless you have a specific requirement for an external Keystone.

      • y—Specifies use of an external OpenStack Keystone, such as a Keystone specific to your network. Specify the IP address and access details for the Keystone.

    14. Specify whether you use an external Contrail Analytics server:
      • y—Specifies use of Contrail Analytics in Contrail OpenStack for a centralized or combined deployment.

        You must provide the IP address of the Contrail controller node.

      • n—Specifies use of the Contrail Analytics VM for a distributed deployment.

    15. Specify whether you use a common password for all CSO VMs and, if so, specify the password.
    16. Specify the subnet in CIDR notation on which the CSO VMs reside.

      The script requires this input, but uses the value only for distributed deployments and not for centralized deployments.

    17. Specify whether CSO is behind Network Address Translation (NAT).
      • y—CSO is behind NAT

      • n—CSO is not behind NAT (default)

    18. Accept or specify the primary interface for all VMs.

      The default is eth0. Accept this value unless you have explicitly changed the primary interface on your hosts or VMs.

    19. Starting with the central region, specify the following information for each server and VM in the deployment.

      The script prompts you for each set of information that you must enter.

      • Management IP address in CIDR notation

        For example: 192.0.2.1/24

      • Password for the root user (required only if you use a different password for each VM)

      • The address of the Kubernetes overlay network, in CIDR notation, that the microservices use.

        The default value is 172.16.0.0/16. If this range conflicts with your network, use a similar address with a /16 subnet.

      • The range of the Kubernetes service overlay network addresses, in CIDR notation.

        The default value is 192.168.3.0/24. It is unlikely that there will be a conflict between this default and your network, so you can usually accept the default. If, however, there is a conflict with your network, use a similar address with a /24 subnet.

      • The IP address of the Kubernetes service API server, which is on the service overlay network.

        This IP address must be in the range you specify for the Kubernetes service overlay network. The default value is 192.168.3.1.

      • The IP address of the Kubernetes Cluster DNS server.

        This IP address must be in the range you specify for the Kubernetes service overlay network. The default value is 192.168.3.1.

      • The range of tunnel interface units that CSO uses for an SD-WAN implementation with an MX Series hub device.

        The default setting is 4000–6000. Specify values in the range 0–16385 that are different from those that you configured on the MX Series router.

      • The IP address and FQDN of the host for the load balancer:

        • For non-HA deployments, the IP address and FQDN of the VM that hosts the HAProxy.

        • For HA deployments, the virtual IP address and associated FQDN that you configure for the HAProxy.

      • The number of instances of microservices:

        • For deployments without HA, specify 1.

        • For a demo deployment with HA, specify 2.

        • For a production deployment with HA, specify 3.

      The tool uses the input data to configure each region and indicates when the configuration stage is complete.

    20. When all regions are configured, the tool displays the deployment commands.
      root@host:~/# run "DEPLOYMENT_ENV=central ./deploy_infra_services.sh"
      root@host:~/# run "DEPLOYMENT_ENV=regional ./deploy_infra_services.sh"
      root@host:~/# run "DEPLOYMENT_ENV=central ./deploy_micro_services.sh"
      root@host:~/# run "DEPLOYMENT_ENV=regional ./deploy_micro_services.sh"

    Deploying Infrastructure Services

    To deploy infrastructure services:

    1. Log in as root to the host on which you deployed the installer.
    2. Deploy the central infrastructure services and wait at least ten minutes before you execute the next command.
      root@host:~/# run "DEPLOYMENT_ENV=central ./deploy_infra_services.sh"

      Caution: Wait at least ten minutes before executing the next command. Otherwise, the microservices may not be deployed correctly.

    3. Deploy the regional infrastructure services and wait for the process to complete.
      root@host:~/# run "DEPLOYMENT_ENV=regional ./deploy_infra_services.sh"

    Deploying Microservices

    To deploy the microservices:

    1. Log in as root to the host on which you deployed the installer.
    2. Deploy the central microservices and wait at least ten minutes before you execute the next command.
      root@host:~/# run "DEPLOYMENT_ENV=central ./deploy_micro_services.sh"

      Caution: Wait at least ten minutes before executing the next command. Otherwise, the microservices may not be deployed correctly.

    3. Deploy the regional microservices and wait for the process to complete:
      root@host:~/# run "DEPLOYMENT_ENV=regional ./deploy_micro_services.sh"
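
    If you prefer to run the whole deployment unattended, the following wrapper is a minimal sketch that chains the four commands from the two procedures above with the ten-minute waits that the cautions require. It assumes you run it as root from the installer directory and that the run wrapper shown in the setup tool output is available in that environment.

      #!/bin/bash
      # Illustrative wrapper (not part of CSO): run the deployment sequence
      # with the documented ten-minute pauses between the central and
      # regional stages. Run as root from the installer directory.
      set -e
      run "DEPLOYMENT_ENV=central ./deploy_infra_services.sh"
      sleep 600   # wait at least ten minutes, per the caution above
      run "DEPLOYMENT_ENV=regional ./deploy_infra_services.sh"
      run "DEPLOYMENT_ENV=central ./deploy_micro_services.sh"
      sleep 600   # wait at least ten minutes, per the caution above
      run "DEPLOYMENT_ENV=regional ./deploy_micro_services.sh"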

    Checking the Status of the Microservices

    To check the status of the microservices:

    1. Log in as root to the VM or server that hosts the central microservices.
    2. Run the following command:
      root@host:~/# kubectl get pods | grep -v Running

      If the result is an empty display, as shown below, the microservices are running and you can proceed to the next section. (A sketch after this procedure shows how to poll until all pods report Running.)

      root@host:~/# kubectl get pods | grep -v Running
      NAME                               READY   STATUS            RESTARTS AGE
      

      If the display contains an item with the status CrashLoopBackOff or Terminating, a microservice is not running.

    3. Delete and restart the pod.
      root@host:~/# kubectl get pods
       NAME                              READY   STATUS              RESTARTS  AGE
       csp.ams-3909406435-4yb0l          1/1     CrashLoopBackOff    0         8m
       csp.nso-core-3445362165-s55x8     0/1     Running             0         8m
      

      In each entry, the first part of the name identifies the microservice (for example, csp.nso-core) and the full name identifies its pod. Delete the pods for the microservice that is not running so that Kubernetes restarts them:

      root@host:~/# kubectl delete pods -l microservice=csp.nso-core
    4. Wait a couple of minutes, then check the status of the microservice and its pod.
      root@host:~/# kubectl get pods
       NAME                                     READY    STATUS      RESTARTS   AGE
       csp.ams-4890899323-3dfd02                1/1      Running     0          1m
       csp.nso-core-09009278633-fr234f          0/1      Running     0          1m
      
    5. Repeat Steps 1 through 4 for the regional microservices.
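
    Rather than rechecking manually, you can poll with the same command until every pod reports Running. The following is a sketch, not part of the CSO tooling; the 30-second interval is an arbitrary choice.

      #!/bin/bash
      # Illustrative check (not part of CSO): loop until kubectl reports no
      # pods in a state other than Running. Uses only the command shown in
      # Step 2; --no-headers suppresses the NAME/READY/STATUS header line.
      while kubectl get pods --no-headers | grep -v Running; do
        echo "Waiting 30 seconds for pods that are not yet Running..."
        sleep 30
      done
      echo "All pods are Running."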

    Loading Data

    After you check that the microservices are running, you must load data to import plug-ins and data design tools.

    To load data:

    1. Ensure that all the microservices are up and running on the central and each regional microservices host.
    2. Access the home directory of the installer VM.
    3. Execute the ./load_services_data.sh command.
      root@host:~/# ./load_services_data.sh

    Note: You must not execute load_services_data.sh more than once after a new deployment.
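
    Because the script must run only once, a simple guard such as the following sketch can prevent an accidental second run. The marker file path is an assumption, not part of CSO.

      #!/bin/bash
      # Illustrative guard (not part of CSO): record a marker file after the
      # first successful run and refuse to run the script again.
      MARKER=/root/.cso_services_data_loaded   # assumed path
      if [ -e "$MARKER" ]; then
        echo "load_services_data.sh has already been run; not running again." >&2
        exit 1
      fi
      ./load_services_data.sh && touch "$MARKER"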

    Modified: 2018-05-16