Installing and Configuring the Cloud CPE Solution
You use the same installation process for both Contrail Service Orchestration and Network Service Controller and for both KVM and ESXi environments.
Before You Begin
Before you begin, complete the following tasks and gather the required information:
- Provision the virtual machines (VMs) for the Contrail Service Orchestration node or server (see Provisioning VMs on Contrail Service Orchestration Nodes or Servers).
- Copy the uncompressed installer package to the installer VM.
- Determine the following information:
- What type of software you want to install: Contrail Service
Orchestration (CSO) or Network Service Controller (NSC)
- Specify CSO if you purchased licenses for a centralized
deployment or both Network Service Orchestrator and Network Service
Controller licenses for a distributed deployment.
This option includes all the Contrail Service Orchestration graphical user interfaces (GUIs).
- Specify NSC if you purchased only Network Service Controller
licenses for a distributed deployment.
This option includes Administration Portal and Service and Infrastructure Monitor, but not the Designer Tools and Customer Portal.
- Whether you use a demo, trial, or production environment.
- Whether you use infrastructure services HA (only available for a production environment).
- The names of each region if you use more than one region.
The default specifies the central region and one other region, called regional.
- The IP address of the VM that hosts the installer.
- The timezone for the servers in the deployment, based
on the Ubuntu timezone guidelines.
The default value for this setting is the current timezone of the installer host.
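For example, to check the current timezone on an Ubuntu host, you can display the standard timezone file:
root@host:~/# cat /etc/timezone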
- The fully qualified domain name (FQDN) of each Network Time Protocol (NTP) server that the solution uses.
For networks within firewalls, use NTP servers specific to your network.
For example: ntp.example.net
- Whether you use HTTPS or HTTP to access microservices and GUIs in the deployment.
- A common password for all infrastructure services except the MariaDB administration and cluster. The default password is passw0rd.
- Whether you use the CSO Keystone or an external Keystone
for authentication of CSO operations.
- A CSO Keystone is installed with CSO and resides on the
central CSO server.
This default option is recommended for all deployments, and is required for a distributed deployment unless you provide your own external Keystone. Use of a CSO Keystone offers enhanced security because the Keystone is dedicated to CSO and is not shared with any other applications.
- An external Keystone, which resides on a different server from the CSO server. You specify the IP address and access details for
the Keystone during the installation.
- The Contrail OpenStack Keystone in the Contrail Cloud
Platform for a centralized deployment is an example of an external
Keystone.
In this case, customers and Cloud CPE infrastructure components use the same Keystone token.
- You can also use your own external Keystone that is not part of the CSO or Contrail OpenStack installation.
- Whether you use an external Contrail Analytics node for a centralized deployment. A distributed deployment always uses an external Contrail Analytics node.
- The CIDR address of the subnet on which the Contrail Service Orchestration VMs reside.
- If you use NAT with your Contrail Service Orchestration installation, the public IP addresses used for NAT for the central and regional regions.
- The following information for each server and VM in the
deployment:
- Management IP address in CIDR notation
For example: 192.0.2.1/24
- FQDN of each host
For example: central-infravm.example.net
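You can confirm the FQDN configured on a host with the hostname utility, for example:
root@host:~/# hostname -f
central-infravm.example.net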
- Password for the root user of each host
- The Kubernetes overlay network address in Classless Interdomain Routing (CIDR) notation.
The default value is 172.16.0.0/16. If this value is close to your network range, use a similar address with a /16 subnet.
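To check whether the default range overlaps with subnets already in use on the installer host, you can review its routing table, for example:
root@host:~/# ip route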
- The range of the Kubernetes Service overlay network addresses,
in CIDR notation.
The microservices use this overlay network. The default value is 192.168.3.0/24.
- The IP address of the Kubernetes API server, which is in
the service overlay network.
This IP address must be in the range you specify for the Kubernetes Service overlay network. The default value is 192.168.3.1.
- The IP address of the Kubernetes Cluster Domain Name System (DNS)
This IP address must be in the range you specify for the Kubernetes Service overlay network. The default value is 192.168.3.1.
- The password for OpenStack Keystone.
- The service token Universal Unique Identifier (UUID) for OpenStack Keystone.
- The FQDN that the load balancer uses to access the installation.
- For a non-HA deployment, the IP address and the FQDN of the VM that hosts the HAproxy.
- For an HA deployment, the virtual IP address that you configure for the HAproxy.
- The number of copies of each microservice:
- For a non-HA deployment—1
- For an HA deployment—3
Creating the Configuration Files
You use an interactive script to create configuration files for the environment topology. The installer uses these configuration files to customize the topology when you deploy the solution.
To run the installation tool:
- Log in as root to the host on which you deployed the installer.
- Access the directory for the installer. For example, if the name of the installer directory is cspVersion:
root@host:~/# cd ~/cspVersion/
- Run the setup tool:
root@host:~/cspVersion/# ./setup_assist.sh
The script starts, sets up the installer, and requests that you enter information about the installation.
- Specify the solution that you want to install:
- cso—Use this option unless you purchased only NSC licenses.
- nsc—Use this option if you purchased only NSC licenses.
- Specify the deployment environment:
- demo—Demonstration environment
- production—Production environment
- Specify whether the deployment uses high availability
(HA).
- y—deployment uses HA
- n—deployment does not use HA
- Press Enter if you use only one region, or specify a comma-separated list of regions if you use multiple regions.
- Specify the management IP address of the server or VM that hosts the installer file.
- Accept the default timezone or specify the Ubuntu timezone for the servers in the topology.
- Specify a comma-separated list of FQDN names of NTP servers.
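For example, with two hypothetical NTP servers:
ntp1.example.net,ntp2.example.net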
- Specify whether the connection to the microservices and
GUIs should use HTTPS or HTTP.
- y—Use HTTPS connections for microservices and GUIs. Infrastructure services always use HTTP connections.
- n—Use HTTP for microservices, GUIs, and infrastructure services
- Specify a common password for all infrastructure services or accept the default, passw0rd.
- Specify whether you use an external Keystone to authenticate
Contrail Service Orchestration operations.
- n—Specifies use of the CSO Keystone which is installed
with and dedicated to CSO.
This default option is recommended for all deployments, and is required for a distributed deployment unless you provide your own external Keystone. Use of a CSO Keystone offers enhanced security because the Keystone is dedicated to CSO and is not shared with any other applications.
- y—Specifies use of an external OpenStack Keystone,
such as your own external Keystone or the Contrail OpenStack Keystone
in the Contrail Cloud Platform for a centralized deployment.
If you use the Contrail OpenStack Keystone for a centralized deployment, customers and Cloud CPE infrastructure components use the same Keystone token.
- If you use an external Keystone, specify the Keystone IP address and the OpenStack Keystone service token.
- Specify whether you use an external Contrail Analytics
server:
- For a distributed or combined deployment, you use an external Contrail Analytics server.
- For a centralized deployment, you usually use Contrail Analytics on the Contrail Controller in the Contrail Cloud Platform, rather than an external server.
- Specify the subnet in CIDR notation on which the Contrail
Service Orchestration VMs reside.
The script requires this input, but does not use the value for deployments without an NFX250 CPE device.
- Specify whether you use NAT with your Contrail Service Orchestration installation. If you use NAT, provide the public IP addresses of the central region and the regional region that are used in the NAT configuration.
- Review the list of interfaces for Salt communications. Update the list only if you have modified the interfaces on the servers or VMs.
- Starting with the central region, specify the following
information for each server and VM in the deployment.
The script prompts you for each set of information that you must enter.
- Management IP address with CIDR
For example: 192.0.2.1/24
- FQDN of each host
For example: central-infravm.example.net
- Password for the root user
- Configure settings for each region in the deployment:
- Specify the IP address and prefix of the Kubernetes overlay network that the microservices use.
- Specify the fully qualified domain name (FQDN) of the host for the load balancer:
- For a non-HA deployment, the IP address or FQDN of the VM that hosts the HAproxy.
- For an HA deployment, the virtual IP address that you configure for the HAproxy.
- Specify a unique virtual router identifier in the range
0–255 for the HA Proxy VM in each region.
Note: Use a different number for this setting in each region.
- Specify the number of instances of microservices:
- For non-HA installations, specify 1.
- For HA installations, specify 3.
The tool uses the input data to configure each region and indicates when the configuration stage is complete.
- When all regions are configured, the tool displays the deployment commands.
root@host:~/# run "DEPLOYMENT_ENV=central ./deploy_infra_services.sh"
root@host:~/# run "DEPLOYMENT_ENV=regional ./deploy_infra_services.sh"
root@host:~/# run "DEPLOYMENT_ENV=central ./deploy_micro_services.sh"
root@host:~/# run "DEPLOYMENT_ENV=regional ./deploy_micro_services.sh"
Deploying Infrastructure Services
To deploy infrastructure services:
- Log in as root to the host on which you deployed the installer.
- Deploy the central infrastructure services and wait at least two minutes before you execute the next command.
root@host:~/# run "DEPLOYMENT_ENV=central ./deploy_infra_services.sh"
Caution: Wait at least two minutes before executing the next command. Otherwise, the microservices may not be deployed correctly.
- Deploy the regional infrastructure services and wait for
the process to complete.
root@host:~/# run "DEPLOYMENT_ENV=regional ./deploy_infra_services.sh"
Deploying Microservices
To deploy the microservices:
- Log in as root to the host on which you deployed the installer.
- Deploy the central microservices and wait at least two
minutes before you execute the next command.
root@host:~/# -run "DEPLOYMENT_ENV=central ./deploy_micro_services.sh"
Caution: Wait at least two minutes before executing the next command. Otherwise, the microservices may not be deployed correctly.
- Deploy the regional microservices and wait for the process
to complete:
root@host:~/# -run "DEPLOYMENT_ENV=regional ./deploy_micro_services.sh"
- (Optional) Delete existing pods and create new pods.
root@host:~/# run "DEPLOYMENT_ENV=central ./deploy_micro_services.sh --restart_containers"
root@host:~/# run "DEPLOYMENT_ENV=regional ./deploy_micro_services.sh --restart_containers"
- (Optional) Clear the databases.
root@host:~/# run "DEPLOYMENT_ENV=central ./deploy_micro_services.sh --reset_databases"
root@host:~/# run "DEPLOYMENT_ENV=regional ./deploy_micro_services.sh --reset_databases"
- (Optional) Clear the Kubernetes cluster.
root@host:~/# run "DEPLOYMENT_ENV=central ./deploy_micro_services.sh --reset_cluster"
root@host:~/# run "DEPLOYMENT_ENV=regional ./deploy_micro_services.sh --reset_cluster"
Configuring Access to Administration Portal for a Centralized Deployment That Uses Contrail OpenStack for Authentication
The default installation uses a dedicated OpenStack instance to authenticate Contrail Service Orchestration operations. You can also use Contrail OpenStack on the Contrail Cloud Platform to authenticate Contrail Service Orchestration operations in a centralized deployment. If you do so, you must configure access for the default user to Administration Portal.
To configure access to Administration Portal:
- Log in as root to one of the Contrail Controller nodes.
- Source the Keystone environment file, using the path that you configured during the installation. For example:
root@host:~/# source /etc/contrail/keystonerc
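The keystonerc file typically exports OpenStack authentication variables; you can confirm that they are set after sourcing it, for example:
root@host:~/# env | grep OS_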
- Run the following commands:
Note: In the example below, contrailadmin_password represents the actual password that you specified for the setting admin_password in the keystone section of the roles.conf file.
root@host:~/# keystone user-create --name="cspadmin" --pass="contrailadmin_password" --tenant="admin"
root@host:~/# keystone tenant-create --name="default-project" --description="Default Tenant"
root@host:~/# keystone user-role-add --tenant default-project --user admin --role admin
root@host:~/# keystone user-role-add --tenant default-project --user cspadmin --role admin
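To verify the assignments, you can list the roles for the cspadmin user in the default-project tenant using the same legacy keystone CLI, for example:
root@host:~/# keystone user-role-list --tenant default-project --user cspadmin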
Checking the Status of the Microservices
To check the status of the microservices:
- Log in as root into the VM or server that hosts the central microservices.
- Run the following command.
root@host:~/# kubectl get pods | grep -v Running
If the result is an empty display, as shown below, the microservices are running and you can proceed to the next section.
root@host:~/# kubectl get pods | grep -v Running
NAME READY STATUS RESTARTS AGE
If the display contains an item with the status CrashLoopBackOff or Terminating, a microservice is not running.
- Delete and restart the pod.
root@host:~/# kubectl get pods
NAME                            READY     STATUS             RESTARTS   AGE
csp.ams-3909406435-4yb0l        1/1       CrashLoopBackOff   0          8m
csp.nso-core-3445362165-s55x8   0/1       Running            0          8m
In each pod name, the first part (for example, csp.nso-core) identifies the microservice, and the full name identifies its pod.
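If you are unsure which label value to use in the delete command that follows, you can display the labels assigned to each pod, for example:
root@host:~/# kubectl get pods --show-labels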
root@host:~/# kubectl delete pods -l microservice=csp.nso-core
- Wait a couple of minutes, then
check the status of the microservice and its pod.
root@host:~/# kubectl get pods
NAME                              READY     STATUS    RESTARTS   AGE
csp.ams-4890899323-3dfd02         1/1       Running   0          1m
csp.nso-core-09009278633-fr234f   0/1       Running   0          1m
- Repeat Steps 1 through 4 for the regional microservices.
Loading Data
After you check that the microservices are running, you must load data to import plug-ins and data design tools.
To load data:
- Ensure that all the microservices are up and running on the central and each regional microservices host.
- (Optional) Specify the value of the regional subnet in the file /micro_services/data/inputs.yaml on the regional microservices host.
By default, the subnet address is the management address of the regional microservices host that you specify in the topology.conf file.
- Log in to the installer VM as root.
- Execute the ./load_services_data.sh command.
root@host:~/# ./load_services_data.sh
Note: You must not execute the load_services_data.sh script more than once after a new deployment.
Creating Firewall Rules
To open the required ports and set the required rules for Contrail Service Orchestration and Network Service Controller on the firewall for a host:
- Log in to the installer VM as root.
- Run the firewall rules script.
root@host:~/# ./create_firewall_rules.sh
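To review the rules that the script created, you can list the current iptables configuration on the host, for example:
root@host:~/# iptables -L -n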