Installing and Configuring the Cloud CPE Solution
Before You Begin
Before you begin:
- Provision the virtual machines (VMs) for the Contrail Service Orchestration node or server (see Provisioning VMs on Contrail Service Orchestration Nodes or Servers).
- Copy the installer package to the installer VM.
- Determine the following information:
- What type of solution you want to install: CSO or NSC
- Specify CSO if you purchased licenses for a centralized deployment, or both Network Service Orchestrator and Network Service Controller licenses for a distributed deployment.
This option includes all the Contrail Service Orchestration graphical user interfaces (GUIs).
- Specify NSC if you purchased only Network Service Controller licenses for a distributed deployment.
This option includes Administration Portal and Service and Infrastructure Monitor, but not Network Service Designer and Customer Portal.
- Whether you use a demo or production environment.
- Whether you use high availability (HA) for infrastructure services (available only for a production environment).
- The name of each region if you use more than one region.
The default specifies the central region and one other region, called regional.
- The IP address of the VM that hosts the installer.
- The timezone for the servers in the deployment, based on the Ubuntu timezone guidelines.
The default value for this setting is the current timezone of the installer host.
- The fully qualified domain name (FQDN) of each Network Time Protocol (NTP) server that the solution uses.
For networks within firewalls, use NTP servers specific to your network.
For example: ntp.example.net
- Whether you use HTTPS or HTTP to access microservices and GUIs in the deployment.
- Whether you use a dedicated (external) OpenStack Keystone or the Contrail OpenStack Keystone for a centralized deployment. A distributed deployment always uses an external OpenStack Keystone.
- Whether you use an external Contrail Analytics node for centralized deployment. A distributed deployment always uses an external Contrail Analytics node.
- The static route to the NFX250 OAM network with CIDR for a distributed deployment that uses an NFX250 device.
- The following information for each server and VM in the deployment:
- Management IP address in CIDR notation
For example: 192.0.2.1/24
- FQDN of each host
For example: central-infravm.example.net
- Password for the root user of each host
- The Kubernetes overlay network address in Classless Interdomain Routing (CIDR) notation.
The default value is 172.16.0.0/16. If this range overlaps addresses already used in your network, specify a different overlay network address.
- The password for OpenStack Keystone.
- The service token Universal Unique Identifier (UUID) for OpenStack Keystone.
- The FQDN that the load balancer uses to access the installation.
- For a non-HA deployment, the IP address or FQDN of the VM that hosts the HAproxy.
- For an HA deployment, the virtual IP address that you configure for the HAproxy.
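The addressing items above (management IP addresses in CIDR notation and the Kubernetes overlay network) can be sanity-checked before you run the installer. This is a minimal sketch using only the Python standard library, not part of the CSO tooling; the addresses are placeholders taken from the examples above.

```python
import ipaddress

# Installer default for the Kubernetes overlay network (from the list above).
KUBERNETES_OVERLAY = ipaddress.ip_network("172.16.0.0/16")

def overlaps_overlay(management_cidr: str) -> bool:
    """Return True if a management network overlaps the Kubernetes overlay."""
    host = ipaddress.ip_interface(management_cidr)  # e.g. "192.0.2.1/24"
    return host.network.overlaps(KUBERNETES_OVERLAY)

print(overlaps_overlay("192.0.2.1/24"))    # False: safe to keep the default
print(overlaps_overlay("172.16.30.5/24"))  # True: choose a different overlay
```

If the check returns True for any management network, pick a different overlay address when the setup script prompts for it.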
Creating the Configuration Files
You use an interactive script to create configuration files for the environment topology. The installer uses these configuration files to customize the topology when you deploy the solution.
To run the installation tool:
- Log in as root to the host on which you deployed the installer.
- Access the directory for the installer. For example, if the name of the installer directory is cspVersion:
root@host:~/# cd ~/cspVersion/
- Run the setup tool:
root@host:~/cspVersion/# ./setup_assist.sh
The script starts, sets up the installer, and requests that you enter information about the installation.
- Specify the solution that you want to install:
- cso—Use this option unless you purchased only NSC licenses.
- nsc—Use this option if you purchased only NSC licenses.
- Specify the deployment environment:
- demo—Demonstration environment
- production—Production environment
- Specify whether the deployment uses high availability (HA).
- y—Deployment uses HA
- n—Deployment does not use HA
- Press Enter if you use only one region, or specify a comma-separated list of regions if you use multiple regions.
- Specify the management IP address of the server or VM that hosts the installer file.
- Accept the default timezone or specify the Ubuntu timezone for the servers in the topology.
- Specify a comma-separated list of FQDNs of the NTP servers.
- Specify whether the connection to the microservices and GUIs should use HTTPS or HTTP.
- y—Use HTTPS connections for microservices and GUIs. Infrastructure services always use HTTP connections.
- n—Use HTTP connections for microservices, GUIs, and infrastructure services.
- Specify whether you use a dedicated OpenStack Keystone to authenticate Contrail Service Orchestration operations.
- y—Specifies use of a dedicated OpenStack Keystone. Select this option for a distributed deployment, a combined deployment, or a centralized deployment that uses a dedicated OpenStack Keystone.
- n—Specifies use of the Contrail OpenStack Keystone for a centralized deployment.
- Specify the OpenStack Keystone password.
- Specify the OpenStack Keystone service token.
- Specify whether you use an external Contrail Analytics server:
- For a distributed or combined deployment, you use an external Contrail Analytics server.
- For a centralized deployment, you usually use Contrail Analytics on the Contrail Controller in the Contrail Cloud Platform, rather than an external server.
- Specify the static route to the NFX250 OAM network for a distributed deployment that uses the NFX250 device, or specify any static route for other deployments.
The script requires this input, but does not use the value for deployments without an NFX250 CPE device.
- Starting with the central region, specify the following information for each server and VM in the deployment.
The script prompts you for each set of information that you must enter.
- Management IP address with CIDR
For example: 192.0.2.1/24
- FQDN of each host
For example: central-infravm.example.net
- Password for the root user
- Configure settings for each region in the deployment:
- Specify the IP address and prefix of the Kubernetes overlay network that the microservices use.
- Specify the fully qualified domain name of the host for the load balancer:
- For a non-HA deployment, the IP address or FQDN of the VM that hosts the HAproxy.
- For an HA deployment, the virtual IP address that you configure for the HAproxy.
- Specify a unique virtual router identifier in the range 0–255 for the HAproxy VM in each region.
Note: Use a different number for this setting in each region.
- Specify the number of instances of microservices:
- For non-HA installations, specify 1.
- For HA installations, specify 2.
The tool uses the input data to configure each region and indicates when the configuration stage is complete.
- When all regions are configured, the tool displays the deployment commands.
root@host:~/# run "DEPLOYMENT_ENV=central ./deploy_infra_services.sh"
root@host:~/# run "DEPLOYMENT_ENV=regional ./deploy_infra_services.sh"
root@host:~/# run "DEPLOYMENT_ENV=central ./deploy_micro_services.sh"
root@host:~/# run "DEPLOYMENT_ENV=regional ./deploy_micro_services.sh"
- Press Enter to load the services data.
root@host:~/# ./load_services_data.sh
The tool loads the data and completes the installation.
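Several of the prompts above constrain their answers: region names must be distinct, NTP servers are given as FQDNs, and the virtual router identifier must be unique per region and within 0–255. A small pre-flight check along these lines can catch typos before you run the setup script; the function, error messages, and FQDN pattern below are our own sketch, not part of the CSO tooling.

```python
import re

# Rough FQDN shape: dot-separated labels ending in an alphabetic TLD.
FQDN_RE = re.compile(r"^(?=.{1,253}$)([a-z0-9-]{1,63}\.)+[a-z]{2,}$", re.IGNORECASE)

def check_inputs(regions, ntp_servers, vrids):
    """Validate setup-script answers; returns a list of error strings."""
    errors = []
    if len(set(regions)) != len(regions):
        errors.append("region names must be unique")
    for fqdn in ntp_servers:
        if not FQDN_RE.match(fqdn):
            errors.append(f"not an FQDN: {fqdn}")
    # Each region needs its own virtual router identifier in 0-255.
    if len(set(vrids.values())) != len(vrids):
        errors.append("VRIDs must differ between regions")
    for region, vrid in vrids.items():
        if not 0 <= vrid <= 255:
            errors.append(f"VRID out of range for {region}: {vrid}")
    return errors

print(check_inputs(["central", "regional"], ["ntp.example.net"],
                   {"central": 10, "regional": 11}))  # []
```

An empty list means the answers pass these basic checks; anything else lists what to correct before rerunning setup_assist.sh.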
Deploying Infrastructure Services
To deploy infrastructure services:
- Log in as root to the host on which you deployed the installer.
- Deploy the central infrastructure services and wait for the process to complete.
root@host:~/# run "DEPLOYMENT_ENV=central ./deploy_infra_services.sh"
- Deploy the regional infrastructure services and wait for the process to complete.
root@host:~/# run "DEPLOYMENT_ENV=regional ./deploy_infra_services.sh"
Deploying Microservices
To deploy the microservices:
- Log in as root to the host on which you deployed the installer.
- Deploy the central microservices and wait for the process to complete:
root@host:~/# run "DEPLOYMENT_ENV=central ./deploy_micro_services.sh"
- Deploy the regional microservices and wait for the process to complete:
root@host:~/# run "DEPLOYMENT_ENV=regional ./deploy_micro_services.sh"
- Delete existing pods and create new pods.
root@host:~/# run "DEPLOYMENT_ENV=central ./deploy_micro_services.sh -restart_containers"
root@host:~/# run "DEPLOYMENT_ENV=regional ./deploy_micro_services.sh -restart_containers"
- Reset the Kubernetes cluster and clear the database.
root@host:~/# run "DEPLOYMENT_ENV=central ./deploy_micro_services.sh -reset_cluster"
root@host:~/# run "DEPLOYMENT_ENV=regional ./deploy_micro_services.sh -reset_cluster"
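The deployment commands in the last two sections always run in the same order: central infrastructure, regional infrastructure, central microservices, regional microservices. If you script them, a sketch like the following keeps that order straight. The `run` wrapper and script names are taken verbatim from the commands above; the `runner` parameter is our own hypothetical hook so the sequencing can be exercised without a real installer host.

```python
import subprocess

# The environment names and script names come from the commands shown in
# this guide; wrapping them in functions is our own convenience.
def deploy(script, env, runner=subprocess.run):
    command = f'run "DEPLOYMENT_ENV={env} ./{script}"'
    # shell=True because `run` here is the shell wrapper shown in the guide.
    return runner(command, shell=True, check=True)

def deploy_all(runner=subprocess.run):
    """Deploy infrastructure services first, then microservices,
    central before regional in each case."""
    for script in ("deploy_infra_services.sh", "deploy_micro_services.sh"):
        for env in ("central", "regional"):
            deploy(script, env, runner)
```

With `check=True`, a nonzero exit status from any step raises an exception, so a failed central deployment stops the sequence before the regional one starts.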
Configuring Access to Administration Portal for a Centralized Deployment That Uses Contrail OpenStack for Authentication
The default installation uses a dedicated OpenStack instance to authenticate Contrail Service Orchestration operations. You can also use Contrail OpenStack on the Contrail Cloud Platform to authenticate Contrail Service Orchestration operations in a centralized deployment. If you do so, you must configure access for the default user to Administration Portal.
To configure access to Administration Portal:
- Log in as root to one of the Contrail Controller nodes.
- Set the source path, using the path that you configured during the installation. For example:
root@host:~/# source /etc/contrail/keystonerc
- Run the following commands:
Note: In the example below, contrailadmin_password represents the actual password that you specified for the setting admin_password in the keystone section of the roles.conf file.
root@host:~/# keystone user-create --name="cspadmin" --pass="contrailadmin_password" --tenant="admin"
root@host:~/# keystone tenant-create --name="default-project" --description="Default Tenant"
root@host:~/# keystone user-role-add --tenant default-project --user admin --role admin
root@host:~/# keystone user-role-add --tenant default-project --user cspadmin --role admin
Checking the Status of the Microservices
To check the status of the microservices:
- Log in as root to the VM or server that hosts the central microservices.
- Run the following command.
root@host:~/# kubectl get pods | grep -v Running
If the result is an empty display, as shown below, the microservices are running and you can proceed to the next section.
root@host:~/# kubectl get pods | grep -v Running
NAME READY STATUS RESTARTS AGE
If the display contains an item with the status CrashLoopBackOff or Terminating, a microservice is not running.
- Delete and restart the pod.
root@host:~/# kubectl get pods
NAME                            READY  STATUS            RESTARTS  AGE
csp.ams-3909406435-4yb0l        1/1    CrashLoopBackOff  0         8m
csp.nso-core-3445362165-s55x8   0/1    Running           0         8m
In each row, the pod name begins with the name of its microservice; for example, the pod csp.nso-core-3445362165-s55x8 belongs to the microservice csp.nso-core.
root@host:~/# kubectl delete pods -l microservice=csp.nso-core
- Wait a couple of minutes, and then check the status of the microservice and its pod.
root@host:~/# kubectl get pods
NAME                             READY  STATUS   RESTARTS  AGE
csp.ams-4890899323-3dfd02        1/1    Running  0         1m
csp.nso-core-09009278633-fr234f  0/1    Running  0         1m
- Repeat Steps 1 through 4 for the regional microservices.
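The check in this section (filtering `kubectl get pods` for pods that are not Running and then deleting by microservice label) can be partly automated. The sketch below parses captured `kubectl` text output; the sample rows mirror the display above, and the rule that a pod name begins with its microservice name is inferred from that example, not from any CSO specification.

```python
# Sample text mirroring the `kubectl get pods` display shown above.
SAMPLE = """\
NAME                            READY  STATUS            RESTARTS  AGE
csp.ams-3909406435-4yb0l        1/1    CrashLoopBackOff  0         8m
csp.nso-core-3445362165-s55x8   0/1    Running           0         8m
"""

def stuck_pods(kubectl_output):
    """Return names of pods whose STATUS indicates they are not running."""
    bad = []
    for line in kubectl_output.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) >= 3 and fields[2] in ("CrashLoopBackOff", "Terminating"):
            bad.append(fields[0])
    return bad

def microservice(pod_name):
    """Pod names look like <microservice>-<replica hash>-<pod id>;
    strip the last two generated suffixes to get the label value."""
    return pod_name.rsplit("-", 2)[0]

print(stuck_pods(SAMPLE))                              # ['csp.ams-3909406435-4yb0l']
print(microservice("csp.nso-core-3445362165-s55x8"))   # csp.nso-core
```

The recovered microservice name is what you would pass to the delete command shown above, for example `kubectl delete pods -l microservice=csp.nso-core`.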
Loading Data
After you check that the microservices are running, you must load data to import plug-ins and data design tools.
To load data:
- Ensure that all the microservices are up and running.
- (Optional) Specify the value of the regional subnet in the file /micro_services/data/inputs.yaml on the regional microservices host.
By default, the subnet address is the management address of the regional microservices host that you specify in the topology.conf file.
- Run the cd command to change to the directory that contains the installation scripts, and then execute the ./load_services_data.sh command.
Note: You must not execute the load_services_data.sh script more than once.