Create and Configure Servers and Clusters
Follow these steps to create and configure servers and clusters.
- Log in to Contrail Command (https://contrail-command-server-ip-address:9091). In this example, we use the workstation server to connect to the Command server over the fabric management subnet using the IP address 10.1.1.1.
Use the username and password specified in the keystone stanza of the command_servers.yaml file.
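For reference, the keystone admin credentials typically live in a stanza like the one sketched below. This is a representative fragment only; the exact structure and values in your command_servers.yaml may differ.

```yaml
# Hypothetical fragment of command_servers.yaml; only the placement of the
# keystone admin password matters here. All values are placeholders.
command_servers:
  server1:
    ip: 10.1.1.1
    contrail_config:
      keystone:
        assignment:
          data:
            users:
              admin:
                password: <admin-password>   # used to log in to Contrail Command
```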
Navigate to Infrastructure > Servers to define the root password for the servers and compute nodes that make up the cluster.
From the Credentials tab, click Create.
Enter a name for the credential.
Enter the SSH password for the servers (cluster/insights).
Enter the SSH password again to confirm.
- Navigate to Infrastructure > Servers and click Add to add servers.
Select Detailed and enter the required information as described in Table 1.
Table 1: Create Server Page
- Mode: Select a mode. The mode determines how much information you provide when adding the server. Use Express mode to provide the minimum information, Detailed mode if you can provide more information such as disk partitions, or Bulk Import mode if you want to add servers by uploading a .csv file.
- Hostname: Enter the host name of the server.
- Management IP: Enter the IP address of the management interface of the physical server or VM.
- Interface: Enter the name of the management network-facing interface on the server (or VM).
- Server Credentials: Select the credentials for the server (or VM).
- Click Add under Network Interfaces and add the information about the interfaces on each server.
In this example, we are adding the interfaces for the Controller. Interface ens192 is used for external management access, ens224 for fabric management, and ens256 is connected to the fabric to support overlay peering and compute node services like DHCP.
- Repeat this process to add each server required for the Contrail Networking installation (Contrail Cluster, Insights, Insights Flows, and both Compute nodes), and click Create to save your work. Refer to Topology for the topology diagram and the IP address assignments needed to define all servers.
The created servers are listed under the Servers tab.
- Click Next to proceed with cluster creation.
- Enter the information described in Table 2. Expand the Contrail Configuration section and click +Add to add the desired keys and values.
Table 2: Create Cluster
- Choose Provisioning Manager: Specify whether you are setting up Contrail Cloud or Contrail Networking.
- Cluster Name: Enter a name for the cluster.
- Container Registry: Enter the path to the container registry, hub.juniper.net/contrail, where you can obtain the Contrail Networking software image.
- Container Registry Username: Enter the container registry user name. If you do not have the registry username and password, send an email to email@example.com to receive them.
- Container Registry Password: Enter the container registry password.
- Contrail Version: Enter the version of Contrail Networking that you are installing. The specified version of the image is obtained from the repository. In this example, the version is 2008.
- Domain: Enter the domain name for the cluster.
- NTP Server: Enter the IP address or name of the NTP server. This server should be reachable from either the external or fabric management subnet.
- Default vRouter Gateway: Enter the IP address of the default vRouter gateway. This is the IP address of the interface (physical or IRB) on the leaf device that is connected to the server's fabric-facing interface.
- Encapsulation Priority: Select VLAN,MPLSoUDP,MPLSoGRE from the Encapsulation Priority drop-down list.
- Fabric Management: Be sure to enable the fabric management option to support greenfield or brownfield fabric onboarding.
Click +Add to enter the following keys and values:
Key: CONTROL_NODES, Value: IP address of the control node
Key: PHYSICAL_INTERFACE, Value: interface name
Key: TSN_NODES, Value: IP Address of the Contrail service node. In this example, services are provided by the control node, so the controller’s IP address is specified.
Key: API__DEFAULTS__enable_latency_stats_log, Value: TRUE
Key: API__DEFAULTS__enable_api_stats_log, Value: TRUE
The API defaults keys are optional. They enable the Config Nodes Response Size and Time graph in the Infrastructure > Cluster screen.
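Taken together, the Contrail Configuration entries added above would look like the following. The IP addresses below are placeholders; substitute the control node address from your own topology.

```
CONTROL_NODES: <control-node-ip>                 # placeholder; from your topology
PHYSICAL_INTERFACE: ens256                       # fabric-facing interface in this example
TSN_NODES: <control-node-ip>                     # same as CONTROL_NODES in this example
API__DEFAULTS__enable_latency_stats_log: TRUE    # optional
API__DEFAULTS__enable_api_stats_log: TRUE        # optional
```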
- Click the Next button to proceed to control node assignment.
- Click the arrow next to the Cluster server to assign the Cluster server as the Control Node and click the Next button to proceed to the Orchestrator Nodes page.
- Click the arrow next to the Cluster server to assign the
Cluster server as an OpenStack node. Select the Show Advanced check box.
If your compute node is a VM that runs on an ESXI hypervisor, then any VMs you provision on the compute VM will be nested (a VM running on a VM). To support a nested VM in an ESXI environment you must add the nova-related configuration shown in the Customize configuration section in Figure 2 to support the required QEMU hypervisor.
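As a sketch of the kind of nova-related customization meant here (assuming kolla-ansible's standard globals; confirm the exact keys against the values shown in Figure 2), forcing the QEMU hypervisor for nested VMs typically comes down to a single Kolla global:

```
# Assumption: standard kolla-ansible globals key. Forces software
# emulation (QEMU) instead of KVM, which nested guests on ESXi require.
nova_compute_virt_type: qemu
```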
- Click +Add under Kolla Globals and specify the keys and values listed in Table 3. Click the Next button to proceed to the compute node screen when done.
Table 3: Add Kolla Globals
Keys and Values
Enter the following keys and values:
- Assign both compute nodes along with their vRouter gateways. The specified gateway IP address should match the address assigned to the leaf device interface (physical or IRB) that the compute node servers are connected to. Figure 3 shows the first compute node, which is attached to leaf 1, being assigned. Repeat this step for the second compute node, being sure to specify the correct vRouter gateway IP for its attachment to leaf 2, i.e., 10.1.13.254.
Table 4: Assigned Compute Nodes
- Default vRouter Gateway: Enter the IP address of the default vRouter gateway. This should be 10.1.12.254 and 10.1.13.254 for compute nodes 1 and 2, respectively.
- When done, click the Next button to proceed to the service node assignment screen. For this example, you assign the service node functionality to the controller node. When done, click the Next button to proceed to the Contrail Insights configuration screen.
- Assign the Insights server as an AppFormix_Platform_Node and the Contrail Insights Flows server as an appformix_bare_host_node.
- Click the Next button to configure the Insights Flows node. Assign this role to the Flow server node, and specify an unused IP address from the fabric management subnet to be used as the Virtual IP (VIP) of the flow server.
- Click the Next button to proceed to the cluster overview screen, which summarizes the information provided in the cluster provisioning wizard.
- Verify that the information shown in the Cluster
overview page is accurate and then click Provision. The cluster provisioning screen is displayed.
You can use the commands highlighted in the provisioning screen on the Contrail Command server to get summary information about the provisioning process. For a more detailed view, you can display the logs from the ansible-player container on the Contrail Command server. To do this, you must first find the name assigned to the ansible-player container by running the following command:
[root@Contrail_Command ~]# docker ps
CONTAINER ID   IMAGE                                                                       COMMAND       CREATED       STATUS       PORTS   NAMES
910832e16602   hub.juniper.net/contrail-nightly/contrail-kolla-ansible-deployer:2008.121   "/bin/bash"   3 weeks ago   Up 3 weeks           ansible-player_20200921164826
...
Use the container name obtained from the docker ps output to get detailed Ansible playbook logs as shown.
[root@Contrail_Command ~]# docker exec -it ansible-player_20200921164826 tail -f /var/log/ansible.log
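If you prefer not to copy the container name by hand, it can be extracted from the `docker ps` output with a grep pattern. The sketch below runs against a captured sample line so it is self-contained; on the Command server you would pipe the live `docker ps` output through the same pattern instead.

```shell
# Extract the ansible-player container name from docker ps output.
# sample_line is a captured example; on a real Command server, replace
# the printf with the actual `docker ps` command.
sample_line='910832e16602 hub.juniper.net/contrail-nightly/contrail-kolla-ansible-deployer:2008.121 "/bin/bash" 3 weeks ago Up 3 weeks ansible-player_20200921164826'
player=$(printf '%s\n' "$sample_line" | grep -o 'ansible-player_[0-9]*')
echo "$player"   # ansible-player_20200921164826
# With the name in hand, tail the playbook log:
# docker exec -it "$player" tail -f /var/log/ansible.log
```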
If there is an error in the provisioning process due to time-outs, registry download limits, or other network related issues, you can click the Reprovision button to retry the installation. If this doesn't resolve the issue, you might need to delete the cluster and start over with the cluster provisioning wizard.
- When the installation process completes without error, you can click the Proceed to login button to log in to the new cluster.
- Run the following command on the cluster node to verify
that the installation is successful.
[root@ix-cn-cluster-01 ~]# contrail-status
Pod        Service    Original Name                    Original Version  State    Id            Status
           redis      contrail-external-redis          2008.121          running  ae0a74e89b50  Up 2 hours
           rsyslogd                                    2008.121          running  bc2edfb09e68  Up 2 hours
analytics  api        contrail-analytics-api           2008.121          running  33b29eb00c4f  Up About an hour
...
webui      web        contrail-controller-webui-web    2008.121          running  ba86a599cc6f  Up 2 hours

WARNING: container with original name '' have Pod or Service empty. Pod: '' / Service: 'rsyslogd'. Please pass NODE_TYPE with pod name to container's env

vrouter kernel module is PRESENT
== Contrail control ==
control: active
nodemgr: active
named: active
dns: active

== Contrail analytics-alarm ==
nodemgr: active
kafka: active
alarm-gen: active

== Contrail database ==
nodemgr: active
query-engine: active
cassandra: active

== Contrail analytics ==
nodemgr: active
api: active
collector: active

== Contrail config-database ==
nodemgr: active
zookeeper: active
rabbitmq: active
cassandra: active

== Contrail webui ==
web: active
job: active

== Contrail vrouter ==
nodemgr: active
agent: active

== Contrail analytics-snmp ==
snmp-collector: active
nodemgr: active
topology: active

== Contrail config ==
svc-monitor: active
nodemgr: active
device-manager: active
api: active
schema: active
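The contrail-status output is long, so it can help to filter out the healthy lines and show only services that are not active. The sketch below operates on a small captured sample so it is self-contained; on the cluster node you would pipe `contrail-status` itself through the same grep.

```shell
# Print only services that are NOT reporting "active".
# sample is captured text; on a real cluster node use:
#   contrail-status | grep -v ': active'
sample='control: active
nodemgr: initializing
named: active'
not_active=$(printf '%s\n' "$sample" | grep -v ': active')
echo "$not_active"   # nodemgr: initializing
```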
- You can also run the contrail-status command
on both compute nodes.
[root@ix-cn-cmpt-01 ~]# contrail-status
Pod      Service      Original Name           Original Version  State    Id            Status
         rsyslogd                             2008.121          running  b4102195e780  Up 2 hours
vrouter  agent        contrail-vrouter-agent  2008.121          running  1cb2c63bdf8b  Up 2 hours
vrouter  nodemgr      contrail-nodemgr        2008.121          running  2193de447863  Up 2 hours
vrouter  provisioner  contrail-provisioner    2008.121          running  cf264a6b3721  Up 2 hours

WARNING: container with original name '' have Pod or Service empty. Pod: '' / Service: 'rsyslogd'. Please pass NODE_TYPE with pod name to container's env

vrouter kernel module is PRESENT
== Contrail vrouter ==
nodemgr: active
agent: active