Create and Configure Servers and Clusters


Follow these steps to create and configure servers and clusters.

  1. Log in to Contrail Command (https://contrail-command-server-ip-address:9091). In this example, we use the workstation server to connect to the Contrail Command server over the fabric management subnet at IP address 10.1.1.1.

    Use the username and password specified in the command_servers.yaml file in the keystone stanza.
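
    For reference, these credentials come from a stanza along these lines in command_servers.yaml. This is a trimmed, illustrative sketch; the exact layout and values depend on how your Contrail Command server was deployed, so treat the field names as an assumption to verify against your own file:

      command_servers:
        server1:
          ip: 10.1.1.1
          connection: ssh
          contrail_config:
            keystone:
              assignment:
                data:
                  users:
                    admin:
                      password: <admin-password>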

    Navigate to Infrastructure > Servers to define the root password for the servers and compute nodes that make up the cluster.

    1. From the Credentials tab, click Create.

    2. Enter a name for the credential.

    3. Enter the SSH password for the servers (cluster/insights).

    4. Enter the SSH password again to confirm.

    5. Click Create.

  2. Navigate to Infrastructure > Servers and click Add to add servers.

    Select Detailed and enter the required information as described in Table 1.

    Table 1: Create Server Page

    Choose Mode: Select a mode. The mode determines how much information you provide when adding the server. Use Express mode to provide the minimum information, Detailed mode to provide additional information such as disk partitions, or Bulk Import mode to add servers by uploading a .csv file.

    Hostname: Enter the host name of the server.

    Management IP: Enter the IP address of the management interface of the physical server or VM.

    Management Interface: Enter the name of the management network-facing interface on the server (or VM).

    Credentials: Select the credentials for the server (or VM).

  3. Click Add under Network Interfaces and enter the information for the interfaces on each server.

    In this example, we are adding the interfaces for the Controller. Interface ens192 is used for external management access, ens224 for fabric management, and ens256 is connected to the fabric to support overlay peering and compute node services like DHCP.

    Figure 1: Add Network Interfaces Information
  4. Repeat this process to add each server required for the Contrail Networking installation (Contrail Cluster, Insights, Insights Flows, and both Compute nodes), and click Create to save your work. Refer to the Topology section for the topology diagram and the IP address assignments needed to define all servers.

    The created servers are listed under the Servers tab.

  5. Click Next to proceed with cluster creation.
  6. Enter the information described in Table 2. Expand the Contrail Configuration section and click +Add to add the desired keys and values.

    Table 2: Create Cluster

    Choose Provisioning Manager: Specify whether you are setting up Contrail Cloud or Contrail Networking.

    Cluster Name: Enter a name for the cluster.

    Container Registry: Enter the path to the container registry, hub.juniper.net/contrail, where you can obtain the Contrail Networking software image.

    Container Registry Username: Enter the container registry username. If you do not have registry credentials, send an email to contrail-registry@juniper.net to request them.

    Container Registry Password: Enter the container registry password.

    Contrail Version: Enter the version of Contrail Networking that you are installing. The specified version of the image is obtained from the repository. For this example, the version is 2008.

    Domain Suffix: Enter the domain name for the cluster.

    NTP Server: Enter the IP address or name of the NTP server. This server should be reachable from either the external or fabric management subnets.

    Default vRouter Gateway: Enter the IP address of the default vRouter gateway. This is the IP address of the interface (physical or IRB) on the leaf device that is connected to the server’s fabric-facing interface.

    Encapsulation Priority: Select VLAN,MPLSoUDP,MPLSoGRE from the Encapsulation Priority drop-down list.

    Fabric Management: Enable the fabric management option to support greenfield or brownfield fabric onboarding.

    Contrail Configuration: Click +Add to enter the following keys and values:

    • Key: CONTROL_NODES, Value: IP address of the control node

    • Key: PHYSICAL_INTERFACE, Value: interface name

    • Key: TSN_NODES, Value: IP address of the Contrail service node. In this example, services are provided by the control node, so the controller’s IP address is specified.

    • Key: API__DEFAULTS__enable_latency_stats_log, Value: TRUE

    • Key: API__DEFAULTS__enable_api_stats_log, Value: TRUE

    Note: The API defaults keys are optional. They enable the Config Nodes Response Size and Time graph on the Infrastructure > Cluster page.
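
    Behind the scenes, Contrail Command renders these keys into the contrail_configuration section of the instances.yml file consumed by the Ansible deployer. A rough sketch follows; the control node address is a placeholder, and ens256 is assumed from this example's controller interface layout:

      contrail_configuration:
        CONTRAIL_VERSION: "2008"
        CONTROL_NODES: <control-node-ip>
        TSN_NODES: <control-node-ip>
        PHYSICAL_INTERFACE: ens256
        API__DEFAULTS__enable_latency_stats_log: "TRUE"
        API__DEFAULTS__enable_api_stats_log: "TRUE"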

  7. Click the Next button to proceed to control node assignment.
  8. Click the arrow next to the Cluster server to assign the Cluster server as the Control Node and click the Next button to proceed to the Orchestrator Nodes page.
  9. Click the arrow next to the Cluster server to assign the Cluster server as an OpenStack node. Select the Show Advanced check box.

    If your compute node is a VM that runs on an ESXi hypervisor, then any VMs you provision on the compute VM will be nested (a VM running on a VM). To support nested VMs in an ESXi environment, you must add the nova-related configuration shown in the Customize configuration section in Figure 2 to support the required QEMU hypervisor.

    Figure 2: Configuration for ESXi Environment
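
    The customization in Figure 2 amounts to forcing Nova to use the QEMU virtualization driver rather than KVM. A hedged sketch of that kind of kolla_config customization follows, assuming the standard nova.conf override mechanism; confirm the exact keys against Figure 2 and your deployer version:

      kolla_config:
        customize:
          nova.conf: |
            [libvirt]
            virt_type = qemu
            cpu_mode = none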
  10. Click +Add under Kolla Globals and specify the keys and values listed in Table 3.
  11. After specifying the Kolla keys and values, click the Next button to proceed to the compute node screen.

    Table 3: Add Kolla Globals

    Kolla Globals Keys and Values: Enter the following keys and values:

    • enable_haproxy: no

    • enable_ironic: no

    • enable_swift: yes

    • swift_disk_partition_size: 20GB
  12. Assign both compute nodes along with their vRouter gateways. The specified gateway IP address should match the address assigned to the leaf device interface (physical or IRB) that the compute node servers are connected to. Figure 3 shows the first compute node, which is attached to leaf 1, being assigned. Repeat this step for the second compute node, being sure to specify the correct vRouter gateway IP for its attachment to leaf 2 (10.1.13.254).
    Figure 3: Compute Nodes and Gateways

    Table 4: Assign Compute Nodes

    Default vRouter Gateway: Enter the IP address of the default vRouter gateway. This should be 10.1.12.254 and 10.1.13.254 for compute nodes 1 and 2, respectively.

  13. Click the Next button to proceed to the service node assignment screen. For this example, assign the service node functionality to the controller node, and then click the Next button to proceed to the Contrail Insights configuration screen.
  14. Assign the Insights server as an AppFormix_Platform_Node and the Contrail Insights Flows server as an appformix_bare_host_node.
  15. Click the Next button to configure the Insights Flows node. Assign this role to the Flow server node, and specify an unused IP address from the fabric management subnet to be used as the Virtual IP (VIP) of the flow server.
  16. Click the Next button to proceed to the cluster overview screen, which summarizes the information provided in the cluster provisioning wizard.
  17. Verify that the information shown in the Cluster overview page is accurate and then click Provision. The cluster provisioning screen is displayed.

    You can use the commands highlighted in the provisioning screen on the Contrail Command server to get summary information about the provisioning process. For a more detailed view, you can display the logs from the container running the Ansible deployer on the Contrail Command server. To do this, first find the container ID assigned to the deployer by running a command along the following lines:
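
      # Assumes the deployer runs as a Docker container with "deployer" in its name;
      # adjust the grep pattern to match your deployment.
      docker ps -a | grep deployer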

    Use the container ID obtained from the docker ps command to follow the detailed Ansible playbook logs, as shown below.
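
      # Follow the playbook output; <container-id> is the ID reported by docker ps.
      docker logs -f <container-id>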

    Note

    If there is an error in the provisioning process due to time-outs, registry download limits, or other network related issues, you can click the Reprovision button to retry the installation. If this doesn't resolve the issue, you might need to delete the cluster and start over with the cluster provisioning wizard.

  18. When the installation process completes without error, you can click the Proceed to login button to log in to the new cluster.
  19. Run the following command on the cluster node to verify that the installation is successful.
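
    The command here is contrail-status, which summarizes the state of the Contrail services on the node; healthy services should report active:

      # Lists each Contrail service on this node along with its current state.
      contrail-status
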
  20. You can also run the contrail-status command on both compute nodes.