
You can use Contrail Networking, a full-featured software-defined networking (SDN) solution, to manage and configure your underlay and overlay networks all from a single user interface, Contrail Command. Contrail Command provides a central dashboard that makes it easy to configure networks, administer network policies, and create service chains for services such as load balancing, firewall, and NAT.

First, we’ll show you how to install Contrail Command and set up a Contrail Networking cluster for Release 2005.1. Then, we’ll show you how to set up the underlay network on a greenfield fabric, and create the overlay networks that interconnect the compute endpoints. The compute endpoints are the bare-metal servers or virtual machine (VM) workloads that the compute administrator configures and attaches to the leaf or top-of-rack switches.

Install and Deploy Workflow

To install Contrail Command, first you download and run the Contrail Command deployer. This deploys Contrail Command as a set of Docker containers. Then, you use Contrail Command to set up your Contrail Networking cluster. A Contrail Networking cluster is the set of controllers and supporting applications that manage the underlay and overlay networks.

Here's the example Contrail Networking cluster you'll be setting up:

Note

The installation instructions that we provide assume that you’re sitting at the network administrator’s station. We refer to this station as your local machine.

To minimize the number of servers used in this example, you’ll set up a cluster of collocated servers, which is different from a typical deployment. In a typical deployment, the cluster components are implemented as discrete nodes that can be distributed over multiple leaf switches to provide higher performance and availability.

Here’s a description of the networks shown:

• Fabric underlay: The regular IP spine-and-leaf network made up of switches and routers.

• Management: The out-of-band management network that the Contrail Networking controller uses to discover and configure the switches in the fabric underlay network.

• Intranet: A general network that provides access for the network administrator to the Contrail Networking cluster, and for the cluster to the Internet. This can be the corporate intranet, a lab network connected to the corporate intranet, or any other scheme that provides external management connectivity to the cluster.

The Contrail Networking cluster consists of a number of servers or VMs providing the Contrail Networking and Contrail Insights functionality. You can create a cluster with any number of servers or VMs, depending on your needs.

Here’s an example of a cluster made up of four servers followed by a description of the components running on the servers.

Contrail Command

• Description: Contains the Contrail Command application, which provides the UI that translates user intents into internal API calls to the other components.

• Connectivity: Connects to the external network on eth0, reachable from the network administrator’s browser. Connects to the fabric management network on eth1 to configure the other Contrail Networking components.

Contrail Cluster

• Description: Contains the collection of containers that provide the main Contrail Networking functionality, including the control, orchestration, service, and compute node roles. In this example, we’re placing most of the functionality into one server to reduce the number of servers you require.

• Connectivity: Connects to the external network on eth0. Connects to the fabric management network on eth1 to configure the switches in the fabric. Connects to the fabric underlay on eth2; this is the main interface to the fabric, used to exchange routes and to provide and receive services.

Contrail Insights

• Description: Contains the Contrail Insights application (formerly called AppFormix), an optional software application that allows you to monitor and troubleshoot VMs, containers, and physical switches and routers.

• Connectivity: Connects to the external network on eth0. Connects to the fabric management network on eth1 to communicate with the other Contrail Networking components.

Contrail Insights Flows

• Description: Contains the Contrail Insights Flows application, an optional software application that enables you to view sFlow-derived telemetry for devices in the fabric.

• Connectivity: Connects to the external network on eth0. Connects to the fabric management network on eth1 for out-of-band flow monitoring. Note: If you want to support in-band flow monitoring, connect this server directly to the fabric underlay.

Note: All servers in this example have at least four cores, 64-GB memory, 300-GB hard drive with at least 100 GB in the “/” partition, and Internet access for downloading software packages.

This component breakdown represents just one possible example. You can create a cluster that has a different number of servers and a different distribution of components and roles.

We refer to this network configuration and this component breakdown in various examples later. All IP addresses shown are /24.

Before You Begin

In preparation for installation, look through the Contrail Networking Supported Platforms List for the compatible OS and prerequisite software versions for the Contrail Networking release you want to set up, and the README Access to Contrail Registry 20XX file for the container tag to use.

Here are the software versions and container tag that we use in this example:

• Contrail Networking: 2005.1

• Container tag: 2005.1.66

• OpenStack: Queens

• OS: CentOS 7.8 with Linux kernel version 3.10.0-1127

• Docker: Docker Community Edition 18.03.1

Additionally, you’ll need access to the Juniper Networks container registry (hub.juniper.net). If you don’t have access to the registry, e-mail contrail-registry@juniper.net to get your username and password.

Finally:

  1. Make all the physical connections:
    1. Connect the leaf and spine switches together in the fabric underlay.
    2. Connect the management interface of each switch to the management network.
    3. Connect the Contrail Networking servers to the management network and to the intranet.
    4. Connect the Contrail Cluster server to a leaf switch in the fabric underlay.
  2. Prepare the servers:
    1. Install a fresh OS on each server. If you’re installing CentOS, you can select the minimal CentOS install, which installs much faster than the full CentOS install.
    2. Assign static IP addresses to each interface, ensuring that the IP addresses are within the proper subnets. You can assign the IP addresses during OS installation (easiest), or after. If you assign IP addresses after the OS installation, you’ll need to make the changes directly in /etc/sysconfig/network-scripts/ifcfg-xxx.
    3. Assign meaningful non-FQDN hostnames to each server (for example, contrail-command, contrail-cluster, contrail-insights, contrail-insights-flows). You can assign the hostnames during OS installation (easiest), or after. If you assign hostnames after the OS installation, you’ll need to make the changes directly in /etc/hosts and use the hostnamectl set-hostname command. If you add an entry to the /etc/hosts file, use the IP address of the management interface (that is, the IP address for eth1 in this example).
    4. Create a root user and set a root password as part of OS installation.

    For more details on how to prepare the servers, see How to Install Contrail Command and Provision Your Contrail Cluster.

  3. Prepare the switches:
    1. Attach a terminal or laptop to the serial console port of the switch and log in to the CLI. You’ll need console access in this phase because the next step resets the switches, resulting in loss of management port configuration. Most laptops no longer have serial ports, so if you’re using a laptop, you’ll likely need an RJ-45 to DB-9 adapter and a USB to DB-9 adapter.
    2. Reset the switch to zero (request system zeroize).
    3. Repeat Step a and Step b for all switches in the fabric underlay.
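As a sketch of Steps 2b and 2c, here’s how you might assign the static management IP address and hostname on the Contrail Cluster server after OS installation. The interface file keys are the standard CentOS 7 network-script fields, and the addresses follow this example’s addressing; adjust the values for each server.

```shell
# Assumption: CentOS 7-style network scripts; addresses follow this example.
# Static IP for the management interface (eth1) on the contrail-cluster server.
cat > /etc/sysconfig/network-scripts/ifcfg-eth1 <<'EOF'
DEVICE=eth1
BOOTPROTO=static
ONBOOT=yes
IPADDR=10.1.1.102
PREFIX=24
EOF
systemctl restart network

# Non-FQDN hostname, plus an /etc/hosts entry using the management (eth1) IP.
hostnamectl set-hostname contrail-cluster
echo "10.1.1.102 contrail-cluster" >> /etc/hosts
```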

Install Contrail Command

To install Contrail Command, first you'll install the prerequisite software packages on the Contrail Command server. Next, you'll create a YAML file containing the Contrail Command server information. Finally, you'll download and run the Contrail Command deployer image.

The server or VM where you run the Contrail Command deployer image can be different from the server or VM where you install Contrail Command. This example runs the Contrail Command deployer on the same server that later runs Contrail Command.

  1. From your local machine, log in to the Contrail Command server or VM using SSH.
  2. Install and start Docker.
    1. Install the prerequisite software used by YUM and Docker.
    2. Add the Docker Community Edition repository to the list of YUM repositories for this machine.
    3. Install the version of Docker supported by the release of Contrail you are installing. We install Docker Community Edition 18.03.1 here as an example.
    4. Start Docker.
    5. Enable Docker so that it automatically restarts when the server or VM reboots.
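On CentOS 7, Steps a through e above typically look like the following. The repository URL and versioned package name are the standard values for Docker Community Edition 18.03.1; verify them against the Supported Platforms List for your release.

```shell
# a. Prerequisite packages used by YUM and Docker
yum install -y yum-utils device-mapper-persistent-data lvm2

# b. Add the Docker Community Edition repository
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

# c. Install the supported Docker version (18.03.1 in this example)
yum install -y docker-ce-18.03.1.ce

# d. Start Docker
systemctl start docker

# e. Restart Docker automatically when the server or VM reboots
systemctl enable docker
```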
  3. Retrieve the Contrail Command deployer image from the Juniper Networks container registry.
    1. Log in to the registry.

      Enter your hub.juniper.net credentials when prompted.

    2. Pull the Contrail Command deployer image for the release you are installing. For example:
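For example, using the 2005.1.66 container tag from the table above:

```shell
# 3a. Log in to the Juniper Networks container registry
#     (enter your hub.juniper.net credentials when prompted)
docker login hub.juniper.net

# 3b. Pull the Contrail Command deployer image for this release
docker pull hub.juniper.net/contrail/contrail-command-deployer:2005.1.66
```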
  4. Create the command_servers.yml configuration file on the Contrail Command server. This file provides the following information to the Contrail Command deployer:
    • The IP address and login credentials of the server where you want to install Contrail Command. This is the Contrail Command server that you just set up.

    • The URL and login credentials of the Juniper Networks container registry and the container tag to use.

    • The passwords that you want to set for Contrail Command, including the main (keystone admin) password to log in to the UI.

    Here’s a basic but fully functional example:
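Here’s a sketch of what that file can look like, written with a shell heredoc so you can paste it on the server. The field layout follows the command_servers.yml format documented for the deployer, but treat it as a starting point: substitute your own management IP address (10.1.1.101 is assumed to be the Contrail Command server’s eth1 address in this example), root password, NTP server, and registry credentials.

```shell
# Create command_servers.yml (Step 5 assumes it lives in /root).
cat > command_servers.yml <<'EOF'
command_servers:
  server1:
    ip: 10.1.1.101                  # Contrail Command server (eth1)
    connection: ssh
    ssh_user: root
    ssh_pass: <root-password>
    sudo_pass: <root-password>
    ntpserver: <ntp-server>
    registry_insecure: false
    container_registry: hub.juniper.net/contrail
    container_tag: 2005.1.66
    container_registry_username: <registry-username>
    container_registry_password: <registry-password>
    config_dir: /etc/contrail
    contrail_config:
      database:
        type: postgres
        dialect: postgres
        password: contrail123
      keystone:
        assignment:
          data:
            users:
              admin:
                password: contrail123   # main (keystone admin) UI password
      insecure: true
      client:
        password: contrail123
EOF
```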

  5. Run the contrail-command-deployer container to deploy the Contrail Command containers. In this example, the command_servers.yml file is located in the /root directory.

    This command runs in the background (detached mode) and returns right away. To track the progress of the command:
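The commands for this step might look like the following; the volume mount, --net host, and --privileged flags reflect how the deployer is typically run, but check the registry README for your release:

```shell
# 5. Run the deployer detached; it installs Contrail Command on the
#    server(s) listed in /root/command_servers.yml.
docker run -td --net host --privileged \
  -v /root/command_servers.yml:/command_servers.yml \
  --name contrail_command_deployer \
  hub.juniper.net/contrail/contrail-command-deployer:2005.1.66

# Track the progress of the deployment (Ctrl+C stops following, not the deploy)
docker logs -f contrail_command_deployer
```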

  6. Verify that the Contrail Command containers are running.

    The contrail_command container is the GUI and the contrail_psql container is the database. Both containers should have a status of Up.

    The contrail_command_deployer container should have a status of Exited because it exits when the installation is complete.
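A quick way to check, assuming standard Docker tooling:

```shell
# List all containers, including the exited deployer
docker ps -a --format "table {{.Names}}\t{{.Status}}"
# Expect: contrail_command and contrail_psql with status "Up ...",
#         contrail_command_deployer with status "Exited (0) ..." when done.
```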

Create the Contrail Networking Cluster

Now that you’ve installed Contrail Command, let’s use it to create the Contrail Networking cluster.

  1. Open a browser on your local machine and navigate to Contrail Command on port 9091 (for example, https://10.228.196.101:9091). Use the Contrail Command server IP address that is reachable from your local machine, which in this example is the eth0 IP address.

    Leave the Select Cluster field blank and log in using the admin credentials you specified in the keystone section of the command_servers.yml file.

    When you log in to Contrail Command and there is no preexisting cluster, which is the case for a fresh install, you start in the Inventory step of the Setup wizard that guides you through cluster creation. The left-nav bar tracks your progress through these steps.

  2. Let Contrail Command know about the login credentials you use for the Contrail Cluster, Contrail Insights, and Contrail Insights Flows servers or VMs. You will reference these credentials in Step 3 when you add those servers.

    1. Click the Credentials tab and click Add to bring up the Add dialog box. You are adding the usernames and passwords for the Contrail Cluster, Contrail Insights, and Contrail Insights Flows servers or VMs that you set up earlier. You need to add unique credentials only. For example, if you set up two servers with the same username and password, you need to add them only once.
    2. Specify a name for these credentials and the username and password. Click Add to add the credentials.
    3. If you use the same username and password to log in to all servers, then proceed to Step 3. Otherwise, repeat Step a and Step b until you’ve finished adding all unique credentials.
  3. Let Contrail Command know about the Contrail Cluster, Contrail Insights, and Contrail Insights Flows servers.

    1. Click the Server tab and click Add to bring up the Create Server dialog box.
    2. Select Detailed mode.
    3. Fill in the remaining fields and then click Create. Here are the settings used for each server in this example:

      Contrail Cluster

      • Workload Type: Physical/Virtual Node
      • Hostname: contrail-cluster
      • Management IP: 10.1.1.102
      • Management Interface: eth1
      • Credentials: <select from drop-down list>
      • Network Interfaces (click Add): eth0 with IP address 10.228.196.102, eth1 with IP address 10.1.1.102, eth2 with IP address 10.1.11.102

      Contrail Insights

      • Workload Type: Physical/Virtual Node
      • Hostname: contrail-insights
      • Management IP: 10.1.1.103
      • Management Interface: eth1
      • Credentials: <select from drop-down list>
      • Network Interfaces (click Add): eth0 with IP address 10.228.196.103, eth1 with IP address 10.1.1.103

      Contrail Insights Flows

      • Workload Type: Physical/Virtual Node
      • Hostname: contrail-insights-flows
      • Management IP: 10.1.1.104
      • Management Interface: eth1
      • Credentials: <select from drop-down list>
      • Network Interfaces (click Add): eth0 with IP address 10.228.196.104, eth1 with IP address 10.1.1.104

    4. Repeat Step a through Step c until you’ve added information about all servers.
    5. Click Next to move to the Provisioning Options step of the Setup wizard.
  4. Fill in the provisioning options fields. When you are done, click Next to move to the Control Nodes step of the Setup wizard.

    Here are the values used in this example:

    • Provisioning Manager: Contrail Enterprise Multicloud (the only supported selection)
    • Cluster Name: my-cluster
    • Container Registry: hub.juniper.net/contrail (the Juniper Networks main Contrail registry)
    • Insecure: Unchecked
    • Container Registry Username: <registry-username>
    • Container Registry Password: <registry-password>
    • Contrail Version: 2005.1.66 (the container tag)
    • Domain Suffix: local (Contrail Networking adds this suffix to each server’s non-FQDN hostname)
    • NTP Server: <ntp-server>
    • Default Vrouter Gateway: 10.1.11.2 (the IP address of the interface on the leaf switch that connects to the Contrail Cluster server’s fabric underlay interface; because this switch has been reset to zero, you’ll configure the switch interface with this IP address later)
    • Encapsulation Priority: VXLAN,MPLSoUDP,MPLSoGRE (the only supported encapsulation priority)
    • Fabric Management: Checked
    • Contrail Configuration (click Add):

      Key: CONTROL_NODES, Value: 10.1.11.102. Specifies the fabric underlay interface IP address of the Contrail Control node. In this example, you’ll be installing the control node on the Contrail Cluster server, so this is the IP address that connects that server to the fabric underlay.

      Key: PHYSICAL_INTERFACE, Value: eth2. The name of the interface that connects to the fabric underlay.

      Key: TSN_NODES, Value: 10.1.11.102. Specifies the fabric underlay interface IP address of the Contrail Service node. In this example, you’ll be installing the service node on the Contrail Cluster server, so this is the IP address that connects that server to the fabric underlay.

  5. Assign a server for the Contrail Control node. In this example, the Contrail Cluster server contains the control node.
    1. Select contrail-cluster from the Available servers list and move it to the Assigned Control nodes list.
    2. Click Next to move to the Orchestrator Nodes step of the Setup wizard.
  6. Assign a server for the OpenStack orchestrator node and configure the OpenStack Kolla parameters. In this example, the Contrail Cluster server contains the orchestrator node.

    You’re required to assign an OpenStack orchestrator node even if you’re using a different orchestrator for instantiating compute endpoints. The OpenStack orchestrator node is used by the cluster.

    1. Select contrail-cluster from the Available servers list and move it to the Assigned Openstack nodes list.
    2. Select Show Advanced to view additional provisioning parameters.
    3. Scroll down to Kolla Globals and click Add.
    4. Add the following key/value pairs to disable HAProxy and ironic bare-metal server provisioning, and to enable swift storage.

      • enable_haproxy: no
      • enable_ironic: no
      • enable_swift: yes
      • swift_disk_partition_size: 20GB

      Note

      We won’t be changing any of the default Kolla passwords. By default, the Contrail Networking cluster username is admin and the password is contrail123.

    5. Click Next to move to the Compute Nodes step of the Setup wizard.
  7. Assign a server for the compute node. The compute node is used by the cluster. In this example, the Contrail Networking cluster contains a single compute node that resides on the Contrail Cluster server.
    1. Select contrail-cluster from the Available servers list and move it to the Assigned Compute nodes list.
    2. Specify the Default Vrouter Gateway. In this example, the gateway is 10.1.11.2 as explained in Step 4.
    3. Click Next to move to the Contrail Service Nodes step of the Setup wizard.
  8. Assign a server for the service node. In this example, the Contrail Cluster server contains the service node.
    1. Select contrail-cluster from the Available servers list and move it to the Assigned Service nodes list.
    2. Specify the Default Vrouter Gateway. In this example, the gateway is 10.1.11.2 as explained in Step 4.
    3. Click Next to move to the AppFormix Nodes step of the Setup wizard.
  9. Assign your AppFormix node servers.

    Note

    AppFormix was renamed Contrail Insights. The AppFormix name is still used in the UI.

    The AppFormix node consists of the Contrail Insights server and the Contrail Insights Flows server.

    1. Select contrail-insights from the Available servers list and move it to the Assigned AppFormix nodes list.
    2. Select contrail-insights-flows from the Available servers list and move it to the Assigned AppFormix nodes list.
    3. Scroll down to the Roles field for the contrail-insights-flows server and use the drop-down list to change the role to appformix_bare_host_node.
    4. Click Next to move to the AppFormix Flows step of the Setup wizard.
  10. Assign your AppFormix Flows node servers. In this example, the AppFormix Flows node is the Contrail Insights Flows server.
    1. Select contrail-insights-flows from the Available servers list and move it to the Assigned AppFormix Flows nodes list.
    2. Select Out of Band as the provisioning type.
    3. Specify the Virtual IP Address. This is the virtual IP address that devices use to reach the Contrail Insights Flows node from the fabric management network. This must be an unused IP address in the fabric management subnet (for example, 10.1.1.105).
    4. Click Next to move to the Summary step of the Setup wizard.
  11. Review your settings. Click Provision after verifying your settings.

    The cluster provisioning process begins. The provisioning process time varies by environment and deployment, and can take 90 minutes or more.

  12. (Optional) Monitor the provisioning process by logging in to the Contrail Command server and entering the docker exec contrail_command tail /var/log/contrail/deploy.log command at the Linux prompt.
  13. When the provisioning process finishes, click Proceed to Login from Contrail Command.

    You are redirected to the Contrail Command login screen.

  14. Log in to the cluster:
    • Select Cluster: Select the Contrail Networking cluster you just created from the drop-down list. The cluster appears as the cluster name followed by a random string.

    • Username: Enter the username for the cluster. The default username is admin.

    • Password: Enter the password for the cluster. The default password is contrail123.