
Contrail Cloud Software Summary


This section discusses overcloud and undercloud roles and the software running on the nodes in Contrail Cloud.

Overcloud Node and Jump Host Software

Figure 1 illustrates the software running on the nodes in the overcloud and on the jump host.

Figure 1: Jump Host and Overcloud Node Component Summary

Table 1 summarizes functions for the overcloud and undercloud nodes in Contrail Cloud and indicates whether the function is delivered as a VM in Contrail Cloud.

Table 1: Virtual Machine Summary

Function                                      Deployed as VM
Jump host                                     No
Undercloud                                    Yes
Contrail Command                              Yes
Controller hosts:
  OpenStack                                   Yes
  Contrail Controller                         Yes
  Contrail Analytics                          Yes
  Contrail Analytics DB                       Yes
  AppFormix                                   Yes
  Contrail Service Node (TOR Service Node)    Yes
Compute nodes:
  vRouter in kernel mode                      No
  vRouter using DPDK                          No
  vRouter using SR-IOV                        No
Storage nodes (Ceph software)                 No

Undercloud VM

The undercloud is responsible for provisioning and managing all nodes in the overcloud: the controller, compute, and storage nodes. The undercloud runs as a VM on the jump host. The concept of an overcloud deployed from an undercloud is defined in the OpenStack TripleO (OpenStack on OpenStack) project, one of the OpenStack life-cycle managers, and is implemented in Contrail Cloud with Red Hat OpenStack Platform director (RHOSPd).

The undercloud VM runs on top of the Kernel-based Virtual Machine (KVM) hypervisor on the jump host. An initial Contrail Cloud deployment starts when the undercloud VM connects to a Juniper Satellite device to install Contrail Cloud software. See Deploying Contrail Cloud.

Contrail Cloud configuration updates are performed through the undercloud by editing YAML configuration files; the updates are applied by running an Ansible script. The YAML file templates and Ansible scripts are downloaded to the jump host as part of the Contrail Cloud installation procedure. The Ansible scripts generate the Heat templates and property files that Red Hat OpenStack Platform director (RHOSPd) uses to deploy the configuration changes to the overcloud. Heat template generation is automated; users only have to run the Ansible scripts to apply a configuration change. See Contrail Cloud Configuration.
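The general shape of such an update can be sketched with a hypothetical fragment. The keys and values below are illustrative assumptions, not the actual Contrail Cloud schema; consult the YAML templates shipped with your Contrail Cloud release for the real structure:

```yaml
# Hypothetical fragment of a site-level YAML configuration file.
# All key names and values here are illustrative only.
global:
  rhel:
    satellite:
      key: "example-activation-key"   # assumed: Satellite activation key
      organization: "example-org"     # assumed: Satellite organization
jumphost:
  network:
    provision:
      nic: eth0                       # assumed: provisioning NIC name
```

After editing a file like this, the change is applied by running the corresponding Ansible script from the jump host; the script regenerates the Heat templates and property files consumed by RHOSPd.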

The OpenStack instance that constitutes the undercloud uses the following OpenStack services:

  • Glance

  • Heat

  • Ironic

  • Keystone

  • Nova

  • Neutron

  • Swift

The OpenStack instance that constitutes the overcloud runs the following services on the controller nodes:

  • Cinder

  • Glance

  • Heat

  • Horizon

  • Keystone

  • Nova

  • Neutron

  • Pacemaker

  • Swift

  • Galera (for HA services)

In Contrail Cloud, the following OpenStack services that are deployed in a basic Red Hat OpenStack overcloud instance are disabled:

  • Ceilometer

  • Gnocchi

See Red Hat OpenStack and Contrail Networking Integration in Contrail Cloud for additional information on Contrail Networking and Red Hat OpenStack integration in Contrail Cloud.

Controller Host VMs

A controller host is a server that hosts controller nodes. Controller nodes are collections of services, running inside VMs, that are responsible for controlling Contrail Cloud functions.

The controller node VMs in Contrail Cloud are the OpenStack controller, Contrail Controller, Contrail Analytics, Contrail Analytics DB, AppFormix, and the Contrail Service Node.

See Contrail Cloud Hardware Nodes.

Compute Node Data Plane Options

With the exception of compute nodes in SR-IOV mode, the compute nodes in Contrail Networking use a vRouter to implement data plane functionality.

This section discusses the compute node vRouter options.

Compute Node vRouter Options Summary

The compute nodes use a vRouter to implement data plane functionality.

Figure 2: vRouter Summary—Compute Nodes

The data plane interfaces on compute nodes can be configured to support one of the following forwarding methods:

  • Kernel mode—the Linux kernel performs the vRouter forwarding function.

  • Data Plane Development Kit (DPDK) mode—the vRouter forwarding function runs in user space on a specified number of dedicated cores. See Configuring the Data Plane Development Kit (DPDK) Integrated with Contrail vRouter for additional information on DPDK in Contrail Networking.

  • Single root I/O virtualization (SR-IOV)—the vRouter is bypassed; a VM or a container interface connects directly to the NIC.
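As a sketch, a per-role data plane choice might be expressed in the compute node YAML along these lines. The role and key names below are illustrative assumptions, not the actual Contrail Cloud schema:

```yaml
# Hypothetical compute-role fragment: one role per forwarding method.
# Key names are illustrative only; see your release's templates.
compute_roles:
  - name: compute-kernel
    data_plane: kernel      # vRouter as a Linux kernel module
  - name: compute-dpdk
    data_plane: dpdk
    dpdk_cores: "2,4,6,8"   # cores dedicated to vRouter forwarding
  - name: compute-sriov
    data_plane: sriov       # vRouter bypassed; VM attaches to the NIC
```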

Your choice of forwarding method depends on the expected traffic profile of each individual compute node. A Contrail Cloud environment can have different compute nodes configured with different interface types, and workloads can be placed on the most appropriate compute node using technologies such as OpenStack availability zones.
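For example, availability-zone placement uses standard OpenStack host aggregates. The aggregate, zone, host, and server names below are illustrative:

```shell
# Group the DPDK compute nodes into their own availability zone
# (names are illustrative).
openstack aggregate create --zone dpdk-az dpdk-aggregate
openstack aggregate add host dpdk-aggregate compute-dpdk-0

# Launch a high-throughput workload on a DPDK compute node.
openstack server create --availability-zone dpdk-az \
  --flavor m1.large --image my-image --network my-net my-vm
```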

Kernel vRouter

In kernel mode, vRouter is deployed as a Linux kernel module to perform the vRouter forwarding function.

Data Plane Development Kit (DPDK) Mode

A vRouter in DPDK mode runs in user space on the compute node. Network traffic is handled by one or more dedicated DPDK interfaces, which also handle VLANs and bonds. A specified number of cores is assigned to perform the vRouter forwarding function.

DPDK vRouters provide higher throughput than kernel vRouters. See Configuring the Data Plane Development Kit (DPDK) Integrated with Contrail vRouter for additional information on DPDK in Contrail networking.

Single Root I/O Virtualization (SR-IOV) Mode

A compute node in SR-IOV mode provides direct access from the NIC to a VM. Because network traffic bypasses the vRouter in SR-IOV mode, no network policy or flow management is performed for that traffic. See Configuring Single Root I/O Virtualization (SR-IOV) for additional information on SR-IOV in Contrail Networking.

Storage Nodes Ceph Software

The storage nodes run Ceph software. For additional information on using Red Hat Ceph storage software, see Product Documentation for Red Hat Ceph Storage.

We recommend following these guidelines to optimize Ceph software performance in your Contrail Cloud deployment:

  • Storage nodes must be separate from compute nodes in a Contrail Cloud environment. Hyperconverged nodes are not supported.

  • Ceph requires a minimum of three storage nodes to operate.
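Once the storage nodes are deployed, cluster health can be verified from a Ceph monitor node with the standard Ceph CLI:

```shell
# Verify that the monitors reach quorum and the cluster is healthy
# (expect HEALTH_OK in a correctly sized deployment).
ceph status

# Confirm the storage nodes and their OSDs are up and in.
ceph osd tree
```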