Contrail Cloud Hardware Nodes
A Contrail Cloud environment includes servers that function as compute nodes, storage nodes, and management nodes. The compute nodes provide compute services, the storage nodes provide storage services, and a variety of management nodes are used to manage components for the compute and storage nodes. These nodes interconnect with one another and attach to the fabric to reach the larger network.
This section provides an overview of the compute, storage, and management nodes in the Contrail Cloud infrastructure. Other system nodes, including the SDN controller managed using Contrail Networking and the orchestrator that is managed using Red Hat OpenStack, are discussed in other sections of this document.
Hardware Nodes Overview
Figure 1 illustrates the hardware nodes in a Contrail Cloud infrastructure.
The compute, storage, and management nodes are interconnected endpoint servers. The servers at this layer connect directly to the fabric to access the larger network.
During the Contrail Cloud installation, the compute, storage, and control nodes are instantiated and the networks connecting these devices are established. The configuration of these nodes and the initial network is provided by pre-configured YAML files that are downloaded onto the jump host. These YAML files can then be customized by users after the initial installation. See Contrail Cloud Configuration File Structure Overview in the Reference Architecture for Contrail Cloud.
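As an illustration only, a site-level file of this kind might describe the node inventory and the provisioning network. Every file name and key below is a hypothetical placeholder, not the shipped Contrail Cloud schema; see Contrail Cloud Configuration File Structure Overview for the actual file structure:

```yaml
# Hypothetical sketch of a site-level inventory file (e.g. site.yml).
# All key names here are illustrative placeholders, not the real schema.
overcloud:
  compute:
    count: 3            # servers provisioned as compute nodes
  storage:
    count: 3            # servers running Red Hat Ceph storage software
  control_hosts:
    count: 3            # servers that host the controller VMs
networks:
  provisioning:
    cidr: 192.0.2.0/24  # documentation address range used as a placeholder
```

Files of this kind live on the jump host, where users can edit them after the initial installation.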
The other devices at the endpoint layer, alongside the compute, storage, and management nodes, are the jump host, the management switches, and the BMS nodes. Table 1 provides a summary description of each hardware node.
Table 1: Hardware Node Summary
Bare metal server (BMS)
Physical server that is not provisioned to support virtualization but requires overlay network access to participate in the Contrail Cloud environment. BMS nodes are present only in Contrail Cloud environments that require the non-virtualized functions provided by the BMS.
Contrail Cloud manages some services for the bare metal servers using the Contrail Services Node, which is also called the TOR Services Node (TSN) in Red Hat OpenStack.
Compute node
Server provisioned to support virtualization; hosts virtual machines.
Control host
Server that hosts controller nodes. Controller nodes are VMs responsible for controlling Contrail Cloud functions. See Control Host.
Jump host
Server that hosts the undercloud VM, stores the Contrail Cloud configuration files, and hosts the Contrail Command web user interface virtual machine.
Management switches
Switches that are responsible for managing the IPMI, Management, Provisioning, and Intranet (for the jump host only) networks in Contrail Cloud.
The management switches are not managed by Contrail and are not included in all Contrail Cloud implementations.
Storage node
Server whose purpose is storing data. Storage nodes run Red Hat Ceph storage software in Contrail Cloud.
Jump Host
The jump host is a server that hosts the undercloud VM. Figure 2 illustrates the high-level function of the jump host.
The jump host:
hosts the undercloud. The undercloud is a VM responsible for provisioning and managing all control hosts, storage nodes, and compute nodes in a Contrail Cloud environment. All Contrail-related setup and configuration is performed through the undercloud.
stores Contrail Cloud configuration-related files. The YAML files that configure Contrail Cloud are stored on the jump host. The Ansible scripts that apply the configurations made in the YAML files to the Contrail Cloud nodes are also stored on the jump host.
hosts the Contrail Command web user interface virtual machine.
runs Red Hat Enterprise Linux with only base packages installed.
A jump host must be operational as a prerequisite for a Contrail Cloud installation. The jump host should not run any virtual machines besides the undercloud and the Contrail Command virtual machines. For a complete list of jump host requirements, see the Deploying Contrail Cloud section of the Contrail Cloud Deployment Guide.
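The relationship between the YAML configuration and the Ansible scripts stored on the jump host can be pictured with a toy playbook. The play below is purely illustrative; its task names, variables, and file paths are assumptions, not a Contrail Cloud deliverable:

```yaml
# Hypothetical, simplified Ansible play of the kind run from the jump host.
# Real Contrail Cloud playbooks differ; names and paths here are invented.
- name: Apply site configuration to overcloud nodes
  hosts: overcloud
  vars_files:
    - config/site.yml        # user-edited YAML stored on the jump host
  tasks:
    - name: Render per-node network settings from the YAML values
      template:
        src: templates/network.j2
        dest: /etc/sysconfig/contrail-network.conf
```

The design point the sketch captures is the one described above: users edit YAML on the jump host, and Ansible pushes the resulting configuration out to the nodes.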
Control Host
A control host is a server running a hypervisor that hosts virtualized control functions as controller nodes. Controller nodes are VMs responsible for controlling Contrail Cloud functions.
Figure 3 illustrates the controller nodes in a control host.
The controller nodes that can run on a control host include:
OpenStack Controller—manages Red Hat OpenStack and Ceph storage.
Contrail Controller—manages the configuration and control functions of Contrail Networking.
Contrail Analytics—manages the Analytics function of Contrail Networking.
Contrail Analytics DB—manages the database used by Contrail Analytics.
AppFormix Controller—manages AppFormix.
Contrail Services Node—also called the TOR Services Node (TSN) in earlier Contrail releases. The Contrail Services Node is an optional controller VM used by Contrail Networking to assist with some bare metal server (BMS)-related tasks.
Each control host in a Contrail Cloud environment runs an OpenStack, Contrail Controller, Contrail Analytics, Contrail Analytics DB, and AppFormix controller node. The Contrail Services Node runs only in environments that need to support bare metal servers.
Compute Nodes
A compute node is a server that hosts virtual machines that provide services over the network. The services that run on compute nodes vary by network, and the documentation of those services is beyond the scope of this guide.
Compute nodes use Ceph storage as a back-end OpenStack storage option for block, object, and file storage services. VM ephemeral disk storage is provided by Ceph. Ceph Storage is included as part of the standard Contrail Cloud bundle. If a compute node does not have access to Ceph storage, it must have locally attached disks to support its storage requirements.
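In Red Hat OpenStack deployments, selecting Ceph as the back end for these services is typically expressed through YAML parameters. The fragment below is a hedged sketch using TripleO-style parameter names; whether Contrail Cloud exposes these exact knobs is an assumption, so treat it as an illustration of the pattern rather than a supported configuration:

```yaml
# Illustrative TripleO-style parameters selecting Ceph (RBD) back ends.
# These exact parameter names are an assumption for Contrail Cloud.
parameter_defaults:
  NovaEnableRbdBackend: true    # VM ephemeral disks on Ceph
  CinderEnableRbdBackend: true  # block storage on Ceph
  GlanceBackend: rbd            # image storage on Ceph
```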
The compute nodes in Contrail Cloud use a vRouter to implement data plane functionality. The vRouter options are discussed in Contrail Cloud Software Summary.
Storage Nodes
A storage node is a server in the Contrail Cloud environment whose purpose is storing raw data. Storage nodes run Red Hat Ceph storage software in Contrail Cloud.
The storage nodes support Ceph OSD. For additional information on using Red Hat Ceph storage software, see Product Documentation for Red Hat Ceph Storage.