Server Node Network Connections in Contrail Cloud

A variety of network connections run between servers to interconnect the storage, compute, and management nodes in a Contrail Cloud environment. Multiple network connections are also used to connect these nodes to the EVPN-VXLAN fabric layer to access the higher-layer networks.

These network connections are covered in the following sections.

Server Node Network Connections in Contrail Cloud Overview

The majority of the network connections—all networks except the IPMI network, which is established outside of the Contrail Cloud deployment, and the Intranet network—are created automatically using information in the YAML configuration files when Contrail Cloud is initially deployed. No user action is required on the server-side devices to create these networks. VLANs, however, must be configured on the switches in the EVPN-VXLAN Fabric to ensure traffic can pass between the server devices and the EVPN-VXLAN Fabric switches. See the Contrail Cloud Deployment Guide.
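
As a purely illustrative sketch, a network definition of this kind in the YAML configuration might resemble the fragment below. The key names and values shown here are hypothetical placeholders rather than the exact schema; see the Contrail Cloud Deployment Guide for the real template contents.

# Hypothetical sketch of a network definition in a Contrail Cloud YAML
# configuration file. Key names and values are illustrative only.
network:
  internal_api:
    vlan: 710                    # VLAN ID; must also be configured on the fabric leaf ports
    cidr: 172.16.1.0/24          # subnet used by this network
    dhcp_start: 172.16.1.10      # start of the address pool used during provisioning
    dhcp_end: 172.16.1.200       # end of the address pool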

Figure 1 illustrates how the server nodes are connected to devices using the available networks in a representative architecture.

Figure 1: Server Network Connections

Table 1 summarizes the purpose of each network connection.

Table 1: Networking in Contrail Cloud Summary
Network Purpose

IPMI

Provides hardware management of servers.

IPMI services are used mostly by the OpenStack Ironic service and are established outside of the Contrail Cloud installation. IPMI services can also be used by the server nodes in Contrail Cloud.

Intranet

Provides user access to the jump host.

Provides access to satellite repositories for the jump host.

Provides access to the Contrail Command web user interface.

Provides external network access via SNAT from the control plane network.

Provisioning or Control Plane

Deploys new nodes using PXE booting and installs software packages on overcloud bare metal servers. The provisioning/control plane network is used by Red Hat Director and is predefined before the creation of the undercloud.

Internal API

Provides communication with the OpenStack and Contrail Networking services, including Keystone, Nova, and Neutron using API communication, RPC messages, and database communication.

External

Provides tenant administrators access to the OpenStack Dashboard (Horizon) graphical interface, the public APIs for OpenStack services, public Contrail APIs, and the Appformix WebUI. Commonly used as a path to the intranet.

Tenant

Supports overlay data-plane traffic—VXLAN and MPLS over UDP—and Contrail Controller to Contrail vRouter control plane traffic.

Storage

Supports storage data traffic for Ceph, block storage, NFS, iSCSI, and any other storage types.

Storage Management

Provides Ceph control, management, and replication traffic.

Management

(Optional) Provides direct access to compute nodes without having to send traffic through the jump host.

Can be used for DNS and NTP services.

The IPMI, Management, and Provisioning networks connect through the management switches, and these networks are stretched between switches using VLAN trunk connections or a separate VXLAN tunnel for each network. The other networks connect to the leaf switches in the IP Fabric and use VXLAN with an EVPN control plane to connect between racks.

The External network contains the externally routable API and UI IP addresses for various controller functions; these addresses are generally in the same subnet. The VLAN for the External network is mapped into a VXLAN that terminates in a VRF instance, and this VRF instance routes between the External network and the network from which tenant user packets arrive.

The other networks connected directly to the IP Fabric in this representative architecture will each have a separate subnet for each rack, and routing is used to connect the subnets within a VXLAN. The routing occurs by placing an IRB interface on either a leaf device (edge routed) or a spine device (centrally routed).

Table 2 describes our recommendations regarding how the networks should be configured.

Table 2: Recommended Network Subnets and Implementations
Network | Subnet Description | Implementation
IPMI | IPMI is typically an external service that can be used by devices that can route to it. IPMI is not established by Contrail Cloud. | IPMI is typically not spanned over racks. It can be spanned over racks using VXLAN.
Intranet | Administrative access to jump host. | Specific port configuration
Provisioning | Layer 2 single subnet | Span over racks using VLAN or VXLAN
Internal API | Layer 2 single subnet | Span over racks using VLAN or VXLAN
Internal API (leaf) | One subnet per rack | Layer 3 routing between racks
External | Layer 2 single subnet | Span over racks using VLAN or VXLAN
Tenant | Layer 2 single subnet | Span over racks using VLAN or VXLAN
Tenant (leaf) | One subnet per rack | Layer 3 routing between racks
Storage | Layer 2 single subnet | Span over racks using VLAN or VXLAN
Storage (leaf) | One subnet per rack | Layer 3 routing between racks
Storage Management | Layer 2 single subnet | Span over racks using VLAN or VXLAN
Storage Management (leaf) | One subnet per rack | Layer 3 routing between racks
Management | Layer 2 single subnet | Span over racks using VLAN or VXLAN

When a network is composed of multiple subnets (the reference architecture illustrated earlier in this document uses one subnet per rack), a larger subnet that contains all of the rack subnets is created. This subnet is called the supernet. The supernet address is used to configure static routes on the servers to ensure proper traffic flow between racks.
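
As an illustration only, a static route of this kind, written in the os-net-config style commonly used by the overcloud network templates, might look like the following sketch. The supernet, rack subnet, and next-hop addresses are example values, not values from a shipped template.

# Illustrative static route toward the supernet (example addresses only).
routes:
  - ip_netmask: 172.16.0.0/16    # supernet that aggregates every per-rack subnet
    next_hop: 172.16.2.1         # gateway (IRB) address in this rack's local subnet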

Figure 2 illustrates how VLANs, VTEPs, IRBs, and VRFs are configured in the networks that connect end systems to gateway routers, spine switches, and leaf switches when centralized routing is enabled (traffic is transferred between Layer 2 and Layer 3 using IRB interfaces on the spine devices).

Figure 2: VLANs, VTEPs, IRBs, and VRFs

Table 3 summarizes how VLANs are deployed per device.

Table 3: VLAN Summaries
Device VLAN Summary

Management Switch

The IPMI, Management, Provisioning, and Intranet networks all connect the management switch to a server on different ports.

Traffic for each network arrives on the management switch untagged, and each port is configured into the VLAN assigned to that network.

Each of these VLANs is extended between the management switches on different racks.

Leaf Switch (EVPN-VXLAN Fabric)

Traffic from the external, internal API, tenant, storage, and storage management networks arrives on the leaf switch’s high-speed interfaces from the servers in different VLANs.

The VLANs are configured in VTEPs.

Leaf ports are configured with logical interfaces carrying the VLANs of the networks used by the servers attached to that port.

Each VLAN on each switch is configured into a VXLAN VTEP and EVPN advertises host routes for each connected server to the spines.

The VLANs used for each network are specified in the overcloud-nics.yml file (see the illustrative fragment following this table).

Spine Switch (EVPN-VXLAN Fabric)

The spine switches are configured with a VTEP for each of the Internal API, Tenant, Storage, and Storage Management networks, and each VTEP is connected to an IRB interface whose IP address is taken from the supernet. Each spine switch has a VRF that receives routes to each host from the leaf switches.

SDN Gateway Router

The SDN gateways are configured with a VTEP and VRF for the External network. Each SDN gateway will advertise a route for the External network to peers outside the Layer 2 network.
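
To make the VLAN-to-port relationship concrete, the sketch below shows how tagged VLANs might be attached to a server bond in an os-net-config-style network_config block of the kind found in overcloud-nics.yml. The bond name, member interfaces, VLAN IDs, and addresses are assumptions for illustration, not values from a shipped template.

# Illustrative network_config fragment: two networks delivered to a server
# bond as tagged VLANs. Names, VLAN IDs, and addresses are examples only.
network_config:
  - type: linux_bond
    name: bond0
    bonding_options: "mode=802.3ad"     # example LACP bond toward the leaf switch
    members:
      - type: interface
        name: ens3f0
      - type: interface
        name: ens3f1
  - type: vlan
    vlan_id: 710                        # example Internal API VLAN; must match the leaf port configuration
    device: bond0
    addresses:
      - ip_netmask: 172.16.1.11/24
  - type: vlan
    vlan_id: 730                        # example Storage VLAN
    device: bond0
    addresses:
      - ip_netmask: 172.16.3.11/24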

Contrail Cloud Network Types

Table 4 summarizes the networks used by devices or VMs at the server layer in Contrail Cloud.

Table 4: Device & VM Network Summary
Role Network
Intranet Provisioning Internal API External Tenant Storage Storage Management Management

Jump host (Undercloud VM)

OpenStack Controller

 

Optional

Optional

Contrail Controller

Optional

Optional

Contrail Analytics

Optional

Optional

Contrail Analytics DB

Optional

Optional

Contrail Service Node (ToR Service Node)

Optional

Optional

Appformix Controller

Optional

Optional

Control Host (Running all Controller VMs)

Optional

Compute Node

Optional

Storage Node

Optional

The use of a management network to enable direct access to nodes without going through the jump host is optional, but often recommended.

Contrail Cloud Roles and Networks

The networks in Contrail Cloud connect nodes to management switches or to devices that provide network access. In this reference architecture, the devices that connect the server nodes in the Contrail Cloud to the higher-layer network are the leaf devices in the EVPN-VXLAN IP Fabric.

Table 5 summarizes the network connections for the nodes in Contrail Cloud.

Table 5: Server Layer Node Connections
Node | Management Switch Connections | Networking Device Connections
Jump host | IPMI, Provisioning, Management, Intranet | None
Controller Host | IPMI, Provisioning, Management | External, Internal API, Tenant, Storage, Storage Management
Compute Node | IPMI, Provisioning, Management | Internal API, Tenant, Storage
Storage Node | IPMI, Provisioning, Management | Storage, Storage Management

Updating Configurations and Software in Contrail Cloud

To ensure consistent configuration of your network nodes, the following changes to nodes in the overcloud—compute nodes, storage nodes, and control hosts—should always be made through Contrail Cloud:

  • Software updates.

  • Configuration changes, including configuration changes to reflect the removal or addition of a new node.

Bypassing Contrail Cloud to apply a change in a lower layer often leads to configurations that are later overwritten.

Configuration Files in Contrail Cloud

A series of preconfigured YAML file templates is provided as part of a Contrail Cloud installation. These user-configurable YAML files are downloaded onto the jump host server during the initial phase of the Contrail Cloud installation. The YAML files can be accessed and edited by users from within the jump host, and the updated configurations can be deployed using Ansible playbook scripts. See Deploying Contrail Cloud for additional information on YAML file locations and configuration update procedures.
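
For orientation, the fragment below sketches the kind of environment settings these YAML files carry, using site.yml-style global parameters as an example (see Table 6). The key names are illustrative and may not match the exact schema shipped with your release.

# Illustrative site.yml-style fragment with typical environment settings.
# Key names are examples; refer to the templates delivered on the jump host.
global:
  dns:
    - 10.0.0.2                  # example DNS server for the deployment
  ntp:
    - ntp.example.com           # example NTP source
  timezone: "UTC"               # time zone applied to deployed nodes
  domain: "cloud.example.com"   # example domain name for the environment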

Table 6 lists commonly-used YAML file parameters in Contrail Cloud and provides a summary of the purpose of the parameter.

Table 6: YAML File Parameters

YAML File Parameter

Purpose

site.yml

global:

DNS, NTP, domain name, time zone, satellite URL, and proxy configuration for the deployment environment.

jumphost:

Provisioning NIC name definition and PXE boot interface for the jump host.

control_hosts:

Control host parameters. Includes disk mappings for bare metal servers and control plane VM sizing per role for functions like analytics.

compute_hosts:

Parameters for SR-IOV, DPDK, and TSN in compute nodes. Root disk configuration per hardware profile.

storage_hosts:

Ceph and block storage profile definitions for storage nodes.

undercloud:

Nova flavors for roles. Applicable when using additional hardware profiles.

overcloud:

Hardware-profile-based and leaf-number-based settings:

  • disk mappings

  • network definitions: names, subnets, VLANs, DHCP pools, and roles for each network

Also includes other settings such as TLS certificates, Keystone LDAP backend enablement, post-deployment extra actions, and TripleO extra configurations.

ceph:

Ceph enablement and disk assignments (pools, OSDs) on storage nodes.

ceph_external:

Externally deployed Ceph integration parameters.

appformix:

HA enablement, VIP addresses, and network device monitoring for Appformix.

inventory.yml

inventory_nodes:

Name, IPMI IP, Ironic driver used for LCM, root disk, and other related settings for all Contrail cluster nodes (see the illustrative fragment following this table).

control-host-nodes.yml

control_host_nodes:

Internal IP and DNS (per control node) for control hosts and the control plane. Statically assigned IPs for controllers must be outside of the DHCP pools of the networks that use them.

control_host_nodes_network_ config:

Bridges, bonds, DHCP/IP, and MTU for control hosts.

control_hosts:

Mapping of VM interfaces to bridges on the control hosts.

overcloud-nics.yml

contrail_network_config: controller_network_config: appformixController_network_config: computeKernel_network_config: compute_dpdk_network_config: cephStorage_network_config:

Interface-to-network mapping, routes, DHCP/IP allocation, bonds, VLAN-to-interface maps, and bond options for control, storage, and compute nodes.

compute-nodes.yml

compute_nodes_kernel: compute_nodes_dpdk: compute_nodes_sriov:

Mapping hosts from inventory to compute roles and profiles for compute nodes.

storage-nodes.yml

storage_nodes:

Names of storage nodes.

vault-data.yml

global:

Satellite key and Contrail user password for the Red Hat OpenStack Vault function.

undercloud: overcloud: control_hosts:

VM and bare metal server (BMS) passwords for Contrail cluster nodes and the undercloud when using the Red Hat OpenStack Vault function.

appformix:

MySQL and RabbitMQ passwords for Appformix when using the Red Hat OpenStack Vault function.

ceph_external:

Client key used by Ceph External with the Red Hat OpenStack Vault function.

inventory_nodes:

IPMI credentials for Contrail cluster nodes when using the Red Hat OpenStack Vault function.
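
To tie the inventory.yml and vault-data.yml entries together, the hypothetical fragment below sketches how a node definition might reference vault-held IPMI credentials. The field names, Ironic driver, and all addresses are illustrative assumptions, not values from a shipped template.

# Hypothetical inventory.yml-style node entry; IPMI credentials are expected
# to come from vault-data.yml rather than being stored here in clear text.
inventory_nodes:
  - name: compute-rack1-01            # example node name
    pm_addr: 10.10.10.21              # example IPMI address for the node
    pm_type: ipmi                     # example Ironic driver used for lifecycle management
    root_disk: /dev/sda               # example root disk selection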