Remote Compute

Contrail Networking supports remote compute, a method of managing a Contrail deployment across many small distributed data centers efficiently and cost effectively.

Remote Compute Overview

Remote compute enables the deployment of Contrail Networking in many small distributed data centers, up to hundreds or even thousands, for telecommunications points of presence (PoPs) or central offices (COs). Each small data center has only a small number of compute nodes, typically 5-20 in a rack, running a few applications such as video caching, traffic optimization, and virtual Broadband Network Gateway (vBNG). It is not cost effective to deploy a full Contrail controller cluster (control, configuration, analytics, and database nodes) in each distributed PoP on dedicated servers. Additionally, manually managing hundreds or thousands of clusters is not feasible operationally.

Remote Compute Features

Remote compute is implemented by means of a subcluster that manages compute nodes at remote sites, enabling them to receive configuration and exchange routes.

The key concepts of Contrail remote compute include:

  • Remote compute employs a subcluster to manage remote compute nodes away from the primary data center.

  • The Contrail control cluster is deployed in large centralized data centers, where it can remotely manage compute nodes in small distributed data centers.

  • A lightweight version of the controller, limited to the control node, is created; the config node, analytics, and analytics database are shared across several control nodes.

  • Many lightweight controllers are co-located on a small number of servers to optimize efficiency and cost.

  • The control nodes peer with the remote compute nodes by means of XMPP and peer with local gateways by means of MP-eBGP.

Remote Compute Operations

A subcluster object is created for each remote site. It contains a list of links to the local compute nodes, represented as vrouter objects; a list of links to the local control nodes, represented as BGP router objects; and an ASN property.

The subclusters are identified in the provision script. The vrouter and bgp-router provision scripts take each subcluster as an optional argument to link or delink with the subcluster object, as shown in the sketch below.
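For example, a remote compute node might be linked to its subcluster at provision time as follows. This is a hedged sketch only: the script path, any additional required arguments, and the exact option names vary by release, and the host names and addresses are illustrative.

    # Link a remote compute node's vrouter object to subcluster pop1.
    # Adjust the script path and arguments to your release.
    python /opt/contrail/utils/provision_vrouter.py \
        --host_name compute1-pop1 \
        --host_ip 10.20.0.5 \
        --api_server_ip 10.0.0.5 \
        --oper add \
        --sub_cluster_name pop1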

We recommend that you spawn the control nodes of the remote cluster in the primary cluster; they are iBGP-meshed among themselves within that subcluster. The control nodes BGP-peer with their respective SDN gateway, over which route exchange occurs with the primary control nodes.

Compute nodes in the remote site are provisioned to connect to their respective control nodes to receive configuration and exchange routes. Data communication among workloads between these clusters occurs through the provider backbone and their respective SDN gateways. The compute nodes and the control nodes push analytics data to analytics nodes hosted on the primary cluster.

Subcluster Properties

The Contrail Web UI shows a list of subcluster objects, each with a list of associated vrouters and BGP routers that are local to that remote site, along with the ASN property.

General properties of subclusters include:

  • A subcluster control node never directly peers with another subcluster control node or with primary control nodes.

  • A subcluster has to be created, and is then referred to, in virtual-router and bgp-router objects.

  • A subcluster object and the control nodes under it should have the same ASN.

  • The ASN cannot be modified in a subcluster object.

Note:

Multinode service chaining across subclusters is not supported.

Inter Subcluster Route Filtering

Contrail Networking Release 2005 supports inter subcluster route filtering (Beta). With this release, a new extended community called origin-sub-cluster (similar to origin-vn) is added to all routes originating from a subcluster.

The format of this new extended community is subcluster:<asn>:<id>.

This new extended community encodes the subcluster ID in the ID field of the extended community. The subcluster ID helps you determine the subcluster from which the route originated, and is unique for each subcluster. For the 2-byte ASN format, the type/subtype is 0x8085 and the subcluster ID can be 4 bytes long. For the 4-byte ASN format, the type/subtype is 0x8285 and the subcluster ID can be 2 bytes long. For example, a route originating from the subcluster with ID 1 in a network using the 2-byte ASN 64512 carries the community subcluster:64512:1.

To filter routes, you create a routing policy that matches this new extended community. Routing policies are always applied to primary routes. However, a routing policy is also applied to a secondary route in the following scenarios:

  • There is no subcluster extended community associated with the route.

  • The self subcluster ID does not match the subcluster ID associated with the route.

Figure 1 shows a data center network topology. All routing policies are configured on virtual networks in the main data center, POP0. Consider the following example routing policy:
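The sketch below is illustrative only; it assumes ASN 64512 and paraphrases the policy terms rather than reproducing exact Contrail Command syntax.

    # Match routes by their origin subcluster and adjust local preference;
    # reject anything that does not carry a matching community.
    term 1: from community subcluster:64512:1  then local-preference 150, accept
    term 2: from community subcluster:64512:2  then local-preference 200, accept
    term 3: then reject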

Here, 1 and 2 are the subcluster IDs of subclusters POP1 and POP2, respectively.

In this example, the local preference (LP) is changed for routes directed to POP0 from subclusters POP1 and POP2. Routes that do not match the extended community are rejected. Default routes with no extended community are also rejected.

Provisioning a Remote Compute Cluster

Contrail Networking enables you to provision remote compute using an instances.yaml file. Installing a Contrail Cluster using Contrail Command and instances.yml shows a bare minimum configuration. The YAML file described in this section builds upon that minimum configuration and uses Figure 1 as an example data center network topology.

Figure 1: Example Multi-Cluster Topology

In this topology, there is one main data center (pop0) and two remote data centers (pop1 and pop2). pop0 contains two subclusters: one for pop1 and the other for pop2. Each subcluster has two control nodes. The control nodes within a subcluster, for example 10.0.0.9 and 10.0.0.10, communicate with each other through iBGP.

Communication between the control nodes within a subcluster and the remote data center is through the SDN Gateway; there is no direct connection. For example, the remote compute in pop1 (IP address 10.20.0.5) communicates with the control nodes (IP addresses 10.0.0.9 and 10.0.0.10) in subcluster 1 through the SDN Gateway.

To configure remote compute in the YAML file:

  1. First, create the remote locations, or subclusters. In this example, we create data centers 2 and 3 (named pop1 and pop2, respectively) and define a unique ASN for each. Subcluster names must also be unique.

  2. Create the control nodes for pop1 and pop2, and assign an IP address and role to each. These IP addresses are the local IP addresses. In this example, there are two control nodes for each subcluster.

  3. Now, create the remote compute nodes for pop1 and pop2 and assign an IP address and role. In this example, there are two remote compute nodes for each data center. The 10.60.0.x addresses are the management IP addresses for the control service.

The entire YAML file is shown below.

Example instances.yaml with subcluster configuration
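The following is a minimal, hedged sketch of such a file rather than a complete deployment configuration: the remote_locations block and the SUBCLUSTER/location role parameters follow contrail-ansible-deployer conventions, and the ASNs, host names, and any addresses not taken from Figure 1 are illustrative.

    ## Global settings; replace <contrail_version> as described in the note below.
    contrail_configuration:
      CONTRAIL_CONTAINER_TAG: <contrail_version>

    ## Step 1: define the subclusters, each with a unique name and ASN.
    remote_locations:
      pop1:
        BGP_ASN: 64513
        SUBCLUSTER: pop1
      pop2:
        BGP_ASN: 64514
        SUBCLUSTER: pop2

    instances:
      ## Step 2: control nodes for each subcluster, hosted in the main data center.
      control-pop1-1:
        provider: bms
        ip: 10.0.0.9
        roles:
          control:
            location: pop1
      control-pop1-2:
        provider: bms
        ip: 10.0.0.10
        roles:
          control:
            location: pop1

      ## Step 3: remote compute nodes, pointed at the management addresses
      ## (10.60.0.x) of their subcluster's control service.
      compute-pop1-1:
        provider: bms
        ip: 10.20.0.5
        roles:
          vrouter:
            CONTROL_NODES: 10.60.0.9,10.60.0.10
            SUBCLUSTER: pop1
          openstack_compute:

    ## The pop2 control and compute nodes follow the same pattern.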

Note:

Replace <contrail_version> with the correct contrail_container_tag value for your Contrail Networking release. The respective contrail_container_tag values are listed in README Access to Contrail Registry.

Automatically Deploy Remote Compute Using RHOSP/TripleO

A distributed compute node (DCN) architecture is designed for edge use cases, allowing remote compute and storage nodes to be deployed remotely while sharing a common centralized control plane. The DCN architecture allows you to strategically position workloads closer to your operational needs for higher performance.

Starting in Contrail Networking 21.4, you can deploy remote compute automatically using RHOSP/TripleO for edge use cases.

Example Topology

You can build the setup in different ways, depending on how the control plane elements are provided. Figure 2 shows the example setup for deploying remote compute automatically.

Figure 2: Example Setup for Deploying the Remote Compute Automatically

In this example:

  • The setup is described without spine-and-leaf or DCN details. See Figure 2.

  • The description focuses primarily on the Contrail-specific configuration. All scripts provided are examples only. For deployment preparation instructions, see the Red Hat documentation.

  • All control plane functions are provided as virtual machines hosted on the KVM hosts:

    • VM 1—Kubernetes managed: Contrail Control plane (Kubernetes master)

    • VM 2—Kubernetes managed: Contrail Control service for remote compute (non-master Kubernetes node with a subcluster label)

    • VM 3—RHOSP undercloud

    • VM 4—RHOSP overcloud: OpenStack Controller

    • VM 5—RHOSP overcloud: OpenStack remote compute with subcluster param

  • The Contrail control plane uses a Kubernetes cluster. You can do the same with OpenShift.

Prepare Kubernetes Managed Hosts

To prepare Kubernetes managed hosts:

  1. Create two machines, one for the Contrail master and one for the Contrail controller, with the following specifications:

    • CentOS 7

    • 32 GB RAM

    • 80 GB SSD

  2. Deploy the Contrail Control plane in a Kubernetes cluster with at least one worker node using tf-operator.

    The worker nodes are used for the Contrail control services that serve a subcluster (one worker for testing, a minimum of two for production). For OpenShift, see the Contrail Control plane's readme file.

    If RHOSP uses TLS everywhere, you must deploy the Contrail Control plane with a CA bundle that includes both your own root CA and IPA CA data. For example:
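    A hedged sketch (the file paths are illustrative, and how the bundle is supplied to the control plane depends on your tf-operator deployment):

      # Concatenate your own root CA with the IPA CA (as found on an
      # IPA-enrolled host) into a single bundle, then provide that bundle
      # to the Contrail Control plane deployment.
      cat root-ca.pem /etc/ipa/ca.crt > ca-bundle.pem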

  3. Label the worker node(s) with a subcluster label. For example:
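    The label key and value below are hypothetical; use whatever key your manager manifest's node selector expects (see step 5).

      # Label a worker so the subcluster's control service is scheduled on it.
      kubectl label node worker-1 subcluster=pop1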

  4. Ensure Kubernetes nodes can:

    • Connect to external, internal API, and tenant RHOSP networks.

    • Resolve RHOSP FQDNs for overcloud VIPs for external, internal API, and Control plane networks.

      You can obtain the FQDNs of overcloud nodes from /etc/hosts on one of the overcloud nodes.

    For example:
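    The entries below are illustrative only; the actual VIP addresses and domain names come from your overcloud deployment.

      # Example /etc/hosts entries on a Kubernetes node for the external,
      # internal API, and ctlplane overcloud VIP FQDNs:
      192.0.2.100   overcloud.localdomain
      10.10.0.100   overcloud.internalapi.localdomain
      10.0.0.100    overcloud.ctlplane.localdomain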

  5. Edit the manager manifest to add one more control with a node selector and a subcluster parameter.

    Add a record to each subcluster's controls:
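    A hedged fragment; the exact Manager CR field names depend on your tf-operator version, and the label and subcluster name are illustrative.

      # Appended to the list of control services in the manager manifest:
      - metadata:
          name: control-pop1
        spec:
          commonConfiguration:
            nodeSelector:
              subcluster: pop1      # matches the worker label from step 3
          serviceConfiguration:
            subcluster: pop1        # the subcluster this control serves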

Prepare OpenStack Managed Hosts

To prepare OpenStack managed hosts:

  1. Prepare OpenStack hosts and run undercloud setup.

  2. Run the following script to generate the remote compute heat templates for kernel, DPDK, and SR-IOV:

    This script generates a network_data_rcomp.yaml file and the set of files for each subcluster. For example:

    • tripleo-heat-templates/roles/RemoteCompute1.yaml

    • tripleo-heat-templates/roles/RemoteContrailDpdk1.yaml

    • tripleo-heat-templates/roles/RemoteContrailSriov1.yaml

    • tripleo-heat-templates/environments/contrail/rcomp1-env.yaml

    • tripleo-heat-templates/network/config/contrail/compute-nic-config-rcomp1.yaml

    • tripleo-heat-templates/network/config/contrail/contrail-dpdk-nic-config-rcomp1.yaml

    • tripleo-heat-templates/network/config/contrail/contrail-sriov-nic-config-rcomp1.yaml

  3. Ensure that the generated files and other templates are customized to your setup (storage, network CIDRs, and routes). For more information, see the Red Hat documentation.

  4. Prepare Contrail templates using the generated network data file:

    1. Modify contrail-services.yaml to provide data about the Contrail Control plane on Kubernetes. For example:
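      The fragment below is a hedged sketch; it assumes the External* parameters used when the Contrail control plane runs outside the overcloud, and the address is illustrative.

        parameter_defaults:
          # Point the overcloud at the externally hosted Contrail control plane:
          ExternalContrailConfigIPs: 10.0.0.5
          ExternalContrailControlIPs: 10.0.0.5
          ExternalContrailAnalyticsIPs: 10.0.0.5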

    2. Enable Contrail TLS. If RHOSP does not use TLS everywhere, use a self-signed root CA.

      Prepare self-signed certificates in environments/contrail/contrail-tls.yaml.

      If RHOSP uses TLS everywhere, do the following:

      1. Make a CA bundle file.

      2. Prepare an environment file ca-bundle.yaml.

    3. Prepare central site-specific parameters.

    4. Prepare VIP mapping.

    5. Generate role and network files using heat templates.

  5. Deploy the central location.

  6. Enable keystone authentication for the Kubernetes cluster, if it is not already enabled.

  7. Deploy the remote sites. For example, export the environment from the central site.

    1. Deploy remote site 1.

    2. Follow the remaining steps in the Red Hat documentation.

  8. Deploy the edge sites with storage.

    Ensure that Nova cell_v2 host mappings are created in the Nova API database after the edge locations are deployed.

    Run the following command on the undercloud:
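    A hedged sketch: host discovery with nova-manage is the standard way to create cell_v2 host mappings, but the exact invocation depends on your RHOSP release, and the host and container names here are illustrative.

      # From the undercloud, trigger host discovery on a central controller
      # so the new edge computes are mapped in the Nova API database:
      ssh heat-admin@<controller-ip> \
          sudo podman exec nova_api \
          nova-manage cell_v2 discover_hosts --by-service --verbose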

Monitoring Remote Compute Clusters in Contrail Command

Starting in Contrail Networking Release 21.4.L1, you can use the Contrail Command graphical user interface (GUI) to monitor remote compute routing clusters.

You can gather the following data in an easy-to-read graphical presentation about any remote compute routing cluster in your environment:

  • Nodes

  • Updates Sent per Node

  • Updates Received per Node

  • BGP CPU Share per Node

  • BGP Memory Usage per Node

To view monitoring data about remote compute routing clusters, navigate to the Monitoring > Dashboards > Routing Cluster tab in Contrail Command.

Monitoring and Configuring BGP Routers for Remote Compute in Contrail Command

Starting in Contrail Networking Release 21.4.L1, enhancements were made to the Contrail Command graphical user interface (GUI) that allow you to better monitor and configure BGP routers with remote compute routing clusters.

These enhancements include:

  • On the Create BGP Router page, reached from the Infrastructure > Cluster > Advanced > BGP Routers page, you can now connect a remote routing cluster to a BGP router.

    This option is available when you navigate to the Advanced Options > Routing Cluster Id drop-down menu, where you can select a remote compute routing cluster.

  • On the BGP Routers tab in the Infrastructure > Cluster > Advanced page, you can now view the BGP router connections to the remote compute routing clusters.

Note that you cannot edit or change the routing cluster of an already configured BGP router within Contrail Command. For instance, you cannot edit or change the routing cluster from the Routing Cluster Id drop-down menu on the Infrastructure > Cluster > Advanced Options > BGP Router > Create > Advanced Options > Routing Cluster-ID > Routing Cluster ASN page.

Viewing the Virtual Router Connected to Remote Compute Clusters in Contrail Command

Starting in Contrail Networking Release 21.4.L1, you can view the virtual router connected to a remote compute routing cluster in Contrail Command. The virtual routers appear in the table on the Infrastructure > Cluster > Advanced > Virtual Routers page.

BGP as a Service (BGPaaS) Support in Remote Compute Clusters

Starting in Contrail Networking Release 21.4.L2, you can configure BGP as a Service (BGPaaS) in remote compute clusters.

You configure BGPaaS in a remote compute cluster in the same manner that you would configure it outside of a remote compute cluster. For information on configuring BGPaaS in Contrail Networking, see BGP as a Service.

Change History Table

Feature support is determined by the platform and release you are using. Use Feature Explorer to determine if a feature is supported on your platform.

Release      Description

2005         Contrail Networking Release 2005 supports inter subcluster route filtering (Beta).

21.4.L2      Starting in Contrail Networking Release 21.4.L2, you can configure BGP as a Service (BGPaaS) in remote compute clusters.

21.4.L1      Starting in Contrail Networking Release 21.4.L1, you can use the Contrail Command graphical user interface (GUI) to monitor remote compute routing clusters.

21.4.L1      Starting in Contrail Networking Release 21.4.L1, enhancements were made to the Contrail Command graphical user interface (GUI) that allow users to better monitor and configure BGP routers with remote compute routing clusters.

21.4.L1      Starting in Contrail Networking Release 21.4.L1, you can view the virtual router connected to a remote compute routing cluster in Contrail Command.

21.4         Starting in Contrail Networking 21.4, you can deploy remote compute automatically using RHOSP/TripleO for edge use cases.