Install Multi-Cluster Shared Network CN2

SUMMARY See examples of how to install multi-cluster CN2 in a deployment where Kubernetes traffic and CN2 traffic share the same network within each cluster.

In a multi-cluster shared network deployment:

  • CN2 is the central networking platform and CNI plug-in for multiple distributed workload clusters. The Contrail controller runs in the Kubernetes control plane in the central cluster, and the Contrail data plane components run on the worker nodes in the distributed workload clusters.

  • Kubernetes and CN2 traffic within each cluster share a single network.

Figure 1 shows the clusters you'll create if you follow this multi-cluster setup. The central cluster consists of three Kubernetes control plane nodes that run the Contrail controller. This centralized Contrail controller provides the networking for the distributed workload clusters. In this example, there is one distributed workload cluster, which consists of a single control plane node and two worker nodes. The worker nodes in the distributed workload cluster run the Contrail data plane components.

Figure 1: Multi-Cluster CN2

The central cluster attaches to the 172.16.0.0/24 network, while the distributed workload cluster attaches to the 10.16.0.0/24 network. A gateway between the two networks provides connectivity between the clusters, as well as external access for downloading images from Juniper Networks repositories.

The local administrator is shown attached to a separate network reachable through a gateway. This is typical of many installations where the local administrator manages the fabric and cluster from the corporate LAN. In the procedures that follow, we refer to the local administrator station as your local computer.

Note:

All cluster nodes are connected by the data center fabric, which this example simplifies to a single subnet per cluster. In real installations, the data center fabric is a network of spine and leaf switches that provides the physical connectivity for the cluster.

In an Apstra-managed data center, this connectivity would be specified through the overlay virtual networks that you create across the underlying fabric switches.

To install CN2 in a multi-cluster deployment, you first create the central cluster and then attach the distributed workload clusters to it one by one. As with the single-cluster deployment, you'll start with a fresh cluster that has no CNI plug-in installed and then install CN2 on it.

The procedures in this section show basic examples of how you can use the provided manifests to create the specified CN2 deployment. You're not limited to the deployment described in this section, nor are you limited to using the provided manifests. CN2 supports a wide range of deployments that are too numerous to cover in detail. Use the provided examples as a starting point to roll your own manifests for your specific situation.

Install Multi-Cluster Shared Network CN2 in Release 23.1

Use this procedure to install CN2 in a multi-cluster shared network deployment running a kernel mode data plane in release 23.1.

The manifest that you will use in this example procedure is multi-cluster/central_cluster_deployer_example.yaml. The procedure assumes that you've placed this manifest into a manifests directory.
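For example, assuming you've downloaded and extracted the CN2 manifests bundle on your local computer (the extracted-bundle path below is a placeholder for wherever you extracted it):

    mkdir manifests
    cp <extracted-bundle>/multi-cluster/central_cluster_deployer_example.yaml manifests/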

  1. Create the central cluster.

    Follow the example procedure in Create a Kubernetes Cluster, or use any other method. Create the cluster with the following characteristics:

    • The cluster has no CNI plug-in installed.
    • Node Local DNS is disabled.

    Tailor the procedure to the desired number of control plane and worker nodes.
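    If you use Kubespray to create the cluster, for example, settings similar to the following in your inventory's group_vars meet these requirements. This is a minimal sketch; the variable names assume a recent Kubespray release:

        kube_network_plugin: cni      # configure kubelet for CNI without installing a plug-in
        enable_nodelocaldns: false    # disable Node Local DNS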

  2. Install CN2 on the central cluster.
    1. Apply the central cluster manifest (central_cluster_deployer_example.yaml). This manifest creates the namespaces and other resources required by the central cluster. It also creates the contrail-k8s-deployer deployment, which deploys CN2 and provides life cycle management for the CN2 components.
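        For example, assuming the manifest is in your manifests directory:

            kubectl apply -f manifests/central_cluster_deployer_example.yaml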
    2. Check that all pods are now up. This might take a few minutes.
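        For example:

            kubectl get pods -A

        Repeat the command until all pods show a Running (or Completed) status.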
    You've now created the central cluster.
  3. Follow Attach a Workload Cluster in Release 23.1 to create and attach a distributed workload cluster to the central cluster.
  4. Repeat step 3 for every workload cluster you want to create and attach.
  5. (Optional) Run postflight checks. See Run Preflight and Postflight Checks in Release 23.1.
    Note:

    Run postflight checks from the central cluster only.