Install Single Cluster Multi-Network CN2

SUMMARY See examples of how to install single cluster CN2 in a deployment where Kubernetes traffic and CN2 traffic go over separate networks.

In a single cluster multi-network deployment:

  • CN2 is the networking platform and CNI plug-in for that cluster. The Contrail controller runs in the Kubernetes control plane, and the Contrail data plane components run on all nodes in the cluster.

  • Cluster traffic is separated onto two networks. The Kubernetes control plane traffic traverses one network while Contrail control and data traffic traverse the second network. It's also possible (but less common) to separate traffic onto more than two networks, but this is beyond the scope of these examples.

Figure 1 shows the cluster that you'll create if you follow this single cluster multi-network example. The cluster consists of a single control plane node, two worker nodes, and two subnets.

All nodes shown can be VMs or bare metal servers.

Figure 1: Single Cluster Multi-Network CN2

Kubernetes control plane traffic goes over the 172.16.0.0/24 fabric virtual network while Contrail control and data traffic go over the 10.16.0.0/24 fabric virtual network. The fabric networks provide the underlay over which the cluster runs.

The local administrator is shown attached to a separate network reachable through a gateway. This is typical of many installations where the local administrator manages the fabric and cluster from the corporate LAN. In the procedures that follow, we refer to the local administrator station as your local computer.

Note:

The data center fabric, shown in this example as two subnets, connects all the cluster nodes together. In real installations, the data center fabric is a network of spine and leaf switches that provides the physical connectivity for the cluster.

In an Apstra-managed data center, this connectivity would be specified through the overlay virtual networks that you create across the underlying fabric switches.

The procedures in this section show basic examples of how you can use the provided manifests to create the specified CN2 deployment. You're not limited to the deployment described in this section nor are you limited to using the provided manifests. CN2 supports a wide range of deployments that are too numerous to cover in detail. Use the provided examples as a starting point to roll your own manifest tailored to your specific situation.

Install Single Cluster Multi-Network CN2 Running Kernel Mode Data Plane in Release 23.1

Use this procedure to install CN2 in a single cluster multi-network deployment running a kernel mode data plane in release 23.1.

The manifest that you will use in this example procedure is single-cluster/single_cluster_deployer_example.yaml. The procedure assumes that you've placed this manifest into a manifests directory.

  1. Create a Kubernetes cluster. You can follow the example procedure in Create a Kubernetes Cluster or you can use any other method. Create the cluster with the following characteristics:
    • The cluster has no CNI plug-in.
    • Node Local DNS is disabled.
  2. Modify single_cluster_deployer_example.yaml to configure the Contrail control and data network.

    You specify the Contrail network using a contrail-network-config ConfigMap. The single_cluster_deployer_example.yaml manifest contains a commented example showing how to configure a contrail-network-config ConfigMap.

    Either uncomment those lines and specify the appropriate subnet and gateway or copy and paste the following into the manifest.
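
    A minimal sketch of what that ConfigMap can look like is shown below. The key names (networkConfig, controlDataNetworks), the contrail namespace, and the 10.16.0.254 gateway address are assumptions for illustration; defer to the commented example in your copy of the manifest if it differs.

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: contrail-network-config
        namespace: contrail   # assumed namespace; follow the commented example in the manifest
      data:
        # subnet is the Contrail control and data network (10.16.0.0/24 in this example);
        # the gateway address (10.16.0.254 here) is assumed - use your network's actual gateway
        networkConfig: |
          controlDataNetworks:
          - subnet: 10.16.0.0/24
            gateway: 10.16.0.254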

    The subnet and gateway that you specify are for the Contrail control and data network, which in our example is the 10.16.0.0/24 network.
  3. Apply the Contrail deployer manifest.
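
    For example, assuming you placed the edited manifest into the manifests directory and kubectl is pointing at your new cluster:

      kubectl apply -f manifests/single_cluster_deployer_example.yaml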

    It may take a few minutes for the nodes and pods to come up.

  4. Use standard kubectl commands to check on the deployment.
    1. Show the status of the nodes.
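      For example:

        kubectl get nodes
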
      All nodes should now show a STATUS of Ready. If the nodes are not up, wait a few minutes and check again.
    2. Show the status of the pods.
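      For example, list the pods in all namespaces:

        kubectl get pods -A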

      All pods should now have a STATUS of Running. If not, wait a few minutes for the pods to come up.

    3. If some pods remain down, debug the deployment as you normally do. Use the kubectl describe command to see why a pod is not coming up. A common error is a network or firewall issue preventing the node from reaching the Juniper Networks repository.
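
      For example (substitute the pod name and namespace reported by kubectl get pods):

        kubectl describe pod <pod-name> -n <namespace>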

      Here is an example of how to track down a DNS problem.

      Log in to each node having a problem and check name resolution for enterprise-hub.juniper.net. For example:
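
        # Quick name-resolution check from the node; the -c 3 option just limits the ping count.
        # If DNS is broken, ping typically reports a name resolution failure instead of sending packets.
        ping -c 3 enterprise-hub.juniper.net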

      Note:

      Although enterprise-hub.juniper.net is not configured to respond to pings, we can use the ping command to check domain name resolution.

      If the domain name does not resolve, check the DNS server configuration to make sure it's correct.

      For example, on an Ubuntu system running systemd-resolved, check that /etc/resolv.conf is linked to /run/systemd/resolve/resolv.conf (as described in step 5 of Before You Install) and that your DNS server is listed correctly in that file.
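
      A quick way to verify both from the node:

        ls -l /etc/resolv.conf                  # should be a symlink to /run/systemd/resolve/resolv.conf
        cat /run/systemd/resolve/resolv.conf    # confirm your DNS server appears in the nameserver entries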

    4. If you run into a problem you can't solve or if you made a mistake during the installation, simply uninstall CN2 and start over. To uninstall CN2, see Uninstall CN2 in Release 23.1.
  5. (Optional) Run postflight checks. See Run Preflight and Postflight Checks in Release 23.1.

Install Single Cluster Multi-Network CN2 Running DPDK Data Plane in Release 23.1

Use this procedure to install CN2 in a single cluster multi-network deployment running a DPDK data plane in release 23.1.

The manifest that you will use in this example procedure is single-cluster/single_cluster_deployer_example.yaml. The procedure assumes that you've placed this manifest into a manifests directory.

  1. Create a Kubernetes cluster. You can follow the example procedure in Create a Kubernetes Cluster or you can use any other method. Create the cluster with the following characteristics:
    • The cluster has no CNI plug-in.
    • Node Local DNS is disabled.
    • Multus version 0.3.1 is enabled.
  2. Modify single_cluster_deployer_example.yaml to configure the Contrail control and data network.

    You specify the Contrail network using a contrail-network-config ConfigMap. The single_cluster_deployer_example.yaml manifest contains a commented example showing how to configure a contrail-network-config ConfigMap.

    Either uncomment those lines and specify the appropriate subnet and gateway or copy and paste the following into the manifest.
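
    As in the kernel mode procedure, a minimal sketch of what that ConfigMap can look like is shown below. The key names (networkConfig, controlDataNetworks), the contrail namespace, and the 10.16.0.254 gateway address are assumptions for illustration; defer to the commented example in your copy of the manifest if it differs.

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: contrail-network-config
        namespace: contrail   # assumed namespace; follow the commented example in the manifest
      data:
        # subnet is the Contrail control and data network (10.16.0.0/24 in this example);
        # the gateway address (10.16.0.254 here) is assumed - use your network's actual gateway
        networkConfig: |
          controlDataNetworks:
          - subnet: 10.16.0.0/24
            gateway: 10.16.0.254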

    The subnet and gateway that you specify are for the Contrail control and data network, which in our example is the 10.16.0.0/24 network.
  3. Specify the DPDK nodes.
    For each node running DPDK, apply a label as shown below. Labeling the nodes in this way tells CN2 to use the DPDK configuration specified in the manifest.
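
    A minimal sketch of the labeling command, assuming the manifest's DPDK data plane section selects nodes on an agent-mode=dpdk label (the label key and value here are our assumption; check your copy of the manifest for the exact label it expects):

      kubectl label node <node-name> agent-mode=dpdk    # repeat for each node that will run the DPDK data plane
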
  4. Apply the Contrail deployer manifest.
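
    For example, assuming you placed the edited manifest into the manifests directory and kubectl is pointing at your new cluster:

      kubectl apply -f manifests/single_cluster_deployer_example.yaml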

    It may take a few minutes for the nodes and pods to come up.

  5. Use standard kubectl commands to check on the deployment.
    1. Show the status of the nodes.
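      For example:

        kubectl get nodes
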
      All nodes should now show a STATUS of Ready. If the nodes are not up, wait a few minutes and check again.
    2. Show the status of the pods.
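      For example, list the pods in all namespaces:

        kubectl get pods -A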

      All pods should now have a STATUS of Running. If not, wait a few minutes for the pods to come up.

    3. If some pods remain down, debug the deployment as you normally do. Use the kubectl describe command to see why a pod is not coming up. A common error is a network or firewall issue preventing the node from reaching the Juniper Networks repository.
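
      For example (substitute the pod name and namespace reported by kubectl get pods):

        kubectl describe pod <pod-name> -n <namespace>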

      Here is an example of how to track down a DNS problem.

      Log in to each node having a problem and check name resolution for enterprise-hub.juniper.net. For example:
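
        # Quick name-resolution check from the node; the -c 3 option just limits the ping count.
        # If DNS is broken, ping typically reports a name resolution failure instead of sending packets.
        ping -c 3 enterprise-hub.juniper.net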

      Note:

      Although enterprise-hub.juniper.net is not configured to respond to pings, we can use the ping command to check domain name resolution.

      If the domain name does not resolve, check the DNS server configuration to make sure it's correct.

      For example, on an Ubuntu system running systemd-resolved, check that /etc/resolv.conf is linked to /run/systemd/resolve/resolv.conf (as described in step 5 of Before You Install) and that your DNS server is listed correctly in that file.
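
      A quick way to verify both from the node:

        ls -l /etc/resolv.conf                  # should be a symlink to /run/systemd/resolve/resolv.conf
        cat /run/systemd/resolve/resolv.conf    # confirm your DNS server appears in the nameserver entries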

    4. If you run into a problem you can't solve or if you made a mistake during the installation, simply uninstall CN2 and start over. To uninstall CN2, see Uninstall CN2 in Release 23.1.
  6. (Optional) Run postflight checks. See Run Preflight and Postflight Checks in Release 23.1.