Install Single Cluster Shared Network CN2

SUMMARY See examples of how to install single cluster CN2 in a deployment where Kubernetes traffic and CN2 traffic share the same network.

In a single cluster shared network deployment:

  • CN2 is the networking platform and CNI plug-in for that cluster. The Contrail controller runs in the Kubernetes control plane, and the Contrail data plane components run on all nodes in the cluster.

  • Kubernetes and CN2 traffic share a single network.

Figure 1 shows the cluster that you'll create if you follow the single cluster shared network example. The cluster consists of a single control plane node and two worker nodes.

All nodes shown can be VMs or bare metal servers.

Figure 1: Single Cluster Shared Network CN2

All communication between nodes in the cluster and between nodes and external sites takes place over the single 172.16.0.0/24 fabric virtual network. The fabric network provides the underlay over which the cluster runs.

The local administrator is shown attached to a separate network reachable through a gateway. This is typical of many installations where the local administrator manages the fabric and cluster from the corporate LAN. In the procedures that follow, we refer to the local administrator station as your local computer.

Note:

Connecting all cluster nodes together is the data center fabric, which is shown in the example as a single subnet. In real installations, the data center fabric is a network of spine and leaf switches that provide the physical connectivity for the cluster.

In an Apstra-managed data center, this connectivity would be specified through the overlay virtual networks that you create across the underlying fabric switches.

The procedures in this section show basic examples of how you can use the provided manifests to create the specified CN2 deployment. You're not limited to the deployment described in this section, nor are you limited to using the provided manifests. CN2 supports a wide range of deployments that are too numerous to cover in detail. Use the provided examples as a starting point to create your own manifests tailored to your specific situation.

Install Single Cluster Shared Network CN2 Running Kernel Mode Data Plane

Use this procedure to install CN2 in a single cluster shared network deployment running a kernel mode data plane.

The manifest that you will use in this example procedure is single-cluster/single_cluster_deployer_example.yaml. The procedure assumes that you've placed this manifest into a manifests directory.

  1. Create a Kubernetes cluster. You can follow the example procedure in Create a Kubernetes Cluster or you can use any other method. Create the cluster with the following characteristics:
    • The cluster has no CNI plug-in.
    • Node Local DNS is disabled.
  2. Apply the Contrail deployer manifest.
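    For example, from a machine whose kubeconfig points at the new cluster, and assuming you placed the manifest in a local manifests directory as described above:

      kubectl apply -f manifests/single_cluster_deployer_example.yaml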

    It may take a few minutes for the nodes and pods to come up.

  3. Use standard kubectl commands to check on the deployment.
    1. Show the status of the nodes.
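      For example, you can list the nodes and their status with:

        kubectl get nodes
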
      The nodes should now show a STATUS of Ready. If the nodes are not up, wait a few minutes and check again.
    2. Show the status of the pods.
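      For example, to list the pods in all namespaces, including the Contrail pods:

        kubectl get pods -A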

      All pods should now have a STATUS of Running. If not, wait a few minutes for the pods to come up.

    3. If some pods remain down, debug the deployment as you normally do. Use the kubectl describe command to see why a pod is not coming up. A common error is a network or firewall issue preventing the node from reaching the Juniper Networks repository.
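      For example (the pod name and namespace below are placeholders; use the values reported by kubectl get pods):

        kubectl describe pod <pod-name> -n <namespace>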

      Here is an example of a DNS problem.

      Log in to each node having a problem and check name resolution for enterprise-hub.juniper.net. For example:
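      (This check assumes the ping utility is installed on the node.)

        ping -c 3 enterprise-hub.juniper.net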

      Note:

      Although enterprise-hub.juniper.net is not configured to respond to pings, you can still use the ping command to check domain name resolution.

      If ping reports an error such as "Temporary failure in name resolution" instead of resolving the name to an IP address, the domain name is not resolving. Check the domain name server configuration to make sure it's correct.

      For example, on an Ubuntu system running systemd-resolved, check that /etc/resolv.conf is linked to /run/systemd/resolve/resolv.conf as described in step 5 in Before You Install, and check that your DNS server is listed correctly in that file.

    4. If you run into a problem you can't solve or if you made a mistake during the installation, simply uninstall CN2 and start over. To uninstall CN2, see Uninstall CN2.
  4. (Optional) Run postflight checks. See Run Preflight and Postflight Checks.

Install Single Cluster Shared Network CN2 Running DPDK Data Plane

Use this procedure to install CN2 in a single cluster shared network deployment running a DPDK data plane.

The manifest that you will use in this example procedure is single-cluster/single_cluster_deployer_example.yaml. The procedure assumes that you've placed this manifest into a manifests directory.

  1. Create a Kubernetes cluster. You can follow the example procedure in Create a Kubernetes Cluster or you can use any other method. Create the cluster with the following characteristics:
    • The cluster has no CNI plug-in.
    • Node Local DNS is disabled.
    • Multus (version 0.3.1) is enabled.
  2. Specify the DPDK nodes.
    For each node running DPDK, label the node as shown in the example below. When you label the nodes in this way, CN2 uses the DPDK configuration specified in the manifest.
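    For example (the node name is a placeholder, and the agent-mode=dpdk label key is an assumption; confirm it against the DPDK node selector in your deployer manifest):

      kubectl label node <node-name> agent-mode=dpdk
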
  3. Apply the Contrail deployer manifest.
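    For example, again assuming the manifest is in a local manifests directory:

      kubectl apply -f manifests/single_cluster_deployer_example.yaml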

    It may take a few minutes for the nodes and pods to come up.

  4. Use standard kubectl commands to check on the deployment.
    1. Show the status of the nodes.
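      For example:

        kubectl get nodes
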
      The nodes should now show a STATUS of Ready. If the nodes are not up, wait a few minutes and check again.
    2. Show the status of the pods.
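      For example:

        kubectl get pods -A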

      All pods should now have a STATUS of Running. If not, wait a few minutes for the pods to come up.

    3. If some pods remain down, debug the deployment as you normally do. Use the kubectl describe command to see why a pod is not coming up. A common error is a network or firewall issue preventing the node from reaching the Juniper Networks repository.
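      For example (placeholders as before; use the values reported by kubectl get pods):

        kubectl describe pod <pod-name> -n <namespace>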

      Here is an example of a DNS problem.

      Log in to each node having a problem and check name resolution for enterprise-hub.juniper.net. For example:
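      (This check assumes the ping utility is installed on the node.)

        ping -c 3 enterprise-hub.juniper.net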

      Note:

      Although enterprise-hub.juniper.net is not configured to respond to pings, you can still use the ping command to check domain name resolution.

      If ping reports an error such as "Temporary failure in name resolution" instead of resolving the name to an IP address, the domain name is not resolving. Check the domain name server configuration to make sure it's correct.

      For example, on an Ubuntu system running systemd-resolved, check that /etc/resolv.conf is linked to /run/systemd/resolve/resolv.conf as described in step 5 in Before You Install, and check that your DNS server is listed correctly in that file.

    4. If you run into a problem you can't solve or if you made a mistake during the installation, simply uninstall CN2 and start over. To uninstall CN2, see Uninstall CN2.
  5. (Optional) Run postflight checks. See Run Preflight and Postflight Checks.