Install CN2 on Amazon EKS

SUMMARY See examples of how to install single cluster CN2 on Amazon EKS.

In a single cluster deployment, CN2 is the networking platform and CNI plug-in for that cluster. The Contrail controller and the Contrail data plane components run on worker nodes in the cluster.

Figure 1 shows a cluster of three worker nodes running the Contrail controller in an Amazon EKS cluster. The control plane nodes are managed by AWS and are not under user control.

Figure 1: CN2 on Amazon EKS

All communication between nodes in the cluster and between nodes and external sites takes place over the AWS network, in the same manner as a standard Amazon EKS cluster.

The procedures in this section show basic examples of how you can use the provided manifests to create the specified CN2 deployment. You're not limited to the deployment described in this section nor are you limited to using the provided manifests. CN2 supports a wide range of deployments that are too numerous to cover in detail. Use the provided examples as a starting point to roll your own manifest tailored to your specific situation.

Table 1: Single Cluster Examples
Release | Kernel Mode Data Plane                                                     | DPDK Data Plane
22.4    | Install Single Cluster CN2 Running Kernel Mode Data Plane in Release 22.4 | Not supported
Note:

The provided manifests might not be compatible between releases. Make sure you use the manifests for the release that you're running. In practice, this means that you should not modify the image tag in the supplied manifests.

Install Single Cluster CN2 Running Kernel Mode Data Plane in Release 22.4

Use this procedure to install CN2 in an Amazon EKS cluster running a kernel mode data plane in release 22.4.

This example procedure uses Terraform to deploy a basic Amazon EKS cluster with a VPC. The Terraform configuration does the following:

  • creates a new sample VPC with 3 private subnets and 3 public subnets

  • creates an Internet gateway for the public subnets and a NAT gateway for the private subnets

  • creates the EKS cluster control plane with one managed node group (desired node count set to 3)

  • deploys CN2 as the Amazon EKS cluster CNI

  1. Clone the AWS Integration and Automation repository. This is where the Terraform manifests are stored.
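    For example, assuming the Terraform manifests are in the aws-ia/terraform-aws-eks-blueprints repository on GitHub (confirm the repository URL in your documentation):
      git clone https://github.com/aws-ia/terraform-aws-eks-blueprints.git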
  2. Add your enterprise-hub.juniper.net access credentials to terraform-aws-eks-blueprints/examples/eks-cluster-with-cn2/main.tf.
    The credentials that you add must be base64-encoded. See Configure Repository Credentials for an example of how to obtain and encode your credentials and apply them to this Terraform manifest.
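    For example, a minimal sketch of base64-encoding a username and access token pair on Linux or macOS (the exact credential format and the field to edit in main.tf are described in Configure Repository Credentials):
      echo -n '<username>:<access-token>' | base64
    Paste the encoded string into the credential field in main.tf.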
  3. Run terraform init. This command initializes a working directory containing Terraform configuration files.
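    For example, from the example directory in the cloned repository:
      cd terraform-aws-eks-blueprints/examples/eks-cluster-with-cn2
      terraform init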
  4. Run terraform plan. This command creates an execution plan, which lets you preview the changes that Terraform plans to make to your infrastructure.
    Verify the resources that Terraform will create before you apply the plan.
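    For example:
      terraform plan
    The end of the output summarizes the proposed changes with a line similar to Plan: <n> to add, 0 to change, 0 to destroy.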
  5. Run terraform apply. This command executes the actions proposed in the Terraform plan you just created.
    Enter yes to apply and create the cluster.
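    For example:
      terraform apply
    Terraform shows the plan again and waits for confirmation before creating any resources.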
  6. Obtain the cluster name and other details of your new Amazon EKS cluster from the Terraform output or from the AWS Console.
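    For example, to print the output values defined by the example configuration (the exact output names depend on the manifest):
      terraform output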
  7. Copy the kubeconfig onto your local computer.
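    One common way to do this, assuming the AWS CLI is installed and configured on your local computer, is to generate the kubeconfig entry directly:
      aws eks update-kubeconfig --region <aws-region> --name <cluster-name>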
  8. Check over your new cluster.
    List your worker nodes and list all the pods:
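    For example:
      kubectl get nodes
      kubectl get pods -A
    The worker nodes should report a Ready status, and the Contrail pods should be Running.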
  9. Remove the aws-node daemonset. This daemonset installs the Amazon VPC CNI (vpc-cni) on the worker nodes, which is not needed because CN2 provides the CNI for this cluster.
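    For example, the aws-node daemonset runs in the kube-system namespace:
      kubectl delete daemonset aws-node -n kube-system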
  10. Log in to each worker node to remove the /etc/cni/net.d/10-aws.conflist file and reboot.
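    For example, a minimal sketch assuming Amazon Linux worker nodes that are reachable over SSH (the login user depends on the AMI):
      ssh ec2-user@<worker-node-ip>
      sudo rm /etc/cni/net.d/10-aws.conflist
      sudo reboot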