Install Single Cluster CN2 on Amazon EKS

SUMMARY See examples on how to install single cluster CN2 on Amazon EKS.

In a single cluster deployment, CN2 is the networking platform and CNI plug-in for that cluster. Figure 1 shows an Amazon EKS cluster with three worker nodes running the Contrail controller. The Amazon EKS control plane communicates with worker nodes in the user VPC over an Elastic Network Interface (ENI). In a typical deployment, there would be additional worker nodes that run the user workloads.

Figure 1: CN2 on Amazon EKS

The procedures in this section show basic examples of how you can use the provided Amazon EKS blueprints, Helm charts, and YAML manifests to install CN2 on an Amazon EKS cluster. We cover installing CN2 both in a brand-new cluster and in an existing cluster.

You're not limited to the deployments described in these sections, nor are you limited to using the provided files and manifests. CN2 supports a wide range of deployments that are too numerous to cover in detail. Use the provided examples as a starting point to roll your own manifests tailored to your specific situation.

Install Single Cluster CN2 Using Amazon EKS Blueprints in Release 23.1

Use this procedure to install CN2 using Amazon EKS blueprints for Terraform in release 23.1.

The blueprint that we provide performs the following:

  • Creates a new sample VPC with three private subnets and three public subnets

  • Creates an Internet gateway for the public subnets and a NAT gateway for the private subnets

  • Creates an Amazon EKS cluster control plane with one managed node group (desired node count set to 3)

  • Deploys CN2 as the Amazon EKS cluster CNI

  1. Clone the AWS Integration and Automation repository. This is where the Terraform manifests are stored. (The sketch after this procedure shows steps 1 through 8 end to end.)
  2. Add your enterprise-hub.juniper.net access credentials to terraform-aws-eks-blueprints/examples/eks-cluster-with-cn2/variables.tf for the container_pull_secret variable.
    The credentials that you add must be base64-encoded. See Configure Repository Credentials for an example of how to obtain and encode your credentials.
  3. Run terraform init. This command initializes a working directory containing Terraform configuration files.
  4. Run terraform plan. This command creates an execution plan, which lets you preview the changes that Terraform plans to make to your infrastructure.
    Verify the resources created by this execution.
  5. Run terraform apply. This command executes the Terraform plan you just created.
    Enter yes to apply and create the cluster.
  6. Obtain the cluster name and other details of your new Amazon EKS cluster from the Terraform output or from the AWS Console.
  7. Copy the kubeconfig onto your local computer.
  8. Check over your new cluster.
    List your worker nodes, then list all the pods. (Example kubectl commands appear in the sketch after this procedure.)
  9. (Optional) Run postflight checks. See Run Preflight and Postflight Checks in Release 23.1.
  10. If you run into problems, clean up the cluster and try the installation again.
    To clean up the cluster, destroy the Kubernetes add-ons, the Amazon EKS cluster, and the VPC. You must run these terraform commands in the examples/eks-cluster-with-cn2 directory. Then destroy any remaining resources. (A teardown sketch follows this procedure.)
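
The sketch below runs steps 1 through 8 end to end, assuming you have git, Terraform, the AWS CLI, and kubectl installed. The repository URL reflects the AWS Integration and Automation GitHub organization; the cluster name and region placeholders come from your Terraform output.

    # Clone the repository that stores the Terraform manifests (step 1).
    git clone https://github.com/aws-ia/terraform-aws-eks-blueprints.git
    cd terraform-aws-eks-blueprints/examples/eks-cluster-with-cn2

    # Step 2: add your base64-encoded credentials to variables.tf
    # (container_pull_secret) before continuing.

    # Initialize, preview, and apply (steps 3 through 5).
    terraform init
    terraform plan
    terraform apply    # enter yes when prompted

    # Copy the kubeconfig for the new cluster (steps 6 and 7).
    aws eks update-kubeconfig --name <cluster-name> --region <region>

    # Check over the new cluster (step 8).
    kubectl get nodes
    kubectl get pods -A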
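
If you need to clean up (step 10), destroy the resources in reverse order from the examples/eks-cluster-with-cn2 directory. The module names below follow the usual EKS blueprints example layout and are assumptions; confirm them in the example's .tf files before running.

    # Destroy the Kubernetes add-ons, the EKS cluster, and the VPC in turn.
    # Module names are illustrative.
    terraform destroy -target="module.eks_blueprints_kubernetes_addons" -auto-approve
    terraform destroy -target="module.eks_blueprints" -auto-approve
    terraform destroy -target="module.vpc" -auto-approve

    # Destroy any remaining resources.
    terraform destroy -auto-approve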

Install Single Cluster CN2 Using Helm Charts in Release 23.1

Use this procedure to install CN2 on an existing Amazon EKS cluster using Helm charts in release 23.1. In this example, the existing Amazon EKS cluster is running the VPC CNI.

  1. Add the Juniper Networks CN2 Helm repository. (The sketch after this procedure illustrates these steps.)
  2. Install CN2.
    See Configure Repository Credentials for one way to get your credentials.
  3. Use standard kubectl commands to check on the installation.
    Check that the nodes are up. If the nodes are not up, wait a few minutes and check again.

    Check that the pods have a STATUS of Running. If not, wait a few minutes for the pods to come up.

  4. (Optional) Run postflight checks. See Run Preflight and Postflight Checks in Release 23.1.
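
The commands below illustrate this procedure. The repository URL, chart name, namespace, and the value key for the pull secret are placeholders, not documented values; take the exact helm repo add and helm install arguments, including how to pass your base64-encoded credentials, from the CN2 release documentation.

    # Add the Juniper CN2 Helm repository (step 1).
    # <repo-url> is a placeholder; use the URL from the CN2 documentation.
    helm repo add juniper-cn2 <repo-url>
    helm repo update

    # Install CN2 (step 2). Chart name, namespace, and value key are
    # placeholders.
    helm install cn2 juniper-cn2/<chart-name> \
      --namespace contrail --create-namespace \
      --set <pull-secret-key>=<base64-credentials>

    # Check on the installation (step 3).
    kubectl get nodes
    kubectl get pods -A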

Install Single Cluster CN2 Using YAML Manifests in Release 23.1

Use this procedure to install CN2 using YAML manifests in release 23.1.

We use eksctl to create a cluster in this example, but you can use any other method as long as you remember to remove the default VPC CNI.

The manifests that you will use in this example procedure are amazon-eks/single-cluster/single_cluster_deployer_example.yaml and amazon-eks/single-cluster/cert-manager.yaml. The procedure assumes that you've placed these manifests into a manifests directory.

  1. Create an EKS cluster without a node group. (The sketch after this procedure shows an example eksctl command.)
    Take note of the service IP address subnet. You'll need it in a later step. By default, Amazon EKS assigns service IP addresses from either the 10.100.0.0/16 or the 172.20.0.0/16 CIDR block.
  2. Configure the service IP address subnet for the Contrail kubemanager. This subnet must match the service IP address subnet of the cluster.
    Edit the single_cluster_deployer_example.yaml manifest and look for the serviceV4Subnet configuration in the Kubemanager section. Change the subnet as necessary to match the service IP address subnet of the cluster. (See the excerpt after this procedure.)
  3. If desired, specify the three nodes where you want to run the Contrail controller.
    By default, the supplied manifest contains tolerations that allow the Contrail controller to tolerate any taint, which means the Contrail controller can install on any node. Use node selectors (or node affinity) to force the Contrail controller onto the nodes that you want. Then taint the first of those nodes to prevent other pods from being scheduled there, and repeat for the other two nodes. (An illustrative labeling and tainting sequence follows this procedure.)
  4. Apply the cert-manager manifest. The cert-manager provides encryption for all CN2 management and control plane connections.
  5. Apply the Contrail deployer manifest. (Both apply commands appear in the sketch after this procedure.)
  6. Attach managed or self-managed worker nodes running an Amazon EKS-optimized AMI to the cluster.
    Make sure the AMI you pick runs a kernel that CN2 supports. (The sketch after this procedure includes an example eksctl create nodegroup command.)
  7. (Optional) Install Contrail tools and run preflight checks. See Run Preflight and Postflight Checks in Release 23.1.
    Correct any errors before proceeding.
  8. Use standard kubectl commands to check on the deployment.
    Check that the nodes are up. If the nodes are not up, wait a few minutes and check again.

    Check that the pods have a STATUS of Running. If not, wait a few minutes for the pods to come up.

  9. (Optional) Run postflight checks. See Run Preflight and Postflight Checks in Release 23.1.
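
For step 1, the sketch below creates a cluster without a node group and reads back the service IP address subnet. The cluster name and region are examples.

    # Create an EKS cluster with no node group (step 1).
    eksctl create cluster --name cn2-demo --region us-west-2 --without-nodegroup

    # Look up the service IP address subnet assigned to the cluster.
    aws eks describe-cluster --name cn2-demo --region us-west-2 \
      --query 'cluster.kubernetesNetworkConfig.serviceIpv4Cidr' --output text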
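
For step 2, the commands below locate the kubemanager subnet setting; the value shown in the comment assumes the cluster uses the 10.100.0.0/16 default.

    # Find the serviceV4Subnet setting in the Kubemanager section (step 2).
    grep -n 'serviceV4Subnet' manifests/single_cluster_deployer_example.yaml

    # Edit the value to match the cluster's service IP address subnet,
    # for example:
    #   serviceV4Subnet: 10.100.0.0/16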
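
For step 3, one common pattern is to label each controller node, point the controller's node selector at that label, and then taint the node. The label key below is illustrative, not a documented CN2 name, and the nodes must have joined the cluster (step 6) before you can label them.

    # Label a node where the Contrail controller should run; repeat for
    # each of the three controller nodes (illustrative label key).
    kubectl label node <node-1> cn2-controller=true

    # Taint the node so that other pods are not scheduled there.
    kubectl taint nodes <node-1> cn2-controller=true:NoSchedule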
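
The commands below sketch steps 4 through 8. The node group name, instance type, and node count are examples; pick an Amazon EKS-optimized AMI whose kernel CN2 supports.

    # Apply the cert-manager and Contrail deployer manifests (steps 4 and 5).
    kubectl apply -f manifests/cert-manager.yaml
    kubectl apply -f manifests/single_cluster_deployer_example.yaml

    # Attach a managed node group to the cluster (step 6).
    eksctl create nodegroup --cluster cn2-demo --region us-west-2 \
      --name cn2-workers --node-type m5.xlarge --nodes 3

    # Check on the deployment (step 8).
    kubectl get nodes
    kubectl get pods -A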