CN2 Pipeline

SUMMARY The Juniper Cloud-Native Contrail® Networking™ (CN2) pipeline supports GitOps-based network configuration management. Use this document to review the pipeline's GitOps configuration and to install the pipeline in environments running CN2 Release 23.1 or later.

CN2 Pipeline Overview

GitOps is a deployment methodology centered on a Git repository, where the GitOps workflow pushes a configuration through testing, staging, and production. Many customers run a staging environment or staging cluster. Supporting GitOps for CN2 enables automatic deployment and testing of CN2 network configurations using parameters in test case YAML files.

After configuring the CN2 pipeline and the GitOps tool, CN2:

  • Syncs with the GitOps repository and auto-provisions CN2 configurations across multiple Kubernetes clusters.

  • Tests and verifies the deployed CN2 configurations in each Kubernetes cluster.

  • Provides auto-revision monitoring and updates.

CN2 Configuration

In CN2, the configurations are Kubernetes Custom Resource Definitions (CRDs), written in YAML or JSON format. These CRDs are stored and managed in the Git repository, which makes the Git repository the source of truth for all of the network configurations.

See CN2 Sample Configurations.
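For illustration only, a CN2 custom resource stored in the Git repository might look like the following minimal sketch. The apiVersion and kind shown here are assumptions, and the spec is elided; see CN2 Sample Configurations for authoritative examples.

  # Hypothetical CN2 custom resource; values are illustrative only.
  apiVersion: core.contrail.juniper.net/v1alpha1   # assumed CN2 API group
  kind: VirtualNetwork
  metadata:
    name: vn-blue          # hypothetical network name
    namespace: cn2-demo    # hypothetical namespace
  spec: {}                 # spec fields vary by release; see CN2 Sample Configurations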

GitOps

"GitOps is a paradigm or a set of practices that empowers developers to perform tasks which typically fall under the purview of IT operations. GitOps requires us to describe and observe systems with declarative specifications that eventually form the basis of continuous everything." Quote is from CloudBees.

To achieve the GitOps mode of operation, Argo CD is used. The CN2 Git repository is configured as the source for the Argo CD application.

The GitOps engine watches the Git repository and applies any changes it finds to Kubernetes. The GitOps engine also runs a repository server that caches the application files from the Git repository and compares the cached state against the repository to detect new changes.

Figure 1: Argo CD with Git Repository and Kubernetes

CN2 and GitOps

The primary benefit of supporting GitOps for CN2 is automatic deployment and testing of CN2 network configurations. CN2 configurations are custom resource definitions (CRDs) that are maintained in a Git repository. These CRDs are applied to the CN2 cluster whenever the CN2 configurations in the Git repository change. To test and apply these changes, the GitOps applications Argo CD and Argo Workflows are deployed.

Figure 2: GitOps Pipeline Workflow

CN2 Pipeline Configuration Flow

Your CN2 configurations are maintained in the CN2 Git repository, which is configured as the source for the Argo CD application. The CN2 pipeline configuration flow is as follows (a sample Argo CD Application manifest appears after these steps):

  1. CN2 configurations are initially pushed to the staging repository by you (the administrator).
  2. Any change to the configurations in the repository triggers a Git synchronization to the GitOps server.

  3. The GitOps server looks for any changes required by comparing the current configuration and the new configuration fetched from the CN2 Git repository.

  4. If any new changes are pushed, the GitOps server applies these new changes to the CN2 environment.
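The following is a minimal sketch of an Argo CD Application manifest that implements this flow. The repository URL, branch, path, and destination values are placeholders, not values from this document.

  apiVersion: argoproj.io/v1alpha1
  kind: Application
  metadata:
    name: cn2-staging                # hypothetical application name
    namespace: argocd
  spec:
    project: default
    source:
      repoURL: https://gitlab.example.com/cn2/configs.git  # your CN2 Git repository
      targetRevision: staging                              # staging branch
      path: configs                                        # folder holding the CN2 CRD YAMLs
    destination:
      server: https://kubernetes.default.svc               # registered CN2 cluster
      namespace: default
    syncPolicy:
      automated: {}   # apply changes automatically when the repository changes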

Figure 3: CN2 Configurations in Customer Git Repository

GitOps Server

The GitOps server ensures that the configuration in the CN2 environment is always synchronized with what is present in the Git repository. The CN2 pipeline supports two branches:

  • One for the staging environment.
  • One for the production environment.

You push configurations destined for the staging CN2 cluster to the staging branch. The workflow engine then tests these configurations before they are merged to the production branch.

Figure 4: GitOps Server

Workflow and Tests

The GitOps server pushes all of the changes to the CN2 setup.

  1. This push triggers the workflow cycle to run test cases to validate and verify the CN2 setup against the configurations that were applied in the staging setup.

  2. When the test cases complete successfully, you are notified of the test completion and a merge request to the production branch is created.

  3. At this point, you review the changes in the merge request and approve merging them to the production branch.

  4. After the new configurations are merged to the production branch, the GitOps server synchronizes the configurations and applies the configurations to the CN2 production cluster.

Figure 5: Workflow and Tests

Pipeline Installation and Setup

Components

The pipeline components are:

  • Git Server

  • Argo CD

  • Argo Workflows

  • Argo Events

  • CN2 Pipeline Service

  • CN2 Solution Test Suite

CN2 Components

All CN2 pipeline components are installed and configured as part of the CN2 Pipeline Helm chart installation. Argo CD is installed as part of this initial system setup in an external cluster (a cluster where CN2 is not running). Argo CD is configured with the following details during the initial setup:

  • CN2 cluster environment details

  • Git repository access details

  • CN2 GitOps engine application configuration

See Install CN2 Pipeline Helm Chart.

Kubernetes

You can provision the CN2 pipeline on any native Kubernetes cluster or on Red Hat OpenShift, running CN2 or another Container Network Interface (CNI).

Prerequisites

Prerequisites for installing the CN2 pipeline are:

  • Management Kubernetes cluster

  • CN2 Kubernetes cluster

  • Connectivity from the management Kubernetes cluster to the CN2 cluster

  • GitLab repository with a CN2 configuration folder containing a sample ConfigMap file

  • External connectivity from the management Kubernetes cluster to access Argo CD, Argo Workflows, and test results

    Note:

    Install ingress from the files/ingress/openshift/public directory on the OpenShift cluster.

    The Contrail pipeline requires GitLab or GitLab Open Source as the event source.

Download CN2 Pipeline

To download the CN2 Pipeline tar file:

  1. Download CN2 Pipelines Deployer files from Juniper Networks Downloads.
  2. Untar the downloaded files to the management server.
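For example, assuming the archive name matches the contrail-pipelines-x.x.x naming used later in this document:

  tar -xzf contrail-pipelines-x.x.x.tgz   # extract the deployer files on the management server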

Create Service Account and Token in Kubernetes

You create a service account and token to provide API access to the Kubernetes cluster from within and outside of the CN2 pipeline. This topic describes how to create the service account, token, role, and role bindings.

Note:

Throughout these procedures, cn2pipelines is used as an example.

For Kubernetes Version 1.23 or Earlier

Perform these steps on the CN2 cluster.

To create a service account and token (a consolidated command sketch follows this procedure):

  1. Create the namespace if one does not already exist.

  2. Create a service account named cn2pipelines.

  3. Run the describe command to fetch the token.

    Output:

    Kubernetes version 1.23 or earlier creates the token as a secret by default when you create the service account.

  4. For CN2 with Kubernetes, retrieve the Mountable secrets name from the Step 3 output. Run the describe secret command to get the bearerToken for the service account. You need the bearerToken when you update the values.yaml file.

    Output:

  5. For CN2 with Red Hat OpenShift, use the image pull secret as the secret value, and then run the following command to get the bearerToken.

    1. Get the bearerToken.

      Output:

  6. Create a ClusterRole and ClusterRoleBinding to give the service account appropriate permissions.

    1. Create a ClusterRole and name the file clusterrole-cn2pipelines.yaml.

    2. Apply the clusterrole-cn2pipelines.yaml file that you just created.

    3. Create a ClusterRoleBinding file and name the file clusterrolebinding-cn2pipelines.yaml.

    4. Apply the clusterrolebinding-cn2pipelines.yaml file that you just created.

    The service account now has full permissions to all resources in the cluster.
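The following is a consolidated sketch of the commands and files these steps describe, assuming the cn2pipelines names used in the examples above. The broad ClusterRole shown grants full permissions to all resources, as the procedure states; scope it down for production use.

  kubectl create namespace cn2pipelines
  kubectl create serviceaccount cn2pipelines -n cn2pipelines
  kubectl describe serviceaccount cn2pipelines -n cn2pipelines     # note the Mountable secrets name
  kubectl describe secret <mountable-secret-name> -n cn2pipelines  # copy the token (bearerToken)

  # clusterrole-cn2pipelines.yaml
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: cn2pipelines
  rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["*"]

  # clusterrolebinding-cn2pipelines.yaml
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: cn2pipelines
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: cn2pipelines
  subjects:
  - kind: ServiceAccount
    name: cn2pipelines
    namespace: cn2pipelines

  kubectl apply -f clusterrole-cn2pipelines.yaml
  kubectl apply -f clusterrolebinding-cn2pipelines.yaml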

For Kubernetes Version 1.24 or Later

For Kubernetes version 1.24 and later, creating a service account does not automatically create a secret. Use the following procedure to manually create a token for the service account (a command sketch follows the procedure).

  1. Create the namespace if one does not already exist.

  2. Create a service account named cn2pipelines.

  3. Create a token for the service account cn2pipelines.

  4. Create a ClusterRole and ClusterRoleBinding to give the service account appropriate permissions.

    1. Create a ClusterRole and name the file clusterrole-cn2pipelines.yaml.

    2. Apply the clusterrole-cn2pipelines.yaml file that you just created.

    3. Create a ClusterRoleBinding file and name the file clusterrolebinding-cn2pipelines.yaml.

    4. Apply the clusterrolebinding-cn2pipelines.yaml file that you just created.

    Your token is created, and the service account now has full permissions to all resources in the cluster.

  5. Run the describe command to fetch the token.

    Output:

  6. Retrieve the Mountable secrets name from the Step 5 output. Run the describe secret command to get the bearerToken for the service account.

    Output:
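A minimal command sketch for this procedure, assuming the same cn2pipelines names and the ClusterRole and ClusterRoleBinding files from the previous procedure. On Kubernetes 1.24 and later, kubectl create token is one way to mint a token for the service account:

  kubectl create namespace cn2pipelines
  kubectl create serviceaccount cn2pipelines -n cn2pipelines
  kubectl create token cn2pipelines -n cn2pipelines   # prints the bearer token
  kubectl apply -f clusterrole-cn2pipelines.yaml
  kubectl apply -f clusterrolebinding-cn2pipelines.yaml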

Apply Ingress and Update /etc/hosts for OpenShift Deployment Type

Perform these steps on the CN2 cluster (a command sketch follows this procedure).

  1. Check that /etc/hosts contains entries from the OpenShift cluster. For example:

  2. Run the following command to locate the files to apply.

  3. Apply the following YAML files.

  4. Wait for all the pods to come up, then apply these two YAMLs.

The /etc/hosts for OpenShift is updated.
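A sketch of the flow, assuming the ingress manifests live under files/ingress/openshift/public as noted in the prerequisites. The actual file names come from your downloaded deployer files:

  ls files/ingress/openshift/public/              # locate the YAML files to apply
  oc apply -f files/ingress/openshift/public/<file>.yaml
  oc get pods -A                                  # wait for all pods to come up
  # ...then apply the remaining two YAML files the same way.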

Install Helm

Before installing the CN2 pipeline chart, you need to install Helm 3 in the management cluster. Helm helps you manage Kubernetes applications. Helm Charts help you define, install, and upgrade even the most complex Kubernetes application.

Run the following command to download and install the latest version of Helm 3.
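One common way to do this is with the Helm project's installer script:

  curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
  chmod 700 get_helm.sh
  ./get_helm.sh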

Install CN2 Pipeline Helm Chart

The CN2 pipeline Helm chart is used to install and configure the CN2 pipeline management cluster.

To install the CN2 pipeline Helm chart on your management cluster:

  1. In your downloaded CN2 Pipelines Deployer files, locate the values.yaml file in the contrail-pipelines-x.x.x/values folder.
  2. Input the chart values. For descriptions of the values.yaml parameters, see Explanation of Parameters for values.yaml.

    Example values.yaml for the management cluster:

  3. Run the following command to install the CN2 pipeline Helm Chart with the release name cn2-pipeline:
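For example (the chart path within the extracted deployer files is an assumption):

  helm install cn2-pipeline ./contrail-pipelines-x.x.x \
    -f ./contrail-pipelines-x.x.x/values/values.yaml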

Verify CN2 Pipeline Helm Chart Installation

To verify the CN2 pipeline Helm chart installation, run the following commands (a consolidated sketch follows these steps):

  1. List the Helm release in the current namespace.

    Output:

  2. Display all pods in all namespaces.

    Output:
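These steps correspond to standard Helm and kubectl commands, sketched here:

  helm list                           # the cn2-pipeline release should show STATUS: deployed
  kubectl get pods --all-namespaces   # pipeline pods should all reach Running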

Argo CD and Helm Configuration

This topic lists the Argo components and configurations that are automated as part of the CN2 pipeline Helm chart install.

  • Argo CD External Service—Creates a Kubernetes service with a service type of NodePort or LoadBalancer. This creates the Argo CD external service, providing access to the Argo CD API server and the Argo CD GUI.

  • Register Git Repository with CN2 Configurations—Configures repository credentials and connects your Git repository to Argo CD. Argo CD is configured to watch your Git repository and pull configuration changes from it. This Git repository should contain only Kubernetes resources; Argo CD does not understand any other type of YAML or files.

  • Register Kubernetes Clusters—Registers a Kubernetes cluster to Argo CD. This process configures Argo CD to provision the Kubernetes resources in any Kubernetes cluster. Multiple Kubernetes clusters can be configured in Argo CD.

  • Create an Argo CD Application—Creates an application using the Argo CD GUI. Any application created in Argo CD needs to be associated with a Git repository and one Kubernetes cluster.

Argo Log In

After installing the CN2 pipeline Helm chart, you have access to the Argo Workflow GUI and the Argo CD GUI.

Access Argo Workflow UI

To access the Argo Workflow GUI, you need connectivity from the management cluster to access the GUI using the NodePort service. The Argo Workflow GUI is accessed using the management server IP address and port 30550.

  1. Access the Argo Workflow GUI from your browser.

  2. On the management node, run the following command to receive the token.
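One way to retrieve a token, assuming the Argo Workflow server runs in the argo namespace and stores its service-account token in a secret (both names here are assumptions; check the secrets created by the Helm chart):

  kubectl -n argo get secret <argo-server-token-secret> \
    -o jsonpath='{.data.token}' | base64 --decode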

Access Argo CD GUI

To access the Argo CD GUI, you need connectivity from the management cluster to access the GUI using the NodePort service. The Argo CD GUI is accessed using the management server IP address and port 30551.

  1. Access the Argo CD GUI from your browser.

  2. On the management node, run the following command to receive the token. The username is admin.
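Argo CD stores the initial admin password in the argocd-initial-admin-secret secret; a sketch, assuming Argo CD runs in the argocd namespace:

  kubectl -n argocd get secret argocd-initial-admin-secret \
    -o jsonpath='{.data.password}' | base64 --decode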

CN2 and Workflows

Argo Workflows is an open source container-native workflow engine for orchestrating parallel jobs on Kubernetes. You can define workflows where each step in the workflow is a container, and you can model multi-step workflows as a sequence of tasks or capture dependencies between tasks using a directed acyclic graph (DAG).

Why Workflows Are Needed

Workflows are used to invoke and run CN2 test cases after CN2 resources are provisioned by the GitOps engine. These workflows qualify the CN2 application configurations and generate test results for the configuration being deployed.

How Workflows Work and How CN2 Uses Workflows

Workflows are triggered whenever a CN2 resource is provisioned by the GitOps engine. Each CN2 resource or group of CN2 resources is mapped to a specific workflow test DAG. After successful completion of these test suites, the CN2 configurations are qualified for promotion from the staging or test environment to the production environment.
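A minimal sketch of what such a mapping might look like as an Argo WorkflowTemplate DAG. The template, task, and image names are illustrative only:

  apiVersion: argoproj.io/v1alpha1
  kind: WorkflowTemplate
  metadata:
    name: vn-test-tf        # hypothetical template mapped to a CN2 resource kind
  spec:
    entrypoint: main
    templates:
    - name: main
      dag:
        tasks:
        - name: deploy-check
          template: deploy-check
        - name: verify
          template: verify
          dependencies: [deploy-check]
    - name: deploy-check
      container:
        image: alpine:3.18
        command: [sh, -c, "echo checking deployed CN2 resources"]
    - name: verify
      container:
        image: alpine:3.18
        command: [sh, -c, "echo verifying connectivity"]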

CN2 Pipeline Service

The pipeline service listens for notifications from Argo Events about changes in Kubernetes resources. The pipeline service exposes a service that Argo Events consumes to trigger the pipeline with data about the CN2 configuration that you applied. The CN2 pipeline service is responsible for identifying which test workflow to trigger for the type of CN2 configuration applied; the workflow invoked changes dynamically depending on the objects in the notification.

CN2 Pipeline Configurations

This topic shows examples for the CN2 pipeline configurations.

Pipeline Configuration

The pipeline configuration is used by the pipeline engine and includes:

  • Pipeline commit threshold

  • Config map: cn2pipeline-configs

  • Namespace: argo-events

Example pipeline configuration:
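A sketch of the shape such a configuration map might take; the data key is an assumption, not the chart's actual schema:

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: cn2pipeline-configs
    namespace: argo-events
  data:
    commitThreshold: "5"   # hypothetical key for the pipeline commit threshold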

Test Workflow Template Parameter Configuration

All workflow template inputs are stored as configuration maps. These configuration maps are dynamically selected during execution by the pipeline service.

Workflow to Kind Map

This mapping configuration maps workflow templates to CN2 resource kinds. Only one template is selected for execution, and the first matching map has priority. An asterisk (*) in kind: ['*'] has higher priority than any other kind match and overrides every mapping.

A workflow template for a CN2 resource kind mapping template includes:

  • Config map: cn2tmpl-to-kind-map

  • Namespace: argo-events

Following is an example configuration for the workflow template to CN2 resource kind mapping. Note the asterisk (*) in kind: ['*'] kindmap.
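A sketch of the shape of this mapping; the data layout and the template names are assumptions:

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: cn2tmpl-to-kind-map
    namespace: argo-events
  data:
    kindmap: |
      - template: vn-test-tf        # hypothetical workflow template name
        kind: ['VirtualNetwork']
      - template: default-test-tf
        kind: ['*']                 # wildcard; overrides every other mapping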

Kubeconfig Secret for the CN2 Cluster

A base64-encoded kubeconfig for the CN2 cluster is created as a secret. The kubeconfig is a YAML file with all the Kubernetes cluster details, the certificate, and the secret token used to authenticate to the cluster.

  • Secret: cn2-cluster-kubeconfig

  • Namespace: argo-events

Note:

Do not change the cn2-cluster-kubeconfig name.

Following is an example kubeconfig secret for CN2 cluster:
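A sketch of the secret's shape; the data key name is an assumption:

  apiVersion: v1
  kind: Secret
  metadata:
    name: cn2-cluster-kubeconfig   # do not change this name
    namespace: argo-events
  type: Opaque
  data:
    kubeconfig: <base64-encoded kubeconfig for the CN2 cluster>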

Create Custom Workflows for CN2 Pipeline

You can create custom workflow tests to test your container network functions (CNFs).

To create a custom workflow, use the example custom test workflow templates provided with the CN2 pipeline files. Every workflow has a set of input parameters, volume mounts, container definitions, and so on. To understand workflow template creation, see Argo Workflows.

The following example custom test workflow templates are provided:

  • Input parameters to workflow

  • Mount volumes

  • Create Kubernetes resource using workflow (Template names: create-cnf-tf and create-cnf-service-tf)

  • Embedded code in workflow (Template name: test-access-tf)

  • Pull external code and execute within a container (Template name: test-service-tf)

To automate the inputs to the workflow during the pipeline run, a workflow parameter configuration map is created that holds the inputs for the workflow. The configuration map must have the same name as the workflow template. In the following example, the template name is custom-cnf-sample-test, and a configuration map with the same name is created automatically. During the pipeline run, the pipeline service looks up the configuration map with the template name and retrieves the inputs, which are then added to the workflow when the pipeline triggers it.

The other update needed for the test case to trigger the custom workflow is a corresponding entry in the cn2tmpl-to-kind-map configuration map.

The following is an example workflow configuration for the custom-cnf-sample-test template.
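As a sketch, a workflow parameter configuration map matching this template (it must share the template's name) might look like the following; the parameter keys are hypothetical:

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: custom-cnf-sample-test   # must match the workflow template name
    namespace: argo-events
  data:
    cnf-image: registry.example.com/cnf:latest   # hypothetical input parameter
    test-namespace: cnf-test                     # hypothetical input parameter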