CN2 Pipeline
SUMMARY Juniper Cloud-Native Contrail® Networking™ (CN2) pipeline supports GitOps-based network configuration management. Use this document to review the pipeline GitOps configuration and install the pipeline in environments using CN2 Release 23.1 or later.
CN2 Pipeline Overview
GitOps is a deployment methodology centered on a Git repository, where the GitOps workflow pushes a configuration through testing, staging, and production. Many customers run a staging environment or staging cluster. Supporting GitOps for CN2 enables automatic deployment and testing of CN2 network configurations using parameters in test case YAML files.
After configuring the CN2 pipeline and the GitOps tool, CN2:
- Syncs with the GitOps repository and auto-provisions CN2 configurations across multiple Kubernetes clusters.
- Provisions with the capability to test and verify the deployed CN2 configurations in each Kubernetes cluster.
- Provides auto-revision monitoring and updates.
CN2 Configuration
In CN2, the configurations are Kubernetes Custom Resource Definitions (CRDs), written in YAML or JSON format. These CRDs are stored and managed in the Git repository, which makes the Git repository the source of truth for all of the network configurations.
See CN2 Sample Configurations.
GitOps
"GitOps is a paradigm or a set of practices that empowers developers to perform tasks which typically fall under the purview of IT operations. GitOps requires us to describe and observe systems with declarative specifications that eventually form the basis of continuous everything." Quote is from CloudBees.
To achieve the GitOps mode of operation, Argo CD is used. The CN2 Git repository is configured as the source of the Argo CD application.
The GitOps engine watches the Git repository and applies any changes to Kubernetes. The GitOps engine also runs a repository server that caches the application files from the Git repository and compares them against the repository to detect any new changes.
CN2 and GitOps
The primary benefit of supporting GitOps for CN2 is to achieve automatic configuration deployment and testing of CN2 network configurations. CN2 configurations are custom resource definitions (CRDs) which are maintained in a Git repository. These CRDs are applied to the CN2 cluster whenever there is a change to the CN2 configurations present in the Git repository. In order to test and apply these changes, the GitOps applications Argo CD and Argo Workflows are deployed.
CN2 Pipeline Configuration Flow
Your CN2 configurations are maintained in the CN2 Git repository. The CN2 Git repository is configured to be used as the Argo CD application. The CN2 pipeline configuration flow is:
- CN2 configurations are initially pushed to the staging repository by you (the administrator). (See the example push after this list.)
- Any change to the configurations present in the repository triggers a Git synchronization to the GitOps server.
- The GitOps server looks for any changes required by comparing the current configuration with the new configuration fetched from the CN2 Git repository.
- If any new changes are pushed, the GitOps server applies them to the CN2 environment.
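The branch, path, and file names below are placeholders for your repository layout; this is a minimal sketch of pushing a changed CN2 configuration (a CRD YAML file) to the staging branch:
# Push an updated CN2 configuration to the staging branch of the CN2 Git repository.
# Branch, directory, and file names are placeholders.
git checkout staging
git add configs/virtual-network.yaml
git commit -m "Update virtual network configuration"
git push origin staging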
GitOps Server
The GitOps server ensures that the configuration in the CN2 environment is always synchronized with what is present in the Git repository. The CN2 pipeline supports two branches:
- One for the staging environment.
- One for the production environment.
The staging branch is where you push configurations destined for the staging CN2 cluster. These configurations are then tested by the workflow engine before they are merged to the production branch.
Workflow and Tests
The GitOps server pushes all of the changes to the CN2 setup.
- This push triggers the workflow cycle to run test cases that validate and verify the CN2 setup against the configurations applied in the staging setup.
- When the test cases complete successfully, you are notified of the test completion and a merge request to the production branch is presented.
- At this point, you validate the changes in the merge request and approve merging them to the production branch.
- After the new configurations are merged to the production branch, the GitOps server synchronizes the configurations and applies them to the CN2 production cluster.
Pipeline Installation and Setup
Components
The pipeline components are:
- Git Server
- Argo CD
- Argo Workflow
- Argo Events
- CN2 Pipeline Service
- CN2 Solution Test Suite
CN2 Components
All CN2 pipeline components are installed and configured as part of the CN2 Pipeline Helm chart installation. Argo CD is installed as part of this initial system setup in an external cluster (cluster where CN2 is not running). Argo CD is configured with the following details during the initial setup:
- CN2 cluster environment details
- Git repository access details
- CN2 GitOps engine application configuration
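These details are configured automatically by the Helm chart. For reference only, the manual equivalents using the argocd CLI look roughly like the following; the repository URL, credentials, and cluster context name are placeholders:
# Register the CN2 Git repository and the CN2 cluster with Argo CD (placeholders shown).
argocd repo add https://gitlab.example.com/your-group/cn2-configs.git --username <git-user> --password <git-token>
argocd cluster add <cn2-cluster-context> --name cn2-cluster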
Kubernetes
You can provision the CN2 pipeline on any native Kubernetes or Red Hat OpenShift cluster running CN2 or another Container Network Interface (CNI).
Prerequisites
Prerequisites for installing the CN2 pipeline are:
- Management Kubernetes cluster
- CN2 Kubernetes cluster
- Connectivity from the management Kubernetes cluster to the CN2 cluster
- GitLab repository with a CN2 configuration folder and a sample ConfigMap file
- Connectivity from the management Kubernetes cluster to the outside to access Argo CD, Argo Workflow, and test results
Note: Install ingress from the files in /ingress/openshift/public on the OpenShift cluster. The Contrail pipeline needs GitLab or GitLab Open Source as the event source.
Download CN2 Pipeline
To download the CN2 Pipeline tar file:
- Download CN2 Pipelines Deployer files from Juniper Networks Downloads.
- Untar the downloaded files to the management server.
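A minimal sketch of the untar step; the archive name is a placeholder, so substitute the actual file name from the download site:
# Untar the CN2 Pipelines Deployer archive on the management server.
tar -xzvf cn2-pipelines-deployer.tgz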
Create Service Account and Token in Kubernetes
Creating a service account and token is important for giving the Kubernetes cluster access within and outside of the CN2 pipeline using APIs. This topic describes how to create the service account, token, role, and role bindings.
Throughout these procedures, cn2pipelines is used as an example.
- For Kubernetes Version 1.23 or Earlier
- For Kubernetes Version 1.24 or Later
- Apply Ingress and Update /etc/hosts for OpenShift Deployment Type
For Kubernetes Version 1.23 or Earlier
Perform these steps on the CN2 cluster.
To create a service account and token:
- Create the namespace if one does not already exist.
kubectl create ns cn2pipelines
- Create a service account named cn2pipelines.
kubectl create sa cn2pipelines -n cn2pipelines
- Run the describe command to fetch the token.
kubectl describe sa cn2pipelines -n cn2pipelines
Output:
Name:                cn2pipelines
Namespace:           cn2pipelines
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   cn2pipelines-token-5szb6
Tokens:              cn2pipelines-token-5szb6
Events:              <none>
Kubernetes version 1.23 or earlier creates the token as a secret by default when you create the service account.
- For CN2 with Kubernetes, retrieve the Mountable secrets value from the Step 3 output. Run the describe secret command to get the bearerToken for the service account. The bearerToken is needed when you update the values.yaml file.
kubectl describe secret cn2pipelines-token-5szb6 -n cn2pipelines
Output:
Name: cn2pipelines-token-5szb6 Namespace: cn2pipelines Labels: <none> Annotations: kubernetes.io/service-account.name: cn2pipelines kubernetes.io/service-account.uid: e5059023-7269-482a-870c-5c4ff175ba00 Type: kubernetes.io/service-account-token Data ==== namespace: 10 bytes token: eyJhbGciOiJSUzI1NiIsImtpZCI6InZQMkxOcWlOQjg5MElySUtiWGpPTWJVVGZNR3FQS3hnUDhyTDFHZjd3VFkifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJjaS1qZW5raW5zIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImNpLWplbmtpbnMtdG9rZW4tNXN6YjYiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiY2ktamVua2lucyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImU1MDU5MDIzLTcyNjktNDgyYS04NzBjLTVjNGZmMTc1YmEwMCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpjaS1qZW5raW5zOmNpLWplbmtpbnMifQ.DeAySlkf7dW6xzUH5bLeXc2lRPa_RMZ2bG4zGktpHyA2eDdM-nliCTpwhuBPbZ2fNeiaZb3Tl8h-MJNF7IygwXEHjW8ALfvUv4nBnmSMj9JW44PoPeMSCAnrtIXucy8hcGZN4K6i1w2n6ASSYAXyifwMOLy3-KfbY9PYErOb0eC34-cHkP-TQoV0o4ncA58kwOwut2DmkIKfH3gsOAY445wO4_WUeYuqO_JU0uQpyPaCRO9sLDhMlVcnp0TI7hvZu_DbVyRhy4b8QqJEj3h08j0lPGvFhvmCcUqTSLXbVtV9o62cqhd1q9pcFq5yAxmYpuwWjkOP8KuIsf71U070_w ca.crt: 1099 bytes
- For CN2 with Red Hat OpenShift, use the image pull secret as the secret value, and then run the following command to get the bearerToken.
kubectl get secret cn2pipelines-dockercfg-445hx -n cn2pipelines -o yaml
Output:
apiVersion: v1 data: .dockercfg: e30= kind: Secret metadata: annotations: kubernetes.io/service-account.name: cn2pipelines kubernetes.io/service-account.uid: 38b98d44-334e-4fce-ba90-afe6eae1f644 openshift.io/token-secret.name: cn2pipelines-token-n5qwb openshift.io/token-secret.value: eyJhbGciOiJSUzI1NiIsImtpZCI6ImZ5aVVpQURJUzU1YThqV3ZUME43UGxiX1JhR0hoYnhZd25GMkpBX2g3UzAifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJjaS1qZW5raW5zIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImNpLWplbmtpbnMtdG9rZW4tbjVxd2IiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiY2ktamVua2lucyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjM4Yjk4ZDQ0LTMzNGUtNGZjZS1iYTkwLWFmZTZlYWUxZjY0NCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpjaS1qZW5raW5zOmNpLWplbmtpbnMifQ.Zj5Bs8Y8h4GL0o-p7rhnJcPeYdbcoVpfM0oRMHky3KUQuTAb5ZjwV3o-h0e-hZlQC_TpI4kNotijEoFiwKU_mYPr9bY36EBngUZp41BiqSwiY_5qG_wYDd6Dg_Xh6C5n4eagBN8OAi9IXlM3SYH9hgGmEx-dqoXhlGdxCht_JPWoDXbKq0eFU_mtUlqjMU0p__g1VoQ1svlRCUvRsfI8OxIM5jd7qPC3NqkpHlK1I5BHQaScWdjihaTo7OpK-zkcgenSjq882Okw4UxsttFgJZ5iF7hHLcMWtVi-pX4SVl2pdHi8H7DxD4YDiZD3xUJapRpRvnHqNsDvoXXonWBOskW4JE86t95Z5Z7lIHNPpvftajxc3qky6hBW0-1yfpgK36Df2g3OGGrVm16S31wl1K6y7oUu6Py5B4BM5qcge7J9wNTNRMezuomt38SyqyuaCt8SBL-dtc_8bAhKLnMZ7Vr_kGHCDSGBeO_7BaH9dqhb85-oyC_mKHA8F_5xmC4wZ7bBDhMRN9lAoKePK6p1toz1Ca395_w83ib5zGxMfD9C-hskYNrkhCPJwS00_s_QXQdXnnzhc0_C2K9KqZk1qE8A2zmlbaxtcP_PrMMhS5H0bs2i8n88kZO74H7AmPk8HRx0oLG5Ue8Oh8F5x9Ua5M4WuZSfmN5jXlSVdCvtqQY creationTimestamp: "2023-03-17T17:32:11Z" name: cn2pipelines-dockercfg-445hx namespace: cn2pipelines ownerReferences: - apiVersion: v1 blockOwnerDeletion: false controller: true kind: Secret name: cn2pipelines-token-n5qwb uid: a02b6e19-db50-4d27-9b29-b33a60ad47c9 resourceVersion: "1132721" uid: 723f8808-2fd5-49e6-8012-9fbab8962b47 type: kubernetes.io/dockercfg
- Create a ClusterRole and ClusterRoleBinding to give the service account appropriate permissions.
- Create a ClusterRole and name the file clusterrole-cn2pipelines.yaml.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    name: cn2pipelines
  name: cn2pipelines
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
- nonResourceURLs:
  - '*'
  verbs:
  - '*'
- Apply the clusterrole-cn2pipelines.yaml file that you just created.
kubectl apply -f clusterrole-cn2pipelines.yaml -n cn2pipelines
- Create a ClusterRoleBinding file and name the file clusterrolebinding-cn2pipelines.yaml.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    name: cn2pipelines
  name: cn2pipelines
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cn2pipelines
subjects:
- kind: ServiceAccount
  name: cn2pipelines
  namespace: cn2pipelines
- Apply the clusterrolebinding-cn2pipelines.yaml file that you just created.
kubectl apply -f clusterrolebinding-cn2pipelines.yaml -n cn2pipelines
You now have full permissions to all resources in the cluster.
For Kubernetes Version 1.24 or Later
For Kubernetes version 1.24 and later, creating a service account does not create a secret automatically. Use the following procedure to manually create a token for that service account.
- Create the namespace if one does not already exist.
kubectl create ns cn2pipelines
- Create a service account named cn2pipelines.
kubectl create sa cn2pipelines -n cn2pipelines
- Create a token for the service account cn2pipelines.
kubectl create token cn2pipelines -n cn2pipelines --duration=999999h
- Create a ClusterRole and ClusterRoleBinding to give the service account appropriate permissions.
- Create a ClusterRole and name the file clusterrole-cn2pipelines.yaml.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    name: cn2pipelines
  name: cn2pipelines
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
- nonResourceURLs:
  - '*'
  verbs:
  - '*'
- Apply the clusterrole-cn2pipelines.yaml file that you just created.
kubectl apply -f clusterrole-cn2pipelines.yaml -n cn2pipelines
- Create a ClusterRoleBinding file and name the file clusterrolebinding-cn2pipelines.yaml.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    name: cn2pipelines
  name: cn2pipelines
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cn2pipelines
subjects:
- kind: ServiceAccount
  name: cn2pipelines
  namespace: cn2pipelines
- Apply the clusterrolebinding-cn2pipelines.yaml file that you just created.
kubectl apply -f clusterrolebinding-cn2pipelines.yaml -n cn2pipelines
Your token is created, and you now have full permissions to all resources in the cluster.
- Run the describe command to fetch the token.
kubectl describe sa cn2pipelines -n cn2pipelines
Output:
Name:                cn2pipelines
Namespace:           cn2pipelines
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   cn2pipelines-token-5szb6
Tokens:              cn2pipelines-token-5szb6
Events:              <none>
- Retrieve the Mountable secrets value from the Step 5 output. Run the describe secret command to get the bearerToken for the service account.
kubectl describe secret cn2pipelines-token-5szb6 -n cn2pipelines
Output:
Name: cn2pipelines-token-5szb6 Namespace: cn2pipelines Labels: <none> Annotations: kubernetes.io/service-account.name: cn2pipelines kubernetes.io/service-account.uid: e5059023-7269-482a-870c-5c4ff175ba00 Type: kubernetes.io/service-account-token Data ==== namespace: 10 bytes token: eyJhbGciOiJSUzI1NiIsImtpZCI6InZQMkxOcWlOQjg5MElySUtiWGpPTWJVVGZNR3FQS3hnUDhyTDFHZjd3VFkifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJjaS1qZW5raW5zIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImNpLWplbmtpbnMtdG9rZW4tNXN6YjYiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiY2ktamVua2lucyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImU1MDU5MDIzLTcyNjktNDgyYS04NzBjLTVjNGZmMTc1YmEwMCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpjaS1qZW5raW5zOmNpLWplbmtpbnMifQ.DeAySlkf7dW6xzUH5bLeXc2lRPa_RMZ2bG4zGktpHyA2eDdM-nliCTpwhuBPbZ2fNeiaZb3Tl8h-MJNF7IygwXEHjW8ALfvUv4nBnmSMj9JW44PoPeMSCAnrtIXucy8hcGZN4K6i1w2n6ASSYAXyifwMOLy3-KfbY9PYErOb0eC34-cHkP-TQoV0o4ncA58kwOwut2DmkIKfH3gsOAY445wO4_WUeYuqO_JU0uQpyPaCRO9sLDhMlVcnp0TI7hvZu_DbVyRhy4b8QqJEj3h08j0lPGvFhvmCcUqTSLXbVtV9o62cqhd1q9pcFq5yAxmYpuwWjkOP8KuIsf71U070_w ca.crt: 1099 bytes
Apply Ingress and Update /etc/hosts for OpenShift Deployment Type
Perform these steps on the CN2 cluster.
- Check that /etc/hosts contains entries from the OpenShift cluster. For example:
192.167.19.571 api.ocp-ss-571.net
- Run the following command to locate the files to apply.
cd files/public/openshift
- Apply the following YAML files.
kubectl apply -f clusterRoleBindings.yml
kubectl apply -f haproxy-ingress-controller.yaml
- Wait for all the pods to come up, then apply these two YAML files.
kubectl apply -f contour.yaml
kubectl apply -f nginx.yaml
The /etc/hosts for OpenShift is updated.
Install Helm
Before installing the CN2 pipeline chart, you need to install Helm 3 in the management cluster. Helm helps you manage Kubernetes applications. Helm Charts help you define, install, and upgrade even the most complex Kubernetes application.
Run the following command to download and install the latest version of Helm 3.
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
Install CN2 Pipeline Helm Chart
The CN2 pipeline Helm chart is used to install and configure the CN2 pipeline management cluster.
To install the CN2 pipeline Helm chart on your management cluster:
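The chart directory, release name, namespace, and values come from the untarred CN2 Pipelines Deployer files; the following is a minimal sketch only, assuming the chart directory is cn2pipeline and a values.yaml edited with your cluster details and bearerToken:
# Install the CN2 pipeline chart from the untarred deployer files.
# The chart directory, release name, and namespace shown here are placeholders.
helm install cn2pipeline ./cn2pipeline -n cn2pipeline --create-namespace -f values.yaml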
Verify CN2 Pipeline Helm Chart Installation
To verify the CN2 Pipeline Helm Chart Installation, run the following commands:
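The exact checks depend on your installation; a minimal sketch, assuming the Argo components run in the namespaces used elsewhere in this document (argocd, argo, and argo-events):
# Confirm the release deployed and the pipeline components are running.
helm list -A
kubectl get pods -n argocd
kubectl get pods -n argo
kubectl get pods -n argo-events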
Argo CD and Helm Configuration
This topic lists the Argo components and configurations that are automated as part of the CN2 pipeline Helm chart install.
- Argo CD External Service—Creates a Kubernetes service with the service type NodePort or LoadBalancer. This creates the Argo CD external service, providing access to the Argo CD API server and the Argo CD GUI.
- Register Git Repository with CN2 Configurations—Configures repository credentials and connects your Git repository to Argo CD. Argo CD is configured to watch and pull the configuration changes from your Git repository. This Git repository should contain only Kubernetes resources; Argo CD does not process other types of YAML files.
- Register Kubernetes Clusters—Registers a Kubernetes cluster to Argo CD. This process configures Argo CD to provision the Kubernetes resources in any Kubernetes cluster. Multiple Kubernetes clusters can be configured in Argo CD.
- Create an Argo CD Application—Creates an application using the Argo CD GUI. Any application created in Argo CD must be associated with a Git repository and one Kubernetes cluster. (A sample application manifest follows this list.)
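The application is created for you during the Helm chart installation (or through the Argo CD GUI); for reference, an Argo CD Application that ties one Git repository to one cluster has roughly the following shape. The repository URL, branch, path, and cluster address are placeholders:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cn2-configs            # placeholder application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://gitlab.example.com/your-group/cn2-configs.git
    targetRevision: staging    # branch watched by Argo CD
    path: configs              # folder containing the CN2 CRDs
  destination:
    server: https://<cn2-cluster-api-server>:6443
    namespace: default
  syncPolicy:
    automated: {}              # keep the cluster synchronized with Git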
Argo Log In
After installing the CN2 pipeline Helm chart, you have access to the Argo Workflow GUI and the Argo CD GUI.
Access Argo Workflow UI
To access the Argo Workflow GUI, you need connectivity from the management cluster to reach the GUI through the NodePort service. The Argo Workflow GUI is accessed using the management server IP address and port 30550.
- Access the Argo Workflow GUI from your browser.
https://<management-server-ip-address>:30550
- On the management node, run the following command to receive the token.
kubectl -n argo exec $(kubectl get pod -n argo -l 'app=argo-server' -o jsonpath='{.items[0].metadata.name}') -- argo auth token
Access Argo CD GUI
To access the Argo CD GUI, you need connectivity from the management cluster to reach the GUI through the NodePort service. The Argo CD GUI is accessed using the management server IP address and port 30551.
- Access the Argo CD GUI from your browser.
https://<management-server-ip-address>:30551
- On the management node, run the following command to retrieve the initial admin password. The username is admin.
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
CN2 and Workflows
Argo Workflows is an open source container-native workflow engine for orchestrating parallel jobs on Kubernetes. You can define workflows where each step in the workflow is a container. And you can model multi-step workflows as a sequence of tasks or capture dependencies between tasks using a directed acyclic graph (DAG).
Why Workflows Are Needed
Workflows are used to invoke and run CN2 test cases after CN2 resources are provisioned by the GitOps engine. These workflows qualify the CN2 application configurations and generate test results for the configuration being deployed.
How Workflows Work and How CN2 Uses Workflows
Workflows are triggered whenever a CN2 resource is provisioned by the GitOps engine. Each CN2 resource, or group of CN2 resources, is mapped to a specific workflow test DAG. After successful completion of these test suites, the CN2 configurations are qualified to be promoted from the staging or test environment to the production environment.
CN2 Pipeline Service
The CN2 pipeline service listens for notifications from Argo Events about changes to Kubernetes resources. It exposes a service that Argo Events calls with the data for the CN2 configuration that you applied. The pipeline service is responsible for identifying the test workflow to trigger for the type of CN2 configuration that you applied. Workflows change dynamically depending on the objects being notified, and the CN2 pipeline listener service invokes the respective workflow for the CN2 configuration that is applied.
CN2 Pipeline Configurations
This topic shows examples for the CN2 pipeline configurations.
- Pipeline Configuration
- Test Workflow Template Parameter Configuration
- Workflow to Kind Map
- Kubeconfig Secret for the CN2 Cluster
Pipeline Configuration
The pipeline configuration is used by the pipeline engine and includes:
- Pipeline commit threshold
- Config map: cn2pipeline-configs
- Namespace: argo-events
Example pipeline configuration:
apiVersion: v1
data:
  testcase_trigger_threshold: "10"
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/managed-by: Helm
  name: cn2pipeline-configs
  namespace: argo-events
Test Workflow Template Parameter Configuration
All workflow template inputs are stored as configuration maps. These configuration maps are dynamically selected during the execution by the pipeline service.
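The exact key layout of these configuration maps is defined by the CN2 pipeline service; as a hypothetical sketch only, a parameter configuration map for the custom-cnf-sample-test template (shown later in this document) might carry one entry per workflow input parameter:
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-cnf-sample-test   # assumed to match the workflow template name
  namespace: argo-events
data:
  image: <registry>/<test-image>:<tag>        # placeholder test image
  kubeconfig_secret: cn2-cluster-kubeconfig   # secret holding the CN2 kubeconfig
  report_dir: /root/SolutionReports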
Workflow to Kind Map
This mapping configuration contains the workflow template to CN2 resource kind mapping. Only one template is selected for execution, and the first map that matches has the higher priority. An asterisk (*) in kind: ['*'] has higher priority than any other kind match and overrides every mapping.
A workflow template to CN2 resource kind mapping includes:
- Config map: cn2tmpl-to-kind-map
- Namespace: argo-events
Following is an example configuration for the workflow template to CN2 resource kind mapping. Note the asterisk (*) in the kind: ['*'] kindmap.
apiVersion: v1
data:
  kindmap: |
    - workflow: it-cloud-arch1-sre2
      kind: ['*']
    - workflow: custom-cnf-sample-test
      kind: ['namespace']
    - workflow: it-cloud-arch1-sre1
      kind: ['service']
    - workflow: it-cloud-arch2-sre3
      kind: ['service']
    - workflow: it-cloud-arch2-sre4
      kind: ['service']
    - workflow: it-cloud-arch2-sre5
      kind: ['namespace']
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/managed-by: Helm
  name: cn2tmpl-to-kind-map
  namespace: argo-events
Kubeconfig Secret for the CN2 Cluster
A base64 encoded kubeconfig for the CN2 cluster is created as a secret. Kubeconfig is a YAML file with all the Kubernetes cluster details, certificate, and secret token to authenticate the cluster.
- Secret: cn2-cluster-kubeconfig
- Namespace: argo-events
Do not change the cn2-cluster-kubeconfig name.
Following is an example kubeconfig secret for the CN2 cluster:
apiVersion: v1
kind: Secret
metadata:
  name: cn2-cluster-kubeconfig
  namespace: argo-events
data:
  config: <base64 value of your cn2cluster kubeconfig>
type: Opaque
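A minimal sketch for creating this secret, assuming the CN2 cluster kubeconfig is saved at ~/.kube/cn2-config (a placeholder path); kubectl base64-encodes the file contents for you:
# Create the kubeconfig secret consumed by the pipeline workflows.
kubectl create secret generic cn2-cluster-kubeconfig -n argo-events --from-file=config=$HOME/.kube/cn2-config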
Create Custom Workflows for CN2 Pipeline
You can create custom workflow tests to test your container network functions (CNFs).
To create a custom workflow, you can use the example custom test workflow templates provided with the CN2 pipeline files. Every workflow has a set of input parameters, volume mounts, container creation, and so on. To understand workflow template creation, see Argo Workflows.
The following example custom test workflow templates are provided:
- Input parameters to workflow
- Mount volumes
- Create Kubernetes resource using workflow (template names: create-cnf-tf and create-cnf-service-tf)
- Embedded code in workflow (template name: test-access-tf)
- Pull external code and execute within a container (template name: test-service-tf)
To automate the inputs to the workflow during the pipeline run, a workflow parameter configuration map is created that holds the inputs for the workflow. The configuration map must have the same name as the workflow template. In the following example, the template name is custom-cnf-sample-test, and a configuration map is created automatically with the same name. As part of the pipeline run, the pipeline service looks for the configuration map with the template name and gets the inputs, which are then automatically added to the workflow when the pipeline triggers the workflow.
The other update needed for a test case to trigger the custom workflow is to add the template to the cn2tmpl-to-kind-map configuration map:
- workflow: custom-cnf-sample-test
  kind: ['namespace']
The following is an example workflow configuration for the custom-cnf-sample-test template.
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate                  # new type of k8s spec
metadata:
  name: custom-cnf-sample-test          # name of the workflow spec
  namespace: argo-events
spec:
  serviceAccountName: operate-workflow-sa
  entrypoint: cnf-test-workflow         # invoke the workflows template
  hostNetwork: true
  arguments:
    parameters:
      - name: image                     # the qf path to a test docker image
        value: not_provided
      - name: kubeconfig_secret         # eg: kubeconfig-989348
        value: not_provided
      - name: report_dir                # eg: /root/SolutionReports
        value: not_provided
  volumes:
    - name: kubeconfig
      secret:
        secretName: "{{ `{{workflow.parameters.kubeconfig_secret}}` }}"
    - name: reportdir
      hostPath:
        path: "{{ `{{workflow.parameters.report_dir}}` }}"
  templates:
    - name: create-cnf-tf
      resource:
        action: apply
        #successCondition: status.succeeded > 0
        #failureCondition: status.failed > 3
        manifest: |
          apiVersion: v1
          kind: Pod
          metadata:
            name: webapp-cnf
            namespace: argo-events
            labels:
              app.kubernetes.io/name: proxy
          spec:
            containers:
              - name: nginx
                image: {{ .Values.global.docker_image_repo }}/nginx:stable
                ports:
                  - containerPort: 80
                    name: http-web-svc
    - name: create-cnf-service-tf
      resource:
        action: apply
        #successCondition: status.succeeded > 0
        #failureCondition: status.failed > 3
        manifest: |
          apiVersion: v1
          kind: Service
          metadata:
            name: webapp-service
            namespace: argo-events
          spec:
            selector:
              app.kubernetes.io/name: proxy
            ports:
              - name: webapp-http
                protocol: TCP
                port: 80
                targetPort: http-web-svc
    - name: test-access-tf
      script:
        image: "{{ `{{workflow.parameters.image}}` }}"
        command: [python]
        source: |
          import time
          print('--Test access to CNF--')
          url = 'webapp-service.argo-events.svc.cluster.local'
          retry_max = 3
          retry_cnt = 0
          while retry_cnt < retry_max:
              print('Response status code: {}','200')
              time.sleep(1)
              retry_cnt += 1
              print('Monitoring access count: {}',retry_cnt)
          print('Completed')
    - name: test-service-tf
      inputs:
        artifacts:
          - name: pyrunner
            path: /usr/local/src/cn2_py_runner.py
            mode: 0755
            http:
              url: https://raw.githubusercontent.com/roshpr/argotest/main/cn2-experiments/cn2_py_runner.py
      script:
        image: "{{ `{{workflow.parameters.image}}` }}"
        command: [python]
        args: ["/usr/local/src/cn2_py_runner.py", "4"]
    - name: cnf-test-workflow
      dag:
        tasks:
          - name: create-cnf
            template: create-cnf-tf
          - name: create-cnf-service
            template: create-cnf-service-tf
          - name: test-connectivity
            template: test-access-tf
            dependencies: [create-cnf-service]
          - name: test-load
            template: test-service-tf
            dependencies: [create-cnf-service]