
Before You Install CN2 Pipelines

SUMMARY The following procedures help you obtain the prerequisites and gather the values used to fill in the values.yaml file for the CN2 Pipelines Helm chart.

Install Helm

Before installing the CN2 Pipelines chart, you need to install Helm 3 in the management cluster. Helm helps you manage Kubernetes applications. Helm charts help you define, install, and upgrade even the most complex Kubernetes application.

Run the following command to download and install the latest version of Helm 3:
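
For example, the Helm project's installer script downloads and installs the latest Helm 3 release (one common method; see https://helm.sh/docs/intro/install/ for alternatives such as package managers):

```
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
helm version
```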

Create Service Account and Token in Kubernetes

You need a working service account. The service account and its token give CN2 Pipelines API access to the Kubernetes cluster, from both inside and outside the cluster. This topic describes how to create the service account, token, role, and role bindings.

Note:

Throughout these procedures, cn2pipelines is used as an example.

For Kubernetes Version 1.23 or Earlier

Perform these steps on the CN2 cluster.

To create a service account and token:

  1. Create the namespace if one does not already exist.
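
    For example, using the cn2pipelines name from these procedures:

    ```
    kubectl create namespace cn2pipelines
    ```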

  2. Create a service account named cn2pipelines.
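
    For example:

    ```
    kubectl create serviceaccount cn2pipelines -n cn2pipelines
    ```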

  3. Run the describe command to fetch the token.
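
    For example:

    ```
    kubectl describe serviceaccount cn2pipelines -n cn2pipelines
    ```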

    Output:
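
    Illustrative output with placeholder secret names; on OpenShift, the Image pull secrets field also lists a generated dockercfg secret:

    ```
    Name:                cn2pipelines
    Namespace:           cn2pipelines
    Labels:              <none>
    Annotations:         <none>
    Image pull secrets:  <none>
    Mountable secrets:   cn2pipelines-token-abc12
    Tokens:              cn2pipelines-token-abc12
    Events:              <none>
    ```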

    By default, Kubernetes version 1.23 or earlier creates the token as a secret when you create the service account.

  4. To obtain the secret value and bearerToken for Kubernetes and OpenShift:

    1. For CN2 with Kubernetes, retrieve the Mountable secrets from the Step 3 output. Run the describe secret command to get the bearerToken for the service account. The bearerToken is needed when you update the values.yaml file.
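
      For example, using the placeholder secret name from the Step 3 output above:

      ```
      kubectl describe secret cn2pipelines-token-abc12 -n cn2pipelines
      ```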

      Output:
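
      Illustrative output; the token field holds the bearerToken (truncated placeholder shown):

      ```
      Name:         cn2pipelines-token-abc12
      Namespace:    cn2pipelines
      Labels:       <none>
      Annotations:  kubernetes.io/service-account.name: cn2pipelines
                    kubernetes.io/service-account.uid: 0f1e2d3c-...
      Type:  kubernetes.io/service-account-token

      Data
      ====
      ca.crt:     1099 bytes
      namespace:  12 bytes
      token:      eyJhbGciOiJSUzI1NiIsImtpZCI6...
      ```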

    2. For CN2 with Red Hat OpenShift, use the Image pull secrets value from the Step 3 output as the secret. Run the describe secret command to get the bearerToken for the service account. The bearerToken is needed when you update the values.yaml file.
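
      For example, using a placeholder dockercfg secret name taken from the Image pull secrets field:

      ```
      kubectl describe secret cn2pipelines-dockercfg-xyz98 -n cn2pipelines
      ```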

      Output:

  5. Create a ClusterRole and ClusterRoleBinding to give the service account appropriate permissions.

    1. Create a ClusterRole and name the file clusterrole-cn2pipelines.yaml.
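
      A sketch of clusterrole-cn2pipelines.yaml; the exact rules CN2 Pipelines requires come from your release documentation, so treat the broad permissions here as an assumption and scope them down as needed:

      ```
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRole
      metadata:
        name: cn2pipelines
      rules:
        # Broad access shown for illustration only.
        - apiGroups: ["*"]
          resources: ["*"]
          verbs: ["*"]
      ```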

    2. Apply the clusterrole-cn2pipelines.yaml file that you just created.
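
      For example:

      ```
      kubectl apply -f clusterrole-cn2pipelines.yaml
      ```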

    3. Create a ClusterRoleBinding file and name the file clusterrolebinding-cn2pipelines.yaml.
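
      A sketch of clusterrolebinding-cn2pipelines.yaml, binding the ClusterRole above to the cn2pipelines service account:

      ```
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: cn2pipelines
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: cn2pipelines
      subjects:
        - kind: ServiceAccount
          name: cn2pipelines
          namespace: cn2pipelines
      ```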

    4. Apply the clusterrolebinding-cn2pipelines.yaml file that you just created.
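
      For example:

      ```
      kubectl apply -f clusterrolebinding-cn2pipelines.yaml
      ```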

    The service account is now created with permissions.

For Kubernetes Version 1.24 or Later

For Kubernetes version 1.24 and later, creating a service account does not create a secret automatically.

To manually create a token for this service account:

  1. Create the namespace if one does not already exist.

  2. Create a service account named cn2pipelines.
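
    Steps 1 and 2 use the same commands as in the earlier procedure:

    ```
    kubectl create namespace cn2pipelines
    kubectl create serviceaccount cn2pipelines -n cn2pipelines
    ```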

  3. Create a token for the service account cn2pipelines.
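
    On Kubernetes 1.24 and later, one common approach is to create a long-lived token Secret bound to the service account, sketched below (kubectl create token cn2pipelines -n cn2pipelines is an alternative if a short-lived token is acceptable):

    ```
    kubectl apply -n cn2pipelines -f - <<EOF
    apiVersion: v1
    kind: Secret
    metadata:
      name: cn2pipelines-token
      annotations:
        kubernetes.io/service-account.name: cn2pipelines
    type: kubernetes.io/service-account-token
    EOF
    ```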

  4. Create a ClusterRole and ClusterRoleBinding to give the service account appropriate permissions.

    1. Create a ClusterRole and name the file clusterrole-cn2pipelines.yaml.

    2. Apply the clusterrole-cn2pipelines.yaml file that you just created.

    3. Create a ClusterRoleBinding file and name the file clusterrolebinding-cn2pipelines.yaml.

    4. Apply the clusterrolebinding-cn2pipelines.yaml file that you just created.

    The service account is now created with permissions.

  5. Run the describe command to fetch the token.

    Output:

  6. Retrieve the Mountable secrets value from the Step 5 output. Run the describe secret command to get the bearerToken for the service account.

    Output:

Verify Kubeconfig

Before creating the kubeconfig file as base64, verify that the kubeconfig works from the management cluster.

  1. Copy the kubeconfig file from CN2 to the management cluster. You can do this with a copy and paste.
  2. Run the following command to view the nodes on the CN2 cluster:
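
    For example, assuming the copied kubeconfig is saved as ./kubeconfig-cn2 (a hypothetical path):

    ```
    kubectl get nodes --kubeconfig ./kubeconfig-cn2
    ```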
  3. Run the following command to view all the pods on the CN2 cluster:
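
    Similarly:

    ```
    kubectl get pods -A --kubeconfig ./kubeconfig-cn2
    ```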

Create CN2 Cluster Kubeconfig as Base64

You need the kubeconfig encoded in base64 format for the values.yaml file.

  1. Run the following command to create the kubeconfig as base64:
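
    For example, assuming the CN2 kubeconfig file is named kubeconfig (the -w 0 flag disables line wrapping on GNU base64):

    ```
    base64 -w 0 kubeconfig > kubeconfig.b64
    ```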
  2. Update the kubeconfig_secret.yaml with the generated base64 value.

    You can locate the YAML at the following file path in your downloaded CN2 Pipelines files:

    charts/workflow-objects/templates/kubeconfig_secret.yaml

    Example kubeconfig_secret.yaml
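
    A sketch of the secret with the generated value filled in; everything except the cn2-cluster-kubeconfig name is an assumption, so keep the structure that ships in the template:

    ```
    apiVersion: v1
    kind: Secret
    metadata:
      name: cn2-cluster-kubeconfig  # do not change this name
    type: Opaque
    data:
      kubeconfig: <paste-the-generated-base64-value-here>
    ```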

    Note:

    Do not change the cn2-cluster-kubeconfig name.

Create a Personal Access Token for GitLab

To create a personal access token, use the following procedure from GitLab:

  1. Select Edit profile.
  2. In the left pane, select Access Tokens.
  3. Enter a name and (optional) expiration date for the token.
    Default expiration is 30 days.
  4. Select the desired scopes.
  5. Select Create personal access token.
  6. Save the personal access token somewhere safe. After you leave the page, you no longer have access to the token.
    For more information, see GitLab Personal Access Token.

Mountpath and Profiles

You need to create the mountpath folder, then create your profiles inside it. For example, if your mountpath is /opt/cn2_workflows, as defined in the values.yaml file, create a folder named /opt/cn2_workflows.
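For example, assuming the /opt/cn2_workflows mountpath from values.yaml:

```
mkdir -p /opt/cn2_workflows
```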

Create a Sample ConfigMap in Git Server Folder

You need to create a sample ConfigMap before installing CN2 Pipelines. Argo CD applies this ConfigMap as part of the CN2 Pipelines installation.

  1. Run the following command to create a ConfigMap with the filename cn2configmap:
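
    A hedged example that writes a minimal ConfigMap manifest to cn2configmap.yaml; the example-key/example-value pair is an assumption:

    ```
    kubectl create configmap cn2configmap \
      --from-literal=example-key=example-value \
      --dry-run=client -o yaml > cn2configmap.yaml
    ```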

    Output:
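
    The generated cn2configmap.yaml then contains something like:

    ```
    apiVersion: v1
    data:
      example-key: example-value
    kind: ConfigMap
    metadata:
      creationTimestamp: null
      name: cn2configmap
    ```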

  2. Commit the ConfigMap file to the CN2 configuration folder identified in your Git server branch.

Apply Ingress and Update /etc/hosts for an OpenShift Deployment

Perform these steps on the CN2 cluster to deploy the ingress components, as mentioned in the Additional Prerequisites Only for OpenShift.

  1. Check that /etc/hosts contains entries from the OpenShift cluster. For example:
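
    Illustrative entries with placeholder addresses and cluster domain:

    ```
    10.0.10.5   api.mycluster.example.com
    10.0.10.6   oauth-openshift.apps.mycluster.example.com
    10.0.10.6   console-openshift-console.apps.mycluster.example.com
    ```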
  2. Run the following command to locate the files you want to apply.
  3. Apply the following YAML files.
  4. Wait for all the pods to come up, then apply these two YAMLs.

    The /etc/hosts for OpenShift is updated.