Upgrade CN2

Use this procedure to upgrade CN2 from a previous release to this release. If you want to upgrade CN2 from this release to some future release, then follow the upgrade procedure in the documentation for that future release.

The Contrail controller consists of Deployments and StatefulSets, which are configured for rolling updates. During the upgrade, the pods in each Deployment and StatefulSet are upgraded one at a time where applicable. The remaining pods in that Deployment or StatefulSet remain operational. This enables Contrail controller upgrades to be hitless.

The Contrail data plane consists of a DaemonSet with a single vRouter pod. During the upgrade procedure, this single pod is taken down and upgraded, so Contrail data plane upgrades are not hitless. If desired, migrate traffic off the node being upgraded before performing the upgrade.

You upgrade CN2 software by porting the contents of your existing manifests to the manifests for this release, and then applying the manifests for this release. All CN2 manifests must reference the same software release.

Ensure that this CN2 release is compatible with the OCP release that you're currently running. For the list of minimum compatible releases, see

If this CN2 release is not compatible with the OCP release that you're currently running, then you'll need to upgrade OpenShift first. See Upgrade OpenShift.


Before you upgrade, verify that each node has at least one allocatable pod available. The upgrade procedure temporarily allocates an additional pod, so a node running at maximum pod capacity cannot be upgraded. You can check pod capacity on a node by using the kubectl describe node command.
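For example, assuming a node named ocp-worker-0 (substitute one of your own node names):

```shell
# Example node name; substitute one of your own nodes.
NODE=ocp-worker-0

# Compare the "pods" count under Capacity and Allocatable with the
# "Non-terminated Pods" total in the output to confirm there is headroom
# for at least one additional pod.
kubectl describe node "$NODE" | grep -A 5 -i allocat
```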

  1. Download the manifests for this (new) release. To make this procedure easier to follow, we'll place these manifests into the manifests-new directory.
  2. Locate the (old) manifests that you used to create the existing CN2 installation. These could be the manifests you used in step 4.c in Before You Install or these could be manifests that you customized for your setup.
    We'll place the existing (old) manifests into the manifests-old directory.
  3. Port over any changes from the old manifests to the new manifests.
    The new manifests can contain constructs that are specific to this new release. Identify all changes that you've made to the old manifests and copy them over to the new manifests. This includes repository credentials, interface names, network configuration, and other customizations.

    If you have a large number of nodes, use node selectors to group your upgrades to a more manageable number.
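One way to group nodes is to label each group (for example, with kubectl label node node1 upgrade-group=group-1) and add a matching nodeSelector to the relevant manifest. This is a hypothetical sketch; the upgrade-group label is not part of CN2, just an illustration of selecting a subset of nodes:

```yaml
# Hypothetical example: "upgrade-group" is an arbitrary label used to
# restrict this workload to one group of nodes per round of upgrades.
spec:
  template:
    spec:
      nodeSelector:
        upgrade-group: group-1
```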

  4. Set the Contrail deployer upgrade strategy in the existing cluster to recreate the deployer during the upgrade.
    To do this, edit the deployer in the existing cluster. Look for the upgrade strategy and replace it with the Recreate strategy. Save and quit the file. The changes are applied automatically.
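You can edit the deployer Deployment with kubectl edit (the Deployment's name and namespace depend on your installation). The strategy stanza might change as follows; this is a sketch, and the default values in your deployer may differ:

```yaml
# Before: a Deployment's default rolling update strategy
strategy:
  rollingUpdate:
    maxSurge: 25%
    maxUnavailable: 25%
  type: RollingUpdate

# After: recreate the deployer during the upgrade
strategy:
  type: Recreate
```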
  5. Upgrade CN2 by applying all the manifests one at a time in ascending order.

    For convenience, you can use the following bash script. Before you use this script, make sure that:

    • you place all the (new) manifests you want to use into the manifests-new directory, including manifests from its subdirectories
    • you place all the (old) manifests that you used to create the existing cluster into the manifests-old directory
    • you don't place any other YAML files into the manifests-new or manifests-old directories

    The script loops through all the *.yaml files in the manifests-new directory in ascending order and performs a kubectl apply or a kubectl replace on each YAML file. Manifests that exist in both the manifests-new and manifests-old directories are 'replaced'. Manifests that only exist in the manifests-new directory are 'applied'.
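A sketch of such a script, assuming the directory layout described above (run it from the directory that contains manifests-new and manifests-old):

```shell
#!/usr/bin/env bash
# Apply the new-release manifests in ascending order. Manifests that also
# existed in the old release are replaced; manifests that are new in this
# release are applied.
upgrade_manifests() {
    local new old
    for new in manifests-new/*.yaml; do
        [ -e "$new" ] || continue   # skip if no YAML files are present
        old="manifests-old/$(basename "$new")"
        if [ -f "$old" ]; then
            kubectl replace -f "$new"   # manifest existed in the old release
        else
            kubectl apply -f "$new"     # manifest is new in this release
        fi
    done
}

upgrade_manifests
```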

    The pods in each Deployment and StatefulSet will upgrade one at a time. The vRouter DaemonSet will go down and come back up.

  6. Use standard kubectl commands to check on the upgrade.

    Check the status of the nodes.

    Check the status of the pods.
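For example:

```shell
# Confirm that all nodes are Ready.
kubectl get nodes

# Confirm that all pods across all namespaces are Running or Completed.
kubectl get pods -A
```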

    If some pods remain down, debug the installation as you normally do. Use the kubectl describe command to see why a pod is not coming up. A common error is a network or firewall issue preventing the node from reaching the Juniper Networks repository.
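For example (the pod name and namespace below are placeholders; substitute the failing pod's details):

```shell
# Placeholder pod name and namespace; substitute your own.
POD=failing-pod
NS=contrail

# The Events section at the bottom of the describe output shows why the
# pod is not coming up (for example, an image pull failure).
kubectl describe pod "$POD" -n "$NS"

# Container logs can provide more detail once a container has started.
kubectl logs "$POD" -n "$NS"
```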