
Upgrade Contrail Service Orchestration from Release 4.1.2 to Release 5.1.2

 
Summary

Follow this procedure to upgrade from CSO Release 4.1.2 to CSO Release 5.1.2.

The upgrade procedure supports upgrading only a CSO Release 4.1.2 medium deployment to a CSO Release 5.1.2 HA deployment.

You require three new servers to install the CSO 5.1.2 HA solution. For details, refer to Hardware and Software Requirements for Contrail Service Orchestration.



Impact of the CSO Upgrade

Table 1 describes the impact of the CSO upgrade from Release 4.1.2 to Release 5.1.2.

Table 1: Impact of the CSO upgrade from Release 4.1.2 to 5.1.2

Old site WAN IP: Public; new site WAN IP: Public

  • Before the site upgrade: Site-to-site tunnels are supported. Old sites can establish site-to-site tunnels with the new sites with public IPs.

  • After the site upgrade: Site-to-site tunnels are supported. Old sites can establish site-to-site tunnels with the new sites with public IPs.

Old site WAN IP: Public; new site WAN IP: Private (full-cone/restricted NAT)

  • Before the site upgrade: Site-to-site tunnels are not supported. You need to create interfaces on the older sites for destination NAT to connect to the sites with private IP addresses.

  • After the site upgrade: Site-to-site tunnels are supported. The site-to-site tunnels are established after the site upgrade.

Old site WAN IP: Public; new site WAN IP: Private (symmetric NAT)

  • Before the site upgrade: Site-to-site tunnels are not supported. Symmetric NAT interfaces are not supported.

  • After the site upgrade: Site-to-site tunnels are not supported. Symmetric NAT interfaces are not supported.

Table 2: Impact on sites and tenants after the CSO upgrade from Release 4.1.2 to 5.1.2

  • CSO 4.1.2 tenants and sites onboarded with CSO 4.1.2: Tenant public pool, LANs with public IPs, site NAT pool on WAN, PE multihoming, and shared bearer WAN links are not supported.

  • CSO 4.1.2 tenants with sites onboarded after the upgrade to CSO 5.1.2: Site NAT pool on WAN, PE multihoming, and shared bearer WAN links are supported; tenant public pool and LANs with public IPs are not supported.

  • New tenants created after the upgrade to CSO 5.1.2: Tenant public pool, LANs with public IPs, site NAT pool on WAN, PE multihoming, and shared bearer WAN links are all supported.

Back up Contrail Service Orchestration 4.1.2 Databases

  1. Download the CSO Release 5.1.2 tar file from the CSO Downloads page to the CSO 4.1.2 installer VM.
  2. Extract the upgrade512_FRS.tgz file from the tar file to /deployments/central/file_root/, save it as upgrade51, and run the following Salt command.
    salt '*' state.apply upgrade51 saltenv=central

    python /usr/local/bin/setup_cso51_migration.py

    Select 0 to install the patch script.

  3. Install nfs-client.

    salt '*' state.apply upgrade51.install_nfs_client saltenv=central > nfs_client_status
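
    The redirect captures the Salt output in the nfs_client_status file. As an optional check (not part of the original procedure), you can scan the file for failed states before continuing:

    grep -i failed nfs_client_status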

  4. Synchronize the data between nodes.

    cso_backupnrestore -b nodetool_repair

  5. Back up the CSO Release 4.1.2 data by using the cso_backupnrestore command.

    cso_backupnrestore -b backup -s backup412

The cso_backupnrestore script backs up the following components:

  • Cassandra

  • Elasticsearch

  • ArangoDB

  • MariaDB

  • Etcd

  • Zookeeper

  • Icinga

  • Swift

  • CAN

  • HAProxy certificates

  • CSO 4.1.2 installation configs

Upgrade Contrail Service Orchestration

Before you begin

You must shut down the centrallbvm1, centrallbvm2, centrallbvm3, sblb1, sblb2, VRR1, and VRR2 VMs in CSO 4.1.2 before starting the CSO 5.1.2 upgrade. This is required to replicate their IP addresses in the CSO 5.1.2 setup.

You will reuse the four public IP addresses from CSO 4.1.2 for the CSO 5.1.2 deployment.

The four public IP addresses are:

  • CSO 4.1.2 Central VIP (HAPROXY)

  • SBLB VIP

  • VRR1

  • VRR2

The devices in CSO 5.1.2 will use the same SBLB certificate that was used in CSO 4.1.2.

Note

See Minimum Requirements for Servers and VMs for details on the VMs and associated resources required for CSO 5.1.2 servers.

Make sure you have the required NAT rules in place.

Sample SRX config
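
The sample configuration itself is not reproduced here. As a hedged illustration only, the following sketch shows destination NAT on an SRX fronting one of the CSO VIPs; the zone name, pool name, rule names, and all IP addresses (192.0.2.10 as the public address, 10.1.1.10 as the internal VIP) are placeholders, not values from this procedure:

    set security nat destination pool cso-central-vip address 10.1.1.10/32
    set security nat destination rule-set cso-nat from zone untrust
    set security nat destination rule-set cso-nat rule central-vip match destination-address 192.0.2.10/32
    set security nat destination rule-set cso-nat rule central-vip then destination-nat pool cso-central-vip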

Upgrading CSO 4.1.2 to CSO 5.1.2

  1. Reuse the four public IP addresses from CSO 4.1.2 for the CSO 5.1.2 deployment.
  2. Copy the backup directory of CSO 4.1.2 to CSO 5.1.2.
  3. Provision the VMs in the new servers of CSO 5.1.2. For details, refer to Provision VMs on Contrail Service Orchestration Servers.
  4. During the provisioning, select yes when prompted for an upgrade.
  5. Provide the complete backup path of the CSO 4.1.2 settings.yaml file to be restored.

    For example: /root/backup412/config_backups/.config/settings.yaml

    Make sure that the CSO 5.1 and CSO 4.1 infrastructure passwords are the same.

  6. Run ./get_vm_details.sh to identify the IP address of the startupserver_1 VM.

    ./get_vm_details.sh

  7. Copy the backup directory of CSO 4.1.2 to the CSO 5.1.2 startupserver_1 VM.
  8. Run the cso_backupnrestore script from the CSO 5.1.2 startupserver_1 VM.

    cso_backupnrestore -b backup -s <backupname>

    For example: cso_backupnrestore -b backup -s backup512

    The command creates a folder named backupname under the /backup directory on the startupserver_1 VM.
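
    To find the timestamped path that step 11 uses as backuppath, you can optionally list the backup folder; the /backups parent shown here follows the example path used in step 11 and may differ in your setup:

    ls /backups/backup512/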

  9. Run the following command on the CSO 5.1.2 startupserver_1 VM.

    salt '*' state.apply upgrademigration41 saltenv=central

  10. Run the pre_restore_task script.
    python /usr/local/bin/pre_restore_task.py
  11. Restore the data by using the cso_backupnrestore script.

    Note the 5.1 backup path from step 10. Here, backuppath is that 5.1 backup path, for example, /backups/backup512/2020-01-09T18:47:08/.

    #cso_backupnrestore -b restore -s backuppath -t '*' -c 'mariadb'

    #cso_backupnrestore -b restore -s backuppath -t '*' -c 'zookeeper'

    #cso_backupnrestore -b restore -s backuppath -t '*' -c 'elasticsearch'

    #cso_backupnrestore -b restore -s backuppath -t '*' -c 'arangodb'

    #cso_backupnrestore -b restore -s backuppath -t '*' -c 'icinga'

    #cso_backupnrestore -b restore -s backuppath -t '*' -c 'cassandra'

    #cso_backupnrestore -b restore -s backuppath -t '*' -c 'swift'

    #cso_backupnrestore -b restore -s backuppath -t '*' -c 'mariadb'

    If the restore procedure fails for any of the above components, retry the restore for only those components.
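
    For example, if only the Cassandra restore fails, rerun the restore for that component alone, reusing the same command pattern:

    #cso_backupnrestore -b restore -s backuppath -t '*' -c 'cassandra'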

  12. Synchronize the data between nodes.

    cso_backupnrestore -b nodetool_repair

  13. Copy the certificate from the CSO 4.1.2 backup folder to the SBLB HA Proxy.

    salt-cp -G "roles:haproxy_confd_sblb" /root/backups/config_backups/haproxycerts/minions/minions/csp-regional-sblb1.TNL2OQ.regional/files/etc/pki/tls/certs/ssl_cert.pem /etc/pki/tls/certs

    salt-cp -G "roles:haproxy_confd_sblb" /root/backups/config_backups/haproxycerts/minions/minions/csp-regional-sblb1.TNL2OQ.regional/files/etc/pki/tls/certs/ssl_cert.crt /etc/pki/tls/certs

    Restart the SBLB HA Proxy.

    salt -C "G@roles:haproxy_confd_sblb" cmd.run "service haproxy restart"
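
    As an optional check that is not part of the original procedure, you can verify that HAProxy restarted cleanly on the SBLB nodes through the same Salt targeting (the same pattern, with the haproxy_confd role, applies to the Central HA Proxy in the next step):

    salt -C "G@roles:haproxy_confd_sblb" cmd.run "service haproxy status"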

  14. Copy the certificate from the CSO 4.1.2 backup folder to the Central HA Proxy.

    salt-cp -G "roles:haproxy_confd" /root/backups/config_backups/haproxycerts/minions/minions/csp-central-lbvm1.HBLGHQ.central/files/etc/pki/tls/certs/ssl_cert.pem /etc/pki/tls/certs

    salt-cp -G "roles:haproxy_confd" /root/backups/config_backups/haproxycerts/minions/minions/csp-central-lbvm1.HBLGHQ.central/files/etc/pki/tls/certs/ssl_cert.crt /etc/pki/tls/certs

    Restart the Central HA Proxy.

    salt -C "G@roles:haproxy_confd" cmd.run "service haproxy restart"

  15. Run the following commands on the installer VM to update the Nginx certificates.

    kubectl get secret -n central | grep cso-ingress-tls

    kubectl delete secret cso-ingress-tls -n central

    kubectl create secret tls cso-ingress-tls --key /root/backups/config_backups/haproxycerts/minions/minions/csp-central-lbvm1.5R8JKN.central/files/etc/pki/tls/certs/ssl_cert.key --cert /root/backups/config_backups/haproxycerts/minions/minions/csp-central-lbvm1.5R8JKN.central/files/etc/pki/tls/certs/ssl_cert.crt -n central
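
    Optionally, confirm that the secret was re-created before proceeding:

    kubectl get secret cso-ingress-tls -n central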

  16. Upgrade to CSO 5.1.2 by running the upgrade.sh script.
  17. Restore the SD-WAN and security reports.

    cso_backupnrestore -b restore -s backuppath -t '*' -c 'swift_report' -r 'yes'

  18. Restart all the fmpm-provider-core pods by deleting them.

    root@startupserver1:~# kubectl get pods -n central|grep fmpm-provider-core

    root@startupserver1:~# kubectl delete pod csp.csp-fmpm-provider-core-647ff6598d-4qxxd csp.csp-fmpm-provider-core-647ff6598d-94wdx csp.csp-fmpm-provider-core-647ff6598d-dt6vj csp.csp-fmpm-provider-core-647ff6598d-hbnw2 csp.csp-fmpm-provider-core-647ff6598d-mx8fn csp.csp-fmpm-provider-core-647ff6598d-zd2zt -n central
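
    Because the pod names differ in every deployment, you can optionally chain the two commands instead of pasting the names by hand; this one-liner is a convenience sketch, not part of the original procedure:

    kubectl get pods -n central | grep fmpm-provider-core | awk '{print $1}' | xargs kubectl delete pod -n central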

  19. Restore the Contrail Analytics Node (CAN) database.

    ./python.sh upgrade/migration_scripts/common/can_migration.py



    Copy the analyticsdb backup from the CSO 4.1.2 backup folder to the respective CAN node in CSO 5.1.2.

    The analyticsdb backup files are located at /root/backups/backup411/2020-05-21T00:43:50/central/can/can<x>

    ssh root@<new-can-ip-[123]>

    docker cp 0000/mc-* analyticsdb:/root

    docker exec -it analyticsdb bash

    mv /root/mc-* /var/lib/cassandra/data/ContrailAnalyticsCql/statstablebystrtagv3-c5e9b4c056f711ea8a948909f467ce30 # The path may differ based on the UUID

    cd /var/lib/cassandra/data/ContrailAnalyticsCql/statstablebystrtagv3-c5e9b4c056f711ea8a948909f467ce30

    chown -R cassandra:cassandra *

    nodetool refresh -- ContrailAnalyticsCql statstablebystrtagv3
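
    As an optional sanity check, you can confirm that the Cassandra instance inside the analyticsdb container is healthy after the refresh:

    nodetool status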

After a successful upgrade, CSO is functional and you can log in to the Administrator Portal and the Customer Portal.