Guidance for Migrating to CentOS 7.x for NorthStar 6.0.0 and Later

Introduction

Note:

If you are already using CentOS 7.x (7.6, 7.7, or 7.9), RHEL 7.x (7.6, 7.7, or 7.9), or RHEL 8.6, you do not need these instructions. Instead, follow the installation procedures in the NorthStar Controller/Planner Getting Started Guide to install or upgrade your NorthStar application.

These instructions are intended to help you migrate a working NorthStar 5.1.0 three-node cluster running on CentOS or RHEL 6.10 to a NorthStar 5.1.0 three-node cluster on CentOS 7.x, RHEL 7.x, or RHEL 8.6. This creates an upgrade path from NorthStar 5.1.0 to NorthStar 6.0.0 or later, as CentOS and RHEL 6.x are no longer supported. If you are running a VM, or if you have a current backup plan in production, we recommend you take a snapshot or create a backup before proceeding: these instructions involve wiping your HDD/SSD and removing all data on those drives.

Note:

This guidance assumes familiarity with the NorthStar installation and configuration process. If you have never installed/configured NorthStar before, we recommend you read the NorthStar Getting Started Guide for background, and have it available for reference.

You must upgrade the operating system first because installing NorthStar 6.0.0 or later requires CentOS 7.x, RHEL 7.x, or RHEL 8.6. The order of these procedures is important:

  1. Back up your data.

    The following files should be backed up:

    • /opt/northstar/data/*.json

    • /opt/northstar/data/northstar.cfg*

    • /opt/northstar/data/crpd/juniper.conf*

    • /opt/pcs/db/sys/npatpw

    • Output from the /opt/northstar/utils/cmgd_cli -c "show config" command.

  2. Upgrade the operating system to CentOS 7.x, RHEL 7.x, or RHEL 8.6.

  3. Install NorthStar 5.1.0 on the upgraded operating system.

  4. When all nodes are running CentOS 7.x, RHEL 7.x, or RHEL 8.6 with NorthStar 5.1.0, upgrade NorthStar to 6.0.0 or later.
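The cMGD configuration capture in step 1 can be scripted. A minimal sketch follows; the output path is an arbitrary example, and the script simply records the gap if it is run on a host without the CLI:

```shell
# Save the cMGD configuration alongside the other backup files.
# OUT is an example path; point it at your backup drive instead.
OUT=${OUT:-/tmp/cmgd_show_config.txt}
if [ -x /opt/northstar/utils/cmgd_cli ]; then
    /opt/northstar/utils/cmgd_cli -c "show config" > "$OUT"
else
    # Not on a NorthStar node; record that so the gap is visible.
    echo "cmgd_cli not found" > "$OUT"
fi
echo "saved cMGD config capture to $OUT"
```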

Example Scenario

For example purposes, these instructions assume you are migrating from CentOS 6.10 to CentOS 7.x, and your network configuration includes:

  • Three NorthStar application servers in a cluster

  • Three analytics servers in a cluster

  • Three collector nodes

Your actual operating system version and network topology might be different, but the principles still apply.

We recommend backing up your operating system files and directories so you have a reference, since some of these files differ between CentOS 6.x and CentOS 7.x. Back up the following operating system files and directories, and save them to an external or network drive:

  1. /etc/selinux/config

  2. /etc/sysconfig/

  3. /etc/hosts

  4. /etc/ntp.conf

  5. /etc/resolv.conf

  6. /etc/ssh/

  7. /root/.ssh/

Back up these NorthStar files and directories, and save them to an external or network drive:

  1. /opt/pcs/db/sys/npatpw

  2. /opt/northstar/data/northstar.cfg

  3. /opt/northstar/data/*.json

  4. /opt/northstar/data/junosvm.conf

  5. /opt/northstar/northstar.env

  6. /opt/northstar/thirdparty/netconfd/templates

  7. /opt/northstar/saved_models (if used for saving NorthStar Planner projects)
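The two backup lists above can be collected with a short script. A sketch, assuming your external or network drive is mounted at a path of your choosing (BACKUP_DIR below is an example); files that do not exist on a given node are simply skipped:

```shell
# Copy the OS and NorthStar files listed above into one timestamped
# directory; replace BACKUP_DIR with your external or network mount.
BACKUP_DIR=${BACKUP_DIR:-/tmp/ns-backup-$(date +%Y%m%d%H%M%S)}
mkdir -p "$BACKUP_DIR"
FILES="/etc/selinux/config /etc/sysconfig/ /etc/hosts /etc/ntp.conf \
/etc/resolv.conf /etc/ssh/ /root/.ssh/ \
/opt/pcs/db/sys/npatpw /opt/northstar/data/northstar.cfg \
/opt/northstar/data/junosvm.conf /opt/northstar/northstar.env \
/opt/northstar/thirdparty/netconfd/templates /opt/northstar/saved_models"
for f in $FILES /opt/northstar/data/*.json; do
    # -a preserves ownership and permissions; skip anything not present
    [ -e "$f" ] && cp -a "$f" "$BACKUP_DIR"/ || true
done
echo "backup written to $BACKUP_DIR"
```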

The Basic Work Flow

For any node, whether it is a NorthStar application node, an analytics node, or a collector node, the work flow to upgrade your operating system while preserving your clusters and data is essentially the same:

  1. Power down one standby node in the cluster setup.
  2. Boot that node from the operating system minimal ISO.
  3. Install the operating system on the node.
  4. Run yum -y update to address any critical or security updates.
  5. Install the recommended packages: net-tools, bridge-utils, wget, ntp, telnet, ksh, and java-1.8.0-openjdk-headless.
  6. Install the NorthStar 5.1.0 application on this same node, setting it up as a standalone host.
    Note:

    For NorthStar application nodes, you will need a new license because the interface names change from ethx to ensx when you upgrade the operating system. You will not need a new license for analytics or collector nodes.

  7. For NorthStar application nodes, launch the web UI at https://northstar_ip_address:8443 to confirm that the license is working and that you can log in successfully.
  8. You can check the status of the NorthStar processes by running the supervisorctl status command.

In this procedure, you start by upgrading the operating system on your analytics cluster, then your NorthStar application cluster, and finally your collector cluster. However, this order is not a strict requirement. When all nodes in all clusters are running the upgraded operating system and NorthStar 5.1.0, you then upgrade to NorthStar 6.0.0 or later.

Upgrade the Operating System on Your Analytics Nodes

For analytics nodes, Elasticsearch will self-form the cluster and distribute the data per the replication policy. Therefore, there is no need to first delete the node from Elasticsearch history. To migrate your analytics cluster, use the following procedure:

  1. Install CentOS 7.7, 7.9, or 8.6 on a standby analytics node, including the previously listed recommended packages.
  2. Install NorthStar-Bundle-5.1.0-20191210_220522_bb37a329b_64.x86_64.rpm on the node where you have the freshly installed operating system.
  3. Copy the SSH keys from the existing active node in the analytics cluster and all application nodes to the new analytics node:
  4. Working from an existing node in the cluster, add the new analytics node into the cluster:
    1. From net_setup.py, select Analytics Data Collector Setting (G) for external standalone/cluster analytics server setup.
    2. Select Add new Collector node to existing cluster (E).

      You can use the previous node’s ID and other setup information.

After this process is completed for the first node, repeat the steps for the remaining analytics cluster nodes. When the process is complete on all three nodes, your analytics cluster will be up and running with CentOS 7.7 and NorthStar 5.1.0.

The following are useful Elasticsearch (REST API) commands you can use before, during, and after upgrading your operating system. Run them from an existing node in the analytics cluster.

  1. curl -X GET "localhost:9200/_cluster/health?pretty"

  2. curl -X GET "localhost:9200/_cat/nodes?v"

  3. curl -X GET "localhost:9200/_cat/indices"

  4. curl -X GET "localhost:9200/_cat/shards"

Use the following command to check that all nodes in your analytics cluster are up:
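One way to check is to count the rows returned by the `_cat/nodes` endpoint shown above. A sketch follows, run here against an abbreviated sample response so the parsing is visible; on a live cluster, replace SAMPLE with the actual curl output:

```shell
# On a live node you would capture:
#   SAMPLE=$(curl -s "localhost:9200/_cat/nodes?v")
# An abbreviated sample response stands in here.
SAMPLE='ip        node.role master name
10.0.0.1  mdi       *      analytics-1
10.0.0.2  mdi       -      analytics-2
10.0.0.3  mdi       -      analytics-3'
# Drop the header row and count the remaining node rows.
NODES=$(printf '%s\n' "$SAMPLE" | tail -n +2 | wc -l)
echo "analytics nodes joined: $NODES"
```

For a healthy three-node analytics cluster, the count should be 3 and `_cluster/health` should report a green status.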

Upgrade the Operating System on Your NorthStar Application Nodes

Use the following procedure to upgrade your operating system on the NorthStar application nodes:

Note:

See the Replace a Failed Node if Necessary section of the NorthStar Getting Started Guide for reference.

  1. Install CentOS 7.7 on one of the NorthStar application standby nodes (server or VM), including the recommended packages listed previously.
  2. Install the NorthStar 5.1.0 application software (NorthStar-Bundle-5.1.0-20191210_220522_bb37a329b_64.x86_64.rpm). It is important to provide the installation script with the same database password that is on the existing nodes. If necessary, you can reset the database passwords on the existing nodes for consistency before adding the node into the cluster.
    1. Restore /opt/pcs/db/sys/npatpw and set its ownership.

      Copy your backed-up npatpw file to /opt/pcs/db/sys/npatpw. Then run the chown pcs:pcs /opt/pcs/db/sys/npatpw command.

    2. Update /opt/northstar/thirdparty/netconfd/templates with your backed-up templates.
  3. Copy the SSH keys from the existing active node in the NorthStar cluster and all application nodes.
  4. From an existing node in the cluster, delete the knowledge of the CentOS 6.x node from the cluster, then add it back as a new node:
    1. The example below identifies the node that needs to be deleted (the one that is down), removes it from Cassandra, and then observes the output of status commands as the new node is added back into the cluster (UN = up/normal, DN = down/normal, UJ = up/joining). The goal is to replace all nodes and see them return to UN status.
    2. It is important that you resynchronize all your SSH keys once you have rebuilt each node, which includes updating the SSH key on your JunosVM.
    3. After the SSH keys are updated on each JunosVM, back up any changes made to the JunosVM by using the net_setup.py script and selecting Option D > Option 1.
    4. From the net_setup.py main menu, select HA Setup (E).

      Select Add a new node to existing cluster (J), using the existing node data in the script, and allow HA deployment to complete.

    5. Monitor failover to ensure that it completes properly:
      1. Check the output of the supervisorctl status command on the current active node to ensure all processes come up.

      2. Check the cluster status using the following command:

      3. On the node with the VIP (the active node), test failover using the following command:

      4. On the restored node promoting to VIP, use the following command to observe the failover process:

      5. Test the failover process between the three nodes. Optionally, you can add host priority using the net_setup.py script option E (HA Settings).

      6. Run the following command to determine which nodes are currently standby nodes. They should be the two with the higher priority numbers:

      7. Check the NorthStar web UI again for each node while it is the active node, to make sure the data is synchronized properly between the three nodes.

      8. At this point, you should have a fully functioning NorthStar 5.1.0 three-node cluster running on the CentOS 7.7 operating system.
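The UN/DN/UJ codes described above come from Cassandra's `nodetool status` output (the exact path to nodetool depends on your NorthStar install). A sketch of checking that every node has returned to UN, shown against captured sample node rows so it is self-contained:

```shell
# Sample 'nodetool status' node rows (columns abbreviated); on a live
# node, replace SAMPLE with the node lines of the real output.
SAMPLE='UN  10.0.0.11  256.3 KB  256  rack1
UJ  10.0.0.12  198.1 KB  256  rack1
DN  10.0.0.13  ?         256  rack1'
# Count nodes not yet Up/Normal; the goal is zero.
NOT_UN=$(printf '%s\n' "$SAMPLE" | grep -cv '^UN')
echo "nodes not yet UN: $NOT_UN"
```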

Upgrade the Operating System on Your Collector Nodes

Collector nodes operate independently, but are tied to the application VIP. They can be removed and reinstalled independently. Proceed with reinstallation one node at a time.

All three collectors are currently running CentOS 6.10 with NorthStar 5.1.0 (NorthStar-Bundle-5.1.0-20191210_220522_bb37a329b_64.x86_64.rpm).

If you have not already done so, back up the NorthStar files and directories listed previously, and save them to an external or network drive.

  1. Perform a minimal installation of the CentOS 7.7 operating system on any one of the collector nodes.
  2. Install the following recommended packages: net-tools, bridge-utils, wget, ntp, telnet, ksh, java-1.8.0-openjdk-headless.
  3. Bring the system back online with the same IP address. Download the NorthStar 5.1.0 package and install it.
  4. Run the collector install script.
  5. Repeat this process on the remaining collector nodes, one at a time.
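The package installation in step 2 can be done in a single yum transaction. The sketch below composes the command rather than executing it, since it must be run as root on the freshly installed node (after yum -y update):

```shell
# Recommended packages for a fresh CentOS 7.x minimal install, from the
# list in step 2; run the printed command as root on the node itself.
PACKAGES="net-tools bridge-utils wget ntp telnet ksh java-1.8.0-openjdk-headless"
echo "yum -y install $PACKAGES"
```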

Special Notes for Nested JunosVM Nodes

The following additional procedure applies to migrating a nested JunosVM setup:

  1. Save a copy of the JunosVM configuration from /opt/northstar/data/junosvm/junosvm.conf.

  2. Use the net_setup.py script to assign the JunosVM IP address back to the JunosVM.

  3. Copy your backup of junosvm.conf into /opt/northstar/data/junosvm/junosvm.conf.

  4. Restart the JunosVM:

  5. Observe the JunosVM boot process using this command:

Upgrade All Nodes to NorthStar 6.0.0 or Later

Now that your network and configuration are upgraded to CentOS 7.7, you can proceed with upgrading NorthStar to 6.0.0 or later.

Analytics Node Upgrade to NorthStar 6.0.0 or Later

Upgrade the nodes in the analytics cluster using the following procedure:

  1. Determine which nodes are standby versus active using this command:
  2. Back up any NorthStar files to an external or network directory.
  3. Download the official NorthStar 6.0.0 or later RPM.
  4. Install NorthStar using this command:
  5. Install the analytics application using this command:
  6. Netflowd will be in a FATAL state until the NorthStar application nodes are upgraded and the analytics data collector settings are redeployed, because netflowd cannot communicate with cMGD until then. This error is expected.
  7. Repeat this process on the remaining standby nodes, then do the same on the active node.
  8. Check the Zookeeper status of the analytics cluster:
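A healthy three-node ZooKeeper ensemble reports one leader and two followers through its bundled status script (the path in the comment below is an assumption; adjust it to where your install keeps ZooKeeper). A sketch of the check, run here against captured output from all three nodes so it is self-contained:

```shell
# On each analytics node you would run something like (path is an
# assumption for your install):
#   /opt/northstar/thirdparty/zookeeper/bin/zkServer.sh status
# and note the 'Mode:' line. Captured output from three nodes:
SAMPLE='Mode: follower
Mode: leader
Mode: follower'
LEADERS=$(printf '%s\n' "$SAMPLE" | grep -c 'Mode: leader')
echo "leaders: $LEADERS"
```

A healthy ensemble has exactly one leader; more than one, or none, indicates the cluster has not formed correctly.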

NorthStar Application Node Upgrade to NorthStar 6.0.0 or Later

Upgrade the NorthStar application nodes using the following procedure:

  1. Back up any NorthStar files on all nodes.
  2. Determine which nodes are standby versus active using this command:
  3. Start the upgrade procedure on standby nodes first.
  4. Download the official NorthStar 6.0.0 or later RPM.
  5. Install NorthStar using these commands:
  6. Once installation is complete, set the cMGD root password. If you do not, the cMGD-rest service will loop continuously. This requirement results from the addition of the cMGD service in NorthStar 6.0.0.
    1. In net_setup.py, select Maintenance & Troubleshooting (D).
    2. Select Change cMGD Root Password (8).
  7. Redeploy the analytics data collector configuration settings so netflowd can communicate with cMGD.
    1. In net_setup.py, select Analytics Data Collector Setting (G) for external standalone/cluster analytics server setup.
    2. Select Prepare and Deploy SINGLE Data Collector Setting (A), Prepare and Deploy HA Analytics Data Collector Setting (B), or Prepare and Deploy GEO-HA Analytics Data Collector Setting (C), whichever you had set up before the upgrade.
  8. Upgrading a standby node should not trigger a failover. Failover should only occur when the active node is upgraded. At that time, the active node should fail over to an already upgraded standby node.
  9. After all standby nodes are upgraded, upgrade the active node to NorthStar 6.0.0 or later.
  10. Once all nodes are upgraded and one of the standby nodes has assumed the active role and VIP, monitor the cluster using the following procedure:
    1. Check the status of the NorthStar processes on the current active node using this command:
    2. Check the cluster status using this command:
    3. On the node with the VIP, test the failover using this command:
    4. Use the following command to monitor the progress of the failover on the restored node being promoted to active node (with the VIP):
    5. Optionally, add priority to the nodes using the net_setup.py script, Option E (HA Settings). Test the failover process between the three nodes to ensure the priorities are working properly.
    6. Run the following command to find which nodes are currently standby nodes and to ensure that failover is proceeding. The standby nodes should be the two with the higher priority numbers.
    7. Check the NorthStar web UI again for each node while it is the active node to make sure the data is synchronized properly between the three nodes. Check your nodes, links, LSPs, device profiles, and so on.
    8. At this point you should have a fully functioning 6.0.0 (or later) three-node NorthStar application cluster running on the CentOS 7.7 operating system.

Collector Node Upgrade to NorthStar 6.0.0 or Later

Upgrade your collector nodes using the following procedure.

  1. Back up any NorthStar files to an external or network drive.
  2. Download the official NorthStar RPM.
  3. Install NorthStar.
  4. Install the NorthStar Collector Application.
  5. Repeat this process on all remaining collector nodes. When complete, your collector nodes are running NorthStar 6.0.0 or later on CentOS 7.x.