Installing the NorthStar Controller
You can use the procedures described in the following sections whether you are performing a fresh installation of the NorthStar Controller or upgrading from an earlier release, unless you are using NorthStar analytics and are upgrading from a release earlier than NorthStar 4.3. Steps that are not required for an upgrade are noted. Before performing a fresh installation of NorthStar, you must first use the ./uninstall_all.sh script to uninstall any older versions of NorthStar on the device. See Uninstalling the NorthStar Controller Application.
If you are upgrading from a release earlier than NorthStar 4.3 and you are using NorthStar analytics, you must upgrade NorthStar manually using the procedure described in Upgrading from Pre-4.3 NorthStar with Analytics.
If you are upgrading NorthStar from a release earlier than NorthStar 6.0.0, you must redeploy the analytics settings after you upgrade the NorthStar application nodes. This is done from the Analytics Data Collector Configuration Settings menu described in Installing Data Collectors for Analytics. This is to ensure that netflowd can communicate with cMGD (necessary for the NorthStar CLI available starting in NorthStar 6.1.0).
We also recommend that you uninstall any pre-existing older versions of Docker before you install NorthStar. Installing NorthStar will install a current version of Docker.
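Removing an older Docker before the NorthStar installation might look like the following sketch. The package names assume the older stock CentOS 7 Docker packages; adjust them if Docker was installed from another source (for example, docker-ce).

```shell
# Hedged example: remove a pre-existing older Docker so the NorthStar
# installer can install a current version. Package names are assumptions
# based on the stock CentOS 7 repositories.
yum -y remove docker docker-client docker-common docker-engine
```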
The NorthStar software and data are installed in the /opt directory. Be sure to allocate sufficient disk space. See NorthStar Controller System Requirements for our disk space and memory recommendations.
When upgrading NorthStar Controller, ensure that the /tmp directory has enough free space to save the contents of the /opt/pcs/data directory because the /opt/pcs/data directory contents are backed up to /tmp during the upgrade process.
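A quick pre-upgrade check is to compare the size of the data directory with the free space in /tmp, since the /opt/pcs/data contents are staged there during the upgrade:

```shell
# Pre-upgrade sanity check (sketch): /tmp needs at least as much free
# space as /opt/pcs/data currently uses.
du -sh /opt/pcs/data   # space the backup will need
df -h /tmp             # free space available in /tmp
```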
If you are installing NorthStar for a high availability (HA) cluster, ensure that:
You configure each server individually using these instructions before proceeding to HA setup.
The database and rabbitmq passwords are the same for all servers that will be in the cluster.
All server time is synchronized by NTP using the following procedure:
- Install NTP.
  yum -y install ntp
- Specify the preferred NTP server in ntp.conf.
- Verify the configuration.
  ntpq -p
Note All cluster nodes must have the same time zone and system time settings. This is important to prevent inconsistencies in the database storage of SNMP and LDP task collection delta values.
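The NTP steps above might look like the following on one cluster node. The server name ntp1.example.com is a placeholder for your preferred time source, not part of the NorthStar procedure.

```shell
# Hedged example of the NTP procedure; ntp1.example.com is a placeholder.
yum -y install ntp
echo "server ntp1.example.com iburst" >> /etc/ntp.conf
systemctl enable --now ntpd
ntpq -p        # an asterisk marks the peer currently selected for sync
timedatectl    # confirm every node reports the same time zone
```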
To upgrade NorthStar Controller in an HA cluster environment, see Upgrade the NorthStar Controller Software in an HA Environment.
For HA setup after all the servers that will be in the cluster have been configured, see Configuring a NorthStar Cluster for High Availability.
To set up a remote server for NorthStar Planner, see Using a Remote Server for NorthStar Planner.
The high-level order of tasks is shown in Figure 1. Installing and configuring NorthStar comes first. If you want a NorthStar HA cluster, you would set that up next. Finally, if you want to use a remote server for NorthStar Planner, you would install and configure that. The text in italics indicates the topics in the NorthStar Getting Started Guide that cover the steps.

The following sections describe the download, installation, and initial configuration of NorthStar.
The NorthStar software includes a number of third-party packages. To avoid possible conflict, we recommend that you only install these packages as part of the NorthStar Controller RPM bundle installation rather than installing them manually.
Activate Your NorthStar Software
To obtain your serial number certificate and license key, see Obtain Your License Keys and Software for the NorthStar Controller.
Download the Software
The NorthStar Controller software download page is available at https://www.juniper.net/support/downloads/?p=northstar#sw.
- From the Version drop-down list, select the version number.
- Click the NorthStar Application (which includes the RPM bundle and the Ansible playbook) and the NorthStar JunosVM to download them.
If Upgrading, Back Up Your JunosVM Configuration and iptables
If you are doing an upgrade from a previous NorthStar release, and you previously installed NorthStar and Junos VM together, back up your JunosVM configuration before installing the new software. Restoration of the JunosVM configuration is performed automatically after the upgrade is complete as long as you use the net_setup.py utility to save your backup.
- Launch the net_setup.py script:
[root@hostname~]# /opt/northstar/utils/net_setup.py
- Type D and press Enter to select Maintenance and Troubleshooting.
- Type 1 and press Enter to select Backup JunosVM Configuration.
- Confirm that the backup JunosVM configuration is stored at /opt/northstar/data/junosvm/junosvm.conf.
- Save the iptables.
  iptables-save > /opt/northstar/data/iptables.conf
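If you later need the saved rules back (for example, if firewall rules do not survive the upgrade), they can be restored from the same backup file:

```shell
# Restore the iptables rules saved before the upgrade.
iptables-restore < /opt/northstar/data/iptables.conf
```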
If Upgrading from an Earlier Service Pack Installation
You cannot upgrade to NorthStar Release 6.2.1 from an earlier service pack installation; for example, you cannot upgrade to NorthStar Release 6.2.1 from a NorthStar 6.2.0 SP1 or 6.1.0 SP5 installation. To upgrade to NorthStar Release 6.2.1 from an earlier service pack installation, you must either roll back the service pack or run the upgrade_NS_with_patches.sh script to allow installation of a newer version over the service pack.
To upgrade to NorthStar Release 6.2.1, before proceeding with the installation:
- Navigate to the service pack deployment directory. For example:
  [root@host]# cd NorthStar_6.2.0-Patch-All-20210715
- Do one of the following:
  Roll back the service pack by running the batch-uninstall.sh script.
  [root@host]# ./batch-uninstall.sh
  Upgrade the installation by executing upgrade_NS_with_patches.sh.
  [root@host]# ./upgrade_NS_with_patches.sh
  The upgrade_NS_with_patches.sh script removes the entries from the package database so that the NorthStar Release 6.2.1 packages can be installed without any dependency conflict.
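After either option, you can spot-check the package database before proceeding. The grep pattern is an assumption (that the service pack package names contain "northstar"); adjust it to the actual package names in your deployment.

```shell
# Optional sanity check: list NorthStar-related entries remaining in the
# RPM database after rollback or upgrade_NS_with_patches.sh.
rpm -qa | grep -i northstar
```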
Install NorthStar Controller
You can either install the RPM bundle on a physical server or use a two-VM installation method in an OpenStack environment, in which the JunosVM is not bundled with the NorthStar Controller software.
The following optional parameters are available for use with the install.sh command:
The default bridges are external0 and mgmt0. If you have two interfaces such as eth0 and eth1 in the physical setup, you must configure the bridges to those interfaces. However, you can also define any bridge names relevant to your deployment.
We recommend that you configure the bridges before running install.sh.
Bridges are not used with cRPD installations.
For a physical server installation, execute the following commands to install NorthStar Controller:
[root@hostname~]# yum install <rpm-filename>
[root@hostname~]# cd /opt/northstar/northstar_bundle_x.x.x/
[root@hostname~]# ./install.sh
Note yum install works for both upgrade and fresh installation.
For a two-VM installation, execute the following commands to install NorthStar Controller:
[root@hostname~]# yum install <rpm-filename>
[root@hostname~]# cd /opt/northstar/northstar_bundle_x.x.x/
[root@hostname~]# ./install-vm.sh
Note yum install works for both upgrade and fresh installation.
The script offers the opportunity to change the JunosVM IP address from the system default of 172.16.16.2.
Checking current disk space
INFO: Current available disk space for /opt/northstar is 34G. Will proceed with installation.
System currently using 172.16.16.2 as NTAD/junosvm ip
Do you wish to change NTAD/junosvm ip (Y/N)? y
Please specify junosvm ip:
For a cRPD installation, you must have:
CentOS or Red Hat Enterprise Linux 7.x. Earlier versions are not supported.
A Junos cRPD license.
The license is installed during NorthStar installation. Verify that the cRPD license is installed by running the show system license command in the cRPD container.
Note If you require multiple BGP-LS peering on different subnets for different AS domains at the same time, you should choose the default JunosVM approach. This configuration for cRPD is not supported.
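From the host, the license check can be run without attaching to the container interactively. The container name crpd is an assumption; take the actual name from docker ps.

```shell
# Hedged example: verify the cRPD license from the host. "crpd" is an
# assumed container name; check "docker ps" for the real one.
docker exec -it crpd cli show system license
```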
For a cRPD installation, execute the following commands to install NorthStar Controller:
[root@hostname~]# yum install <rpm-filename>
[root@hostname~]# cd /opt/northstar/northstar_bundle_x.x.x/
[root@hostname~]# ./install.sh --crpd
Note yum install works for both upgrade and fresh installation.
Configure Support for Different JunosVM Versions
This procedure is not applicable to cRPD installations.
If you are using a two-VM installation, in which the JunosVM is not bundled with the NorthStar Controller, you might need to edit the northstar.cfg file to make the NorthStar Controller compatible with the external VM by changing the version of NTAD used. For a NorthStar cluster configuration, you must change the NTAD version in the northstar.cfg file for every node in the cluster. NTAD is a 32-bit process which requires that the JunosVM device running NTAD be configured accordingly. You can copy the default JunosVM configuration from what is provided with the NorthStar release (for use in a nested installation). You must at least ensure that the force-32-bit flag is set:
[northstar@jvm1]# set system processes routing force-32-bit
To change the NTAD version in the northstar.cfg file:
- SSH to the NorthStar application server.
- Using a text editor such as vi, edit the ntad_version statement in the /opt/northstar/data/northstar.cfg file to the appropriate NTAD version according to Table 1:
  [root@ns]# vi /opt/northstar/data/northstar.cfg
  ...
  # NTAD versions(1=No SR; 2=SR, no local addr; 3=V2+local addr 18.2; *4=V3+BGP peer SID 18.3R2, 18.4R2; 5=V4+OSPF SR 19.1+)
  ntad_version=version-number
Table 1: NTAD Versions by Junos OS Release
NTAD Version
Junos OS Release
Change
1
Earlier than Release 17.2
Initial version
2
17.2
Segment routing
3
18.2
NTAD version 2 + local address
“Local address” refers to multiple secondary IP addresses on interfaces. This is especially relevant in certain use cases such as loopback interface for VPN-LSP binding.
4
18.3R2, 18.4R2
NTAD version 3 + BGP peer SID
5
19.1 and later
NTAD version 4 + OSPF SR
- Manually restart the toposerver process:
[root@ns]# supervisorctl restart northstar:toposerver
- Log into the Junos VM and restart NTAD:
[northstar@jvm1]# restart network-topology-export
- Set up the SSH key for the external VM by selecting option H from the Setup Main Menu when you run the net_setup.py script, and entering the requested information.
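The mapping in Table 1 can be sketched as a small helper for picking the ntad_version value. This is an illustration only, with a simplifying assumption that the major.minor part of the Junos OS release string is sufficient (the table lists 18.3R2 and 18.4R2 specifically for version 4).

```python
def ntad_version(junos_release: str) -> int:
    """Return the NTAD version to set in northstar.cfg for a Junos OS
    release, per Table 1. Parses only the major.minor part of the
    release string (a simplification)."""
    major, minor = (int(x) for x in junos_release.split("R")[0].split(".")[:2])
    if (major, minor) < (17, 2):
        return 1   # initial version
    if (major, minor) < (18, 2):
        return 2   # + segment routing
    if (major, minor) < (18, 3):
        return 3   # + local (secondary) interface addresses
    if (major, minor) < (19, 1):
        return 4   # + BGP peer SID (18.3R2, 18.4R2)
    return 5       # + OSPF SR (19.1 and later)
```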
Create Passwords
This step is not required if you are doing an upgrade rather than a fresh installation.
When prompted, enter new database/rabbitmq, web UI Admin, and cMGD root passwords.
- Create an initial database/rabbitmq password by typing the password at the following prompts:
  Please enter new DB and MQ password (at least one digit, one lowercase, one uppercase and no space):
  Please confirm new DB and MQ password:
- Create an initial Admin password for the web UI by typing the password at the following prompts:
  Please enter new UI Admin password:
  Please confirm new UI Admin password:
- Create a cMGD root password (for access to the NorthStar CLI) by typing the password at the following prompts:
  Please enter new cMGD root password:
  Please confirm new cMGD root password:
Enable the NorthStar License
This step is not required if you are doing an upgrade rather than a fresh installation.
You must enable the NorthStar license as follows, unless you are performing an upgrade and you have an activated license.
- Copy or move the license file.
[root@northstar]# cp /path-to-license-file/npatpw /opt/pcs/db/sys/npatpw
- Set the license file owner to the PCS user.
[root@northstar]# chown pcs:pcs /opt/pcs/db/sys/npatpw
- Wait a few minutes and then check the status of the NorthStar Controller processes until they are all up and running.
  [root@northstar]# supervisorctl status
Adjust Firewall Policies
The iptables default rules could interfere with NorthStar-related traffic. If necessary, adjust the firewall policies.
Refer to NorthStar Controller System Requirements for a list of ports that must be allowed by iptables and firewalls.
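On systems using firewalld, allowing a port might look like the following. The ports shown here (8443 for the web UI, 4189 for PCEP) are illustrative assumptions; take the authoritative list from NorthStar Controller System Requirements.

```shell
# Hypothetical firewalld example; the port numbers are illustrative,
# not the complete NorthStar list.
firewall-cmd --permanent --add-port=8443/tcp   # web UI (assumed)
firewall-cmd --permanent --add-port=4189/tcp   # PCEP
firewall-cmd --reload
```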
Launch the Net Setup Utility
This step is not required if you are doing an upgrade rather than a fresh installation.
For installations that include a remote Planner server, the Net Setup utility is not used. Instead, the install-remote_planner.sh installation script launches a different setup utility, called setup_remote_planner.py. Skip ahead to Using a Remote Server for NorthStar Planner to proceed.
Launch the Net Setup utility to perform host server configuration.
[root@northstar]# /opt/northstar/utils/net_setup.py
The main menu that appears is slightly different depending on whether your installation uses Junos VM or is a cRPD installation.
For Junos VM installations (installation on a physical server or a two-server installation), the main menu looks like this:
Main Menu:
.............................................
A.) Host Setting
.............................................
B.) JunosVM Setting
.............................................
C.) Check Network Setting
.............................................
D.) Maintenance & Troubleshooting
.............................................
E.) HA Setting
.............................................
F.) Collect Trace/Log
.............................................
G.) Analytics Data Collector Setting (External standalone/cluster analytics server)
.............................................
H.) Setup SSH Key for external JunosVM setup
.............................................
I.) Internal Analytics Setting (HA)
.............................................
X.) Exit
.............................................
Please select a letter to execute.
For cRPD installations, the main menu looks like this:
Main Menu:
.............................................
A.) Host Setting
.............................................
B.) Junos CRPD Setting
.............................................
C.) Check Network Setting
.............................................
D.) Maintenance & Troubleshooting
.............................................
E.) HA Setting
.............................................
F.) Collect Trace/Log
.............................................
G.) Analytics Data Collector Setting (External standalone/cluster analytics server)
.............................................
I.) Internal Analytics Setting (HA)
.............................................
X.) Exit
.............................................
Please select a letter to execute.
Notice that option B is specific to cRPD and option H is not available as it is not relevant to cRPD.
Configure the Host Server
This step is not required if you are doing an upgrade rather than a fresh installation.
- From the NorthStar Controller setup Main Menu, type A and press Enter to display
the Host Configuration menu:
Host Configuration:
********************************************************
In order to commit your changes you must select option Z
********************************************************
.............................................
1. ) Hostname : northstar
2. ) Host default gateway :
3A.) Host Interface #1 (external_interface)
     Name : external0
     IPv4 :
     Netmask :
     Type (network/management) : network
3B.) Delete Host Interface #1 (external_interface) data
4A.) Host Interface #2 (mgmt_interface)
     Name : mgmt0
     IPv4 :
     Netmask :
     Type (network/management) : management
4B.) Delete Host Interface #2 (mgmt_interface) data
5A.) Host Interface #3
     Name :
     IPv4 :
     Netmask :
     Type (network/management) : network
5B.) Delete Host Interface #3 data
6A.) Host Interface #4
     Name :
     IPv4 :
     Netmask :
     Type (network/management) : network
6B.) Delete Host Interface #4 data
7A.) Host Interface #5
     Name :
     IPv4 :
     Netmask :
     Type (network/management) : network
7B.) Delete Host Interface #5 data
8. ) Show Host current static route
9. ) Show Host candidate static route
A. ) Add Host candidate static route
B. ) Remove Host candidate static route
.............................................
X. ) Host current setting
Y. ) Apply Host static route only
Z. ) Apply Host setting and static route
.............................................
.............................................
Please select a number to modify. [<CR>=return to main menu]:
To interact with this menu, type the number or letter corresponding to the item you want to add or change, and press Enter.
- Type 1 and press Enter to configure the hostname. The existing hostname is displayed. Type the new hostname and press Enter.
  Please select a number to modify. [<CR>=return to main menu]: 1
  current host hostname : northstar
  new host hostname : node1
- Type 2 and press Enter to configure the host default gateway. The existing host default gateway IP address (if any) is displayed. Type the new gateway IP address and press Enter.
  Please select a number to modify. [<CR>=return to main menu]: 2
  current host default_gateway :
  new host default_gateway : 10.25.152.1
- Type 3A and press Enter to configure the host interface #1 (external_interface). The first item of existing host interface #1 information is displayed. Type each item of new information (interface name, IPv4 address, netmask, type), and press Enter to proceed to the next.
  Note The designation of network or management for the type of interface is a label only, for your convenience. NorthStar Controller does not use this information.
  Please select a number to modify. [<CR>=return to main menu]: 3A
  current host interface1 name : external0
  new host interface1 name : external0
  current host interface1 ipv4 :
  new host interface1 ipv4 : 10.25.153.6
  current host interface1 netmask :
  new host interface1 netmask : 255.255.254.0
  current host interface1 type (network/management) : network
  new host interface1 type (network/management) : network
- Type A and press Enter to add a host candidate static route. The existing route, if any, is displayed. Type the new route and press Enter.
  Please select a number to modify. [<CR>=return to main menu]: A
  Candidate static route:
  new static route (format: x.x.x.x/xy via a.b.c.d dev <interface_name>): 10.25.158.0/24 via 10.25.152.2 dev external0
- If you have more than one static route, type A and press Enter again to add each additional route.
  Please select a number to modify. [<CR>=return to main menu]: A
  Candidate static route:
  [0] 10.25.158.0/24 via 10.25.152.2 dev external0
  new static route (format: x.x.x.x/xy via a.b.c.d dev <interface_name>): 10.25.159.0/24 via 10.25.152.2 dev external0
- Type Z and press Enter to save your changes to the host configuration.
Note If the host has been configured using the CLI, the Z option is not required.
The following example shows saving the host configuration.
Host Configuration:
********************************************************
In order to commit your changes you must select option Z
********************************************************
.............................................
1. ) Hostname : node1
2. ) Host default gateway : 10.25.152.1
3A.) Host Interface #1 (external_interface)
     Name : external0
     IPv4 : 10.25.153.6
     Netmask : 255.255.254.0
     Type (network/management) : network
3B.) Delete Host Interface #1 (external_interface) data
4A.) Host Interface #2 (mgmt_interface)
     Name : mgmt0
     IPv4 :
     Netmask :
     Type (network/management) : management
4B.) Delete Host Interface #2 (mgmt_interface) data
5A.) Host Interface #3
     Name :
     IPv4 :
     Netmask :
     Type (network/management) : network
5B.) Delete Host Interface #3 data
6A.) Host Interface #4
     Name :
     IPv4 :
     Netmask :
     Type (network/management) : network
6B.) Delete Host Interface #4 data
7A.) Host Interface #5
     Name :
     IPv4 :
     Netmask :
     Type (network/management) : network
7B.) Delete Host Interface #5 data
8. ) Show Host current static route
9. ) Show Host candidate static route
A. ) Add Host candidate static route
B. ) Remove Host candidate static route
.............................................
X.) Host current setting
Y.) Apply Host static route only
Z.) Apply Host setting and static route
.............................................
.............................................
Please select a number to modify. [<CR>=return to main menu]: z
Are you sure you want to setup host and static route configuration?
This option will restart network services/interfaces (Y/N) y
Current host/PCS network configuration:
host current interface external0 IP: 10.25.153.6/255.255.254.0
host current interface internal0 IP: 172.16.16.1/255.255.255.0
host current default gateway: 10.25.152.1
Current host static route:
[0] 10.25.158.0/24 via 10.25.152.2 dev external0
[1] 10.25.159.0/24 via 10.25.152.2 dev external0
Applying host configuration: /opt/northstar/data/net_setup.json
Please wait ...
Restart Networking ...
Current host static route:
[0] 10.25.158.0/24 via 10.25.152.2 dev external0
[1] 10.25.159.0/24 via 10.25.152.2 dev external0
Deleting current static routes ...
Applying candidate static routes
Static route has been added successfully for cmd 'ip route add 10.25.158.0/24 via 10.25.152.2'
Static route has been added successfully for cmd 'ip route add 10.25.159.0/24 via 10.25.152.2'
Host has been configured successfully
- Press Enter to return to the Main Menu.
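The applied host settings can also be spot-checked from the shell. The values in the comments match the example configuration above.

```shell
# Optional verification after applying option Z.
hostname             # should report node1 in the example above
ip addr show external0   # interface address, e.g. 10.25.153.6/23
ip route show        # should list the static routes via 10.25.152.2
```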
Configure the JunosVM and its Interfaces
This section applies to physical server or two-VM installations that use Junos VM. If you are installing NorthStar using cRPD, skip this section and proceed to Configure Junos cRPD Settings.
This step is not required if you are doing an upgrade rather than a fresh installation.
From the Setup Main Menu, configure the JunosVM and its interfaces. Ping the JunosVM to ensure that it is up before attempting to configure it. The net_setup script uses IP 172.16.16.2 to access the JunosVM using the login name northstar.
- From the Main Menu, type B and press Enter to display the JunosVM Configuration menu:
  Junos VM Configuration Settings:
  ********************************************************
  In order to commit your changes you must select option Z
  ********************************************************
  ..................................................
  1. ) JunosVM hostname : northstar_junosvm
  2. ) JunosVM default gateway :
  3. ) BGP AS number : 100
  4A.) JunosVM Interface #1 (external_interface)
       Name : em1
       IPv4 :
       Netmask :
       Type(network/management) : network
       Bridge name : external0
  4B.) Delete JunosVM Interface #1 (external_interface) data
  5A.) JunosVM Interface #2 (mgmt_interface)
       Name : em2
       IPv4 :
       Netmask :
       Type(network/management) : management
       Bridge name : mgmt0
  5B.) Delete JunosVM Interface #2 (mgmt_interface) data
  6A.) JunosVM Interface #3
       Name :
       IPv4 :
       Netmask :
       Type(network/management) : network
       Bridge name :
  6B.) Delete JunosVM Interface #3 data
  7A.) JunosVM Interface #4
       Name :
       IPv4 :
       Netmask :
       Type(network/management) : network
       Bridge name :
  7B.) Delete JunosVM Interface #4 data
  8A.) JunosVM Interface #5
       Name :
       IPv4 :
       Netmask :
       Type(network/management) : network
       Bridge name :
  8B.) Delete JunosVM Interface #5 data
  9. ) Show JunosVM current static route
  A. ) Show JunosVM candidate static route
  B. ) Add JunosVM candidate static route
  C. ) Remove JunosVM candidate static route
  ..................................................
  X. ) JunosVM current setting
  Y. ) Apply JunosVM static route only
  Z. ) Apply JunosVM Setting and static route
  ..................................................
  Please select a number to modify. [<CR>=return to main menu]:
To interact with this menu, type the number or letter corresponding to the item you want to add or change, and press Enter.
- Type 1 and press Enter to configure the JunosVM hostname. The existing JunosVM hostname is displayed. Type the new hostname and press Enter.
  Please select a number to modify. [<CR>=return to main menu]: 1
  current junosvm hostname : northstar_junosvm
  new junosvm hostname : junosvm_node1
- Type 2 and press Enter to configure the JunosVM default gateway. The existing JunosVM default gateway IP address is displayed. Type the new IP address and press Enter.
  Please select a number to modify. [<CR>=return to main menu]: 2
  current junosvm default_gateway :
  new junosvm default_gateway : 10.25.152.1
- Type 3 and press Enter to configure the JunosVM BGP AS number. The existing JunosVM BGP AS number is displayed. Type the new BGP AS number and press Enter.
  Please select a number to modify. [<CR>=return to main menu]: 3
  current junosvm AS Number : 100
  new junosvm AS Number: 100
- Type 4A and press Enter to configure the JunosVM interface #1 (external_interface). The first item of existing JunosVM interface #1 information is displayed. Type each item of new information (interface name, IPv4 address, netmask, type), and press Enter to proceed to the next.
  Note The designation of network or management for the type of interface is a label only, for your convenience. NorthStar Controller does not use this information.
  Please select a number to modify. [<CR>=return to main menu]: 4A
  current junosvm interface1 name : em1
  new junosvm interface1 name: em1
  current junosvm interface1 ipv4 :
  new junosvm interface1 ipv4 : 10.25.153.144
  current junosvm interface1 netmask :
  new junosvm interface1 netmask : 255.255.254.0
  current junosvm interface1 type (network/management) : network
  new junosvm interface1 type (network/management) : network
  current junosvm interface1 bridge name : external0
  new junosvm interface1 bridge name : external0
- Type B and press Enter to add a JunosVM candidate static route. The existing JunosVM candidate static route (if any) is displayed. Type the new candidate static route and press Enter.
  Please select a number to modify. [<CR>=return to main menu]: B
  Candidate static route:
  new static route (format: x.x.x.x/xy via a.b.c.d): 10.25.158.0/24 via 10.25.152.2
- If you have more than one static route, type B and press Enter again to add each additional route.
  Please select a number to modify. [<CR>=return to main menu]: B
  Candidate static route:
  [0] 10.25.158.0/24 via 10.25.152.2 dev any
  new static route (format: x.x.x.x/xy via a.b.c.d): 10.25.159.0/24 via 10.25.152.2
Note If you are adding a route and not making any other additional configuration changes, you can use option Y on the menu to apply the JunosVM static route only, without restarting the NorthStar services.
- Type Z and press Enter to save your changes to the JunosVM configuration.
The following example shows saving the JunosVM configuration.
Junos VM Configuration Settings:
********************************************************
In order to commit your changes you must select option Z
********************************************************
..................................................
1. ) JunosVM hostname : northstar_junosvm
2. ) JunosVM default gateway :
3. ) BGP AS number : 100
4A.) JunosVM Interface #1 (external_interface)
     Name : em1
     IPv4 :
     Netmask :
     Type(network/management) : network
     Bridge name : external0
4B.) Delete JunosVM Interface #1 (external_interface) data
5A.) JunosVM Interface #2 (mgmt_interface)
     Name : em2
     IPv4 :
     Netmask :
     Type(network/management) : management
     Bridge name : mgmt0
5B.) Delete JunosVM Interface #2 (mgmt_interface) data
6A.) JunosVM Interface #3
     Name :
     IPv4 :
     Netmask :
     Type(network/management) : network
     Bridge name :
6B.) Delete JunosVM Interface #3 data
7A.) JunosVM Interface #4
     Name :
     IPv4 :
     Netmask :
     Type(network/management) : network
     Bridge name :
7B.) Delete JunosVM Interface #4 data
8A.) JunosVM Interface #5
     Name :
     IPv4 :
     Netmask :
     Type(network/management) : network
     Bridge name :
8B.) Delete JunosVM Interface #5 data
9. ) Show JunosVM current static route
A. ) Show JunosVM candidate static route
B. ) Add JunosVM candidate static route
C. ) Remove JunosVM candidate static route
..................................................
X.) JunosVM current setting
Y.) Apply JunosVM static route only
Z.) Apply JunosVM Setting and static route
..................................................
Please select a number to modify. [<CR>=return to main menu]: z
Are you sure you want to setup junosvm and static route configuration?
(Y/N) y
Current junosvm network configuration:
junosvm current interface em0 IP: 10.16.16.2/255.255.255.0
junosvm current interface em1 IP: 10.25.153.144/255.255.254.0
junosvm current default gateway: 10.25.152.1
junosvm current asn: 100
Current junosvm static route:
[0] 10.25.158.0/24 via 10.25.152.2 dev any
[1] 10.25.159.0/24 via 10.25.152.2 dev any
Applying junosvm configuration ...
Please wait ...
Commit Success.
JunosVM has been configured successfully.
Please wait ...
Backup Current JunosVM config ...
Connecting to JunosVM to backup the config ...
Please check the result at /opt/northstar/data/junosvm/junosvm.conf
JunosVm configuration has been successfully backed up
- Press Enter to return to the Main Menu.
Configure Junos cRPD Settings
From the Setup Main Menu, configure the Junos cRPD settings. This section applies only to cRPD installations (not to installations that use Junos VM).
- From the Main Menu, type B and press Enter to display the Junos cRPD Configuration menu:
  Junos CRPD Configuration Settings:
  ********************************************************
  In order to commit your changes you must select option Z
  ********************************************************
  ..................................................
  1. ) BGP AS number : 65412
  2. ) BGP Monitor IPv4 Address : 172.25.153.154
  3. ) BGP Monitor Port : 10001
  ..................................................
  X. ) Junos CRPD current setting
  Z. ) Apply Junos CRPD Setting
  ..................................................
  Please select a number to modify. [<CR>=return to main menu]:
To interact with this menu, type the number or letter corresponding to the item you want to add or change, and press Enter. Notice that option Y in the lower section is omitted from this menu as it is not relevant to cRPD.
- Type 1 and press Enter to configure the BGP AS number. The existing AS number is displayed. Type the new number and press Enter.
  Please select a number to modify. [<CR>=return to main menu]: 1
  current BGP AS Number : 65412
  new BGP AS Number : 64525
- Type 2 and press Enter if you need to change the default BGP Monitor IPv4 Address. By default, BMP monitor runs on the same host as cRPD, and the address is configured based on the local address of the host. We therefore recommend not changing this address.
- Type 3 and press Enter if you need to change the default BGP Monitor Port. We recommend not changing this port from the default of 10001. The BMP monitor listens on port 10001 for incoming BMP connections from the network. The connection is opened from cRPD, which runs on the same host as the BMP monitor.
- Type Z and press Enter to save your configuration changes.
The following example shows saving the Junos cRPD configuration.
Junos CRPD Configuration Settings:
********************************************************
In order to commit your changes you must select option Z
********************************************************
..................................................
1. ) BGP AS number : 64525
2. ) BGP Monitor IPv4 Address : 172.17.153.154
3. ) BGP Monitor Port : 10001
..................................................
X. ) Junos CRPD current setting
Z. ) Apply Junos CRPD Setting
..................................................
Please select a number to modify. [<CR>=return to main menu]: z
Are you sure you want to setup junos crpd configuration? (Y/N) y
Current junos crpd configuration:
junos crpd current bgp asn: 64525
junos crpd current bmp_host: 172.17.153.154
junos crpd bgp_port: 10001
Please wait ...
Commit Success.
Junos CRPD has been configured successfully.
Set Up the SSH Key for External JunosVM
This section only applies to two-VM installations. Skip this section if you are installing NorthStar using cRPD.
This step is not required if you are doing an upgrade rather than a fresh installation.
For a two-VM installation, you must set up the SSH key for the external JunosVM.
- From the Main Menu, type H and press Enter.
Please select a number to modify. [<CR>=return to main menu]: H
Follow the prompts to provide your JunosVM username and router login class (super-user, for example). The script verifies your login credentials, downloads the JunosVM SSH key file, and returns you to the main menu.
For example:
Main Menu:
.............................................
A.) Host Setting
.............................................
B.) JunosVM Setting
.............................................
C.) Check Network Setting
.............................................
D.) Maintenance & Troubleshooting
.............................................
E.) HA Setting
.............................................
F.) Collect Trace/Log
.............................................
G.) Analytics Data Collector Setting (External standalone/cluster analytics server)
.............................................
H.) Setup SSH Key for external JunosVM setup
.............................................
I.) Internal Analytics Setting (HA)
.............................................
X.) Exit
.............................................
Please select a letter to execute. H
Please provide JunosVM login: admin
2 VMs Setup is detected
Script will create user: northstar.
Please provide user northstar router login class e.g super-user, operator: super-user
The authenticity of host '10.49.118.181 (10.49.118.181)' can't be established.
RSA key fingerprint is xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx.
Are you sure you want to continue connecting (yes/no)? yes
Applying user northstar login configuration
Downloading JunosVM ssh key file.
Login to JunosVM
Checking md5 sum.
Login to JunosVM
SSH key has been sucessfully updated
Main Menu:
.............................................
A.) Host Setting
.............................................
B.) JunosVM Setting
.............................................
C.) Check Network Setting
.............................................
D.) Maintenance & Troubleshooting
.............................................
E.) HA Setting
.............................................
F.) Collect Trace/Log
.............................................
G.) Analytics Data Collector Setting (External standalone/cluster analytics server)
.............................................
H.) Setup SSH Key for external JunosVM setup
.............................................
I.) Internal Analytics Setting (HA)
.............................................
X.) Exit
.............................................
Please select a letter to execute.
Upgrade the NorthStar Controller Software in an HA Environment
There are some special considerations for upgrading NorthStar Controller when you have an HA cluster configured. Use the following procedure:
- Before installing the new release of the NorthStar software, ensure that all individual cluster members are working. On each node, run the supervisorctl status command:

[root@node-1]# supervisorctl status
For an active node, all processes should be listed as RUNNING as shown in this example:
This is just an example. The actual list of processes varies according to the version of NorthStar on the node, your deployment setup, and the optional features installed.
[root@node-1 ~]# supervisorctl status
bmp:bmpMonitor                   RUNNING   pid 2957, uptime 0:58:02
collector:worker1                RUNNING   pid 19921, uptime 0:01:42
collector:worker2                RUNNING   pid 19923, uptime 0:01:42
collector:worker3                RUNNING   pid 19922, uptime 0:01:42
collector:worker4                RUNNING   pid 19924, uptime 0:01:42
collector_main:beat_scheduler    RUNNING   pid 19925, uptime 0:01:42
collector_main:es_publisher      RUNNING   pid 19771, uptime 0:01:53
collector_main:task_scheduler    RUNNING   pid 19772, uptime 0:01:53
config:cmgd                      RUNNING   pid 22087, uptime 0:01:53
config:cmgd-rest                 RUNNING   pid 22088, uptime 0:01:53
docker:dockerd                   RUNNING   pid 4368, uptime 0:57:34
epe:epeplanner                   RUNNING   pid 9047, uptime 0:50:34
infra:cassandra                  RUNNING   pid 2971, uptime 0:58:02
infra:ha_agent                   RUNNING   pid 9009, uptime 0:50:45
infra:healthmonitor              RUNNING   pid 9172, uptime 0:49:40
infra:license_monitor            RUNNING   pid 2968, uptime 0:58:02
infra:prunedb                    RUNNING   pid 19770, uptime 0:01:53
infra:rabbitmq                   RUNNING   pid 7712, uptime 0:52:03
infra:redis_server               RUNNING   pid 2970, uptime 0:58:02
infra:zookeeper                  RUNNING   pid 2965, uptime 0:58:02
ipe:ipe_app                      RUNNING   pid 2956, uptime 0:58:02
listener1:listener1_00           RUNNING   pid 9212, uptime 0:49:29
netconf:netconfd_00              RUNNING   pid 19768, uptime 0:01:53
northstar:anycastGrouper         RUNNING   pid 19762, uptime 0:01:53
northstar:configServer           RUNNING   pid 19767, uptime 0:01:53
northstar:mladapter              RUNNING   pid 19765, uptime 0:01:53
northstar:npat                   RUNNING   pid 19766, uptime 0:01:53
northstar:pceserver              RUNNING   pid 19441, uptime 0:02:59
northstar:privatet1vproxy        RUNNING   pid 19432, uptime 0:02:59
northstar:prpdclient             RUNNING   pid 19763, uptime 0:01:53
northstar:scheduler              RUNNING   pid 19764, uptime 0:01:53
northstar:topologyfilter         RUNNING   pid 19760, uptime 0:01:53
northstar:toposerver             RUNNING   pid 19762, uptime 0:01:53
northstar_pcs:PCServer           RUNNING   pid 19487, uptime 0:02:49
northstar_pcs:PCViewer           RUNNING   pid 19486, uptime 0:02:49
northstar_pcs:SRPCServer         RUNNING   pid 19490, uptime 0:02:49
web:app                          RUNNING   pid 19273, uptime 0:03:18
web:gui                          RUNNING   pid 19280, uptime 0:03:18
web:notification                 RUNNING   pid 19272, uptime 0:03:18
web:proxy                        RUNNING   pid 19275, uptime 0:03:18
web:restconf                     RUNNING   pid 19271, uptime 0:03:18
web:resthandler                  RUNNING   pid 19275, uptime 0:03:18
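On an active node, where every process should be RUNNING, scanning the full list by eye is error prone. A small filter such as the following can surface any exceptions. This is an illustrative sketch, not a NorthStar tool; not_running is a hypothetical helper that assumes standard awk is available:

```shell
# not_running: read supervisorctl status output on stdin and print any
# process whose state (the second column) is not RUNNING.
# Exits 0 if every process is RUNNING, 1 otherwise.
not_running() {
    awk '$2 != "RUNNING" { print; bad = 1 } END { exit bad }'
}

# Example use on a node (run as root):
#   supervisorctl status | not_running || echo "WARNING: some processes are not RUNNING"
```

The same filter works unchanged on any node; on a standby node it simply lists the processes that are expected to be STOPPED.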
For a standby node, processes beginning with “northstar” and “northstar_pcs” should be listed as STOPPED. Also, if you have analytics installed, some of the processes beginning with “collector” are STOPPED. Other processes, including those needed to preserve connectivity, remain RUNNING. An example is shown here.
Note This is just an example; the actual list of processes varies according to the version of NorthStar on the node, your deployment setup, and the optional features installed.
[root@node-1 ~]# supervisorctl status
bmp:bmpMonitor                   RUNNING   pid 2957, uptime 0:58:02
collector:worker1                RUNNING   pid 19921, uptime 0:01:42
collector:worker2                RUNNING   pid 19923, uptime 0:01:42
collector:worker3                RUNNING   pid 19922, uptime 0:01:42
collector:worker4                RUNNING   pid 19924, uptime 0:01:42
collector_main:beat_scheduler    STOPPED   Dec 24, 05:12 AM
collector_main:es_publisher      STOPPED   Dec 24, 05:12 AM
collector_main:task_scheduler    STOPPED   Dec 24, 05:12 AM
config:cmgd                      STOPPED   Dec 24, 05:12 AM
config:cmgd-rest                 STOPPED   Dec 24, 05:12 AM
docker:dockerd                   RUNNING   pid 4368, uptime 0:57:34
epe:epeplanner                   RUNNING   pid 9047, uptime 0:50:34
infra:cassandra                  RUNNING   pid 2971, uptime 0:58:02
infra:ha_agent                   RUNNING   pid 9009, uptime 0:50:45
infra:healthmonitor              RUNNING   pid 9172, uptime 0:49:40
infra:license_monitor            RUNNING   pid 2968, uptime 0:58:02
infra:prunedb                    STOPPED   Dec 24, 05:12 AM
infra:rabbitmq                   RUNNING   pid 7712, uptime 0:52:03
infra:redis_server               RUNNING   pid 2970, uptime 0:58:02
infra:zookeeper                  RUNNING   pid 2965, uptime 0:58:02
ipe:ipe_app                      STOPPED   Dec 24, 05:12 AM
listener1:listener1_00           RUNNING   pid 9212, uptime 0:49:29
netconf:netconfd_00              RUNNING   pid 19768, uptime 0:01:53
northstar:anycastGrouper         STOPPED   Dec 24, 05:12 AM
northstar:configServer           STOPPED   Dec 24, 05:12 AM
northstar:mladapter              STOPPED   Dec 24, 05:12 AM
northstar:npat                   STOPPED   Dec 24, 05:12 AM
northstar:pceserver              STOPPED   Dec 24, 05:12 AM
northstar:privatet1vproxy        STOPPED   Dec 24, 05:12 AM
northstar:prpdclient             STOPPED   Dec 24, 05:12 AM
northstar:scheduler              STOPPED   Dec 24, 05:12 AM
northstar:topologyfilter         STOPPED   Dec 24, 05:12 AM
northstar:toposerver             STOPPED   Dec 24, 05:12 AM
northstar_pcs:PCServer           STOPPED   Dec 24, 05:12 AM
northstar_pcs:PCViewer           STOPPED   Dec 24, 05:12 AM
northstar_pcs:SRPCServer         STOPPED   Dec 24, 05:12 AM
web:app                          STOPPED   Dec 24, 05:12 AM
web:gui                          STOPPED   Dec 24, 05:12 AM
web:notification                 STOPPED   Dec 24, 05:12 AM
web:proxy                        STOPPED   Dec 24, 05:12 AM
web:restconf                     STOPPED   Dec 24, 05:12 AM
web:resthandler                  STOPPED   Dec 24, 05:12 AM
- Ensure that the SSH keys for HA are set up. To test this, try to SSH from each node to every other node in the cluster using user “root”. If the SSH keys for HA are set up, you will not be prompted for a password. If you are prompted for a password, see Configuring a NorthStar Cluster for High Availability for the procedure to set up the SSH keys.
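This check can be made non-interactive by running ssh with BatchMode, which fails immediately instead of prompting for a password. The following is an illustrative sketch, not a NorthStar script; check_node is a hypothetical helper, and the SSH variable exists only so the command can be stubbed out in testing:

```shell
# check_node: verify passwordless (key-based) root SSH to one node.
# BatchMode=yes makes ssh fail instead of prompting for a password,
# so a failure here means the HA SSH keys are not set up for that node.
# ${SSH:-ssh} lets a test substitute a stub for the real ssh binary.
check_node() {
    ${SSH:-ssh} -o BatchMode=yes -o ConnectTimeout=5 "root@$1" true \
        && echo "$1: key-based login OK" \
        || echo "$1: key-based login FAILED"
}

# Run this from every node against every other node in the cluster, e.g.:
#   for n in 10.0.0.1 10.0.0.2 10.0.0.3; do check_node "$n"; done
```

Any node reported as FAILED still needs its keys set up as described in Configuring a NorthStar Cluster for High Availability.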
- On one of the standby nodes, install the new release of the NorthStar software according to the instructions at the beginning of this topic. Before proceeding to the other standby node(s), check the processes on this node by running the supervisorctl status command:

[root@node-1]# supervisorctl status
Since the node comes up as a standby node, some processes will be STOPPED, but the “infra” group of processes, the “listener1” process, the “collector:worker” group of processes (if you have them), and the “junos:junosvm” process (if you have it) should be RUNNING. Wait until those processes are running before proceeding to the next node.
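Rather than re-running supervisorctl status by hand until the required groups come up, you can poll with a small awk filter over its output. This is an illustrative sketch, not part of NorthStar; group_running is a hypothetical helper:

```shell
# group_running: read supervisorctl status output on stdin and exit 0
# only if every process in the named group (e.g. "infra") is RUNNING.
group_running() {
    awk -v g="$1" '$1 ~ "^" g ":" && $2 != "RUNNING" { bad = 1 } END { exit bad }'
}

# Example: block until the infra group is fully up on this node:
#   until supervisorctl status | group_running infra; do sleep 10; done
```

Repeat the same check for the other groups that must be RUNNING on a standby node, such as listener1 and collector.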
- Repeat this process on each of the remaining standby nodes, one by one, until all standby nodes have been upgraded.
- On the active node, restart the ha_agent process to trigger a switchover to a standby node.

[root@node-2]# supervisorctl restart infra:ha_agent
One of the standby nodes becomes active and the previously active node switches to standby mode.
- On the previously active node, install the new release of the NorthStar software according to the instructions at the beginning of this section. Check the processes in this node using supervisorctl status; their status (RUNNING or STOPPED) should be consistent with the node’s new standby role.
The newly upgraded software automatically inherits the net_setup settings, HA configurations, and all credentials from the previous installation. Therefore, it is not necessary to re-run net_setup unless you want to change settings, HA configurations, or password credentials.