Configuring a NorthStar Cluster for High Availability
Before You Begin
Configuring a NorthStar application cluster for high availability (HA) is an optional process. This topic describes the steps for configuring, testing, deploying, and maintaining an HA cluster. If you are not planning to use the NorthStar application HA feature, you can skip this topic.
See High Availability Overview in the NorthStar Controller User Guide for overview information about HA. For information about analytics HA, see Installing Data Collectors for Analytics.
Throughout your use of NorthStar Controller HA, be aware that you must replicate any changes you make to northstar.cfg to all cluster nodes so the configuration is uniform across the cluster. NorthStar CLI configuration changes, on the other hand, are replicated across the cluster nodes automatically.
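For example, after editing northstar.cfg on the node where you made the change, you might push the updated file to the other cluster nodes with scp. The path and node addresses shown here are illustrative; use the actual location of northstar.cfg in your installation:
[root@node-1 ~]# scp /opt/northstar/data/northstar.cfg root@node-2-ip:/opt/northstar/data/northstar.cfg
[root@node-1 ~]# scp /opt/northstar/data/northstar.cfg root@node-3-ip:/opt/northstar/data/northstar.cfg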
Download the NorthStar Controller and install it on each server that will be part of the cluster. Each server must be completely enabled as a single node implementation before it can become part of a cluster.
This includes:
Creating passwords
License verification steps
Connecting to the network for various protocol establishments such as PCEP or BGP-LS
Note All of the servers must be configured with the same database and RabbitMQ passwords.
All server time must be synchronized by NTP using the following procedure:
- Install NTP.
yum -y install ntp
- Specify the preferred NTP server in ntp.conf (see the example after this procedure).
- Verify the configuration.
ntpq -p
Note All cluster nodes must have the same time zone and system time settings. This is important to prevent inconsistencies in the database storage of SNMP and LDP task collection delta values.
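The following is a minimal sketch of the relevant /etc/ntp.conf line, assuming a reachable time source named ntp.example.com (a hypothetical hostname; substitute your own NTP server):
server ntp.example.com prefer iburst
After restarting the ntpd service, ntpq -p should list the chosen server with an asterisk (*) in the first column once the node has synchronized.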
Run the net_setup.py utility to complete the required elements of the host and JunosVM configurations. Keep that configuration information available.
Note If you are using an OpenStack environment, you will have one JunosVM that corresponds to each NorthStar Controller VM.
Know the virtual IPv4 address you want to use for Java Planner client and web UI access to NorthStar Controller (required). This VIP address is configured for the router-facing network for single interface configurations, and for the user-facing network for dual interface configurations. This address is always associated with the active node, even if failover causes the active node to change.
A virtual IP (VIP) is required when setting up a NorthStar cluster. Ensure that all servers that will be in the cluster are part of the same subnet as the VIP.
Decide on the priority that each node will have for active node candidacy upon failover. The default value for all nodes is 0, the highest priority. If you want all nodes to have equal priority for becoming the active node, you can just accept the default value for all nodes. If you want to rank the nodes in terms of their active node candidacy, you can change the priority values accordingly—the lower the number, the higher the priority.
Set Up SSH Keys
Set up SSH keys between the selected node and each of the other nodes in the cluster, and each JunosVM.
- Obtain the public SSH key from one of the nodes. You will need the ssh-rsa string from the output:
[root@rw01-ns ~]# cat /root/.ssh/id_rsa.pub
- Copy the public SSH key from each node to each of the other nodes, running the commands from each machine.
From node 1:
[root@rw01-ns northstar_bundle_x.x.x]# ssh-copy-id root@node-2-ip
[root@rw01-ns northstar_bundle_x.x.x]# ssh-copy-id root@node-3-ip
From node 2:
[root@rw02-ns northstar_bundle_x.x.x]# ssh-copy-id root@node-1-ip
[root@rw02-ns northstar_bundle_x.x.x]# ssh-copy-id root@node-3-ip
From node 3:
[root@rw03-ns northstar_bundle_x.x.x]# ssh-copy-id root@node-1-ip
[root@rw03-ns northstar_bundle_x.x.x]# ssh-copy-id root@node-2-ip
- Copy the public SSH key from the selected node to each remote JunosVM (the JunosVM hosted on each of the other nodes). To do this, log in to each of the other nodes and connect to its JunosVM.
[root@rw02-ns ~]# ssh northstar@JunosVM-ip
[root@rw02-ns ~]# configure
[root@rw02-ns ~]# set system login user northstar authentication ssh-rsa replacement-string
[root@rw02-ns ~]# commit
[root@rw03-ns ~]# ssh northstar@JunosVM-ip
[root@rw03-ns ~]# configure
[root@rw03-ns ~]# set system login user northstar authentication ssh-rsa replacement-string
[root@rw03-ns ~]# commit
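As a quick check (not part of the original procedure), you can confirm that key-based login works by running a remote command from the selected node to each of the other nodes. If the keys were copied correctly, each command returns the remote hostname without prompting for a password:
[root@rw01-ns ~]# ssh root@node-2-ip hostname
[root@rw01-ns ~]# ssh root@node-3-ip hostname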
Access the HA Setup Main Menu
The /opt/northstar/utils/net_setup.py utility (the same utility you use to configure NorthStar Controller) includes an option for configuring high availability (HA) for a node cluster. Run the /opt/northstar/utils/net_setup.py utility on one of the servers in the cluster to set up the entire cluster.
- Select one of the nodes in the cluster on which to run the setup utility to configure all the nodes in the cluster.
- On the selected node, launch the NorthStar setup utility to display the NorthStar Controller Setup Main Menu.
[root@northstar]# /opt/northstar/utils/net_setup.py
Main Menu:
.............................................
A.) Host Setting
.............................................
B.) JunosVM Setting
.............................................
C.) Check Network Setting
.............................................
D.) Maintenance & Troubleshooting
.............................................
E.) HA Setting
.............................................
F.) Collect Trace/Log
.............................................
G.) Analytics Data Collector Setting (External standalone/cluster analytics server)
.............................................
H.) Setup SSH Key for external JunosVM setup
.............................................
I.) Internal Analytics Setting (HA)
.............................................
X.) Exit
.............................................
Please select a letter to execute.
- Type E and press Enter to display the HA Setup main menu.
Figure 1 shows the top portion of the HA Setup main menu in which the current configuration is listed. It includes the five supported interfaces for each node, the VIP addresses, and the ping interval and timeout values. In this figure, only the first of the nodes is included, but you would see the corresponding information for all three of the nodes in the cluster configuration template. HA functionality requires an odd number of nodes in a cluster, and a minimum of three.
Note If you have a cRPD installation, the JunosVM information is not displayed as it is not applicable.
Figure 1: HA Setup Main Menu, Top Portion
Note If you are configuring a cluster for the first time, the IP addresses are blank and other fields contain default values. If you are modifying an existing configuration, the current cluster configuration is displayed, and you have the opportunity to change the values.
Note If the servers are located in geodiverse locations, you can use Site Name to indicate which servers are in the same or different geographical locations.
Figure 2 shows the lower portion of the HA Setup main menu. To complete the configuration, you type the number or letter of an option and provide the requested information. After each option is complete, you are returned to the HA Setup main menu so you can select another option.
Figure 2: HA Setup Main Menu, Lower Portion
Note If you have a cRPD installation, options 3, 4, and 8 are not displayed as they are not applicable. The remaining options are not renumbered.
Configure the Three Default Nodes and Their Interfaces
The HA Setup main menu initially offers three nodes for configuration because a cluster must have a minimum of three nodes. You can add more nodes as needed.
For each node, the menu offers five interfaces. Configure as many of those as you need.
- Type 5 and press Enter to modify the first node.
- When prompted, enter the number of the node to be modified, the hostname, the site name, and the priority, pressing Enter between entries.
Note The NorthStar Controller uses root as a username to access other nodes.
The default priority is 0. You can just press Enter to accept the default or you can type a new value.
For each interface, enter the interface name, IPv4 address, and switchover (yes/no), pressing Enter between entries.
Note For each node, interface #1 is reserved for the cluster communication interface which is used to facilitate communication between nodes. For this interface, it is required that switchover be set to Yes, and you cannot change that parameter.
When finished, you are returned to the HA Setup main menu.
The following example configures Node #1 and two of its available five interfaces.
Please select a number to modify. [<CR>=return to main menu] 5
Node ID : 1
HA Setup:
..........................................................
Node #1
Hostname :
Site Name : site1
Priority : 0
Cluster Communication Interface : external0
Cluster Communication IP :
Interfaces
Interface #1
Name : external0
IPv4 :
Switchover : yes
Interface #2
Name : mgmt0
IPv4 :
Switchover : yes
Interface #3
Name :
IPv4 :
Switchover : yes
Interface #4
Name :
IPv4 :
Switchover : yes
Interface #5
Name :
IPv4 :
Switchover : yes
current node 1 Node hostname (without domain name) :
new node 1 Node hostname (without domain name) : node-1
current node 1 Site Name : site1
new node 1 Site Name : site1
current node 1 Node priority : 0
new node 1 Node priority : 10
current node 1 Node cluster communication interface : external0
new node 1 Node cluster communication interface : external0
current node 1 Node cluster communication IPv4 address :
new node 1 Node cluster communication IPv4 address : 10.25.153.6
current node 1 Node interface #2 name : mgmt0
new node 1 Node interface #2 name : external1
current node 1 Node interface #2 IPv4 address :
new node 1 Node interface #2 IPv4 address : 10.100.1.1
current node 1 Node interface #2 switchover (yes/no) : yes
new node 1 Node interface #2 switchover (yes/no) :
current node 1 Node interface #3 name :
new node 1 Node interface #3 name :
current node 1 Node interface #3 IPv4 address :
new node 1 Node interface #3 IPv4 address :
current node 1 Node interface #3 switchover (yes/no) : yes
new node 1 Node interface #3 switchover (yes/no) :
current node 1 Node interface #4 name :
new node 1 Node interface #4 name :
current node 1 Node interface #4 IPv4 address :
new node 1 Node interface #4 IPv4 address :
current node 1 Node interface #4 switchover (yes/no) : yes
new node 1 Node interface #4 switchover (yes/no) :
current node 1 Node interface #5 name :
new node 1 Node interface #5 name :
current node 1 Node interface #5 IPv4 address :
new node 1 Node interface #5 IPv4 address :
current node 1 Node interface #5 switchover (yes/no) : yes
new node 1 Node interface #5 switchover (yes/no) :
- Type 5 and press Enter again to repeat the data entry for each of the other two nodes.
Configure the JunosVM for Each Node
To complete the node-specific setup, configure the JunosVM for each node in the cluster.
- From the HA Setup main menu, type 8 and press Enter to modify the JunosVM for a node.
- When prompted, enter the node number, the JunosVM hostname, and the JunosVM IPv4 address, pressing Enter between entries.
Figure 3 shows these JunosVM setup fields.
Figure 3: Node 1 JunosVM Setup Fields
When finished, you are returned to the HA Setup main menu.
- Type 8 and press Enter again to repeat the JunosVM data entry for each of the other two nodes.
(Optional) Add More Nodes to the Cluster
If you want to add additional nodes, type 1 and press Enter. Then configure the node and the node’s JunosVM using the same procedures previously described. Repeat the procedures for each additional node.
HA functionality requires an odd number of nodes and a minimum of three nodes per cluster.
The following example shows adding an additional node, node #4, with two interfaces.
Please select a number to modify. [<CR>=return to main menu]: 1
New Node ID : 4
current node 4 Node hostname (without domain name) :
new node 4 Node hostname (without domain name) : node-4
current node 4 Site Name : site1
new node 4 Site Name : site1
current node 4 Node priority : 0
new node 4 Node priority : 40
current node 4 Node cluster communication interface : external0
new node 4 Node cluster communication interface : external0
current node 4 Node cluster communication IPv4 address :
new node 4 Node cluster communication IPv4 address : 10.25.153.12
current node 4 Node interface #2 name : mgmt0
new node 4 Node interface #2 name : external1
current node 4 Node interface #2 IPv4 address :
new node 4 Node interface #2 IPv4 address : 10.100.1.7
current node 4 Node interface #2 switchover (yes/no) : yes
new node 4 Node interface #2 switchover (yes/no) :
current node 4 Node interface #3 name :
new node 4 Node interface #3 name :
current node 4 Node interface #3 IPv4 address :
new node 4 Node interface #3 IPv4 address :
current node 4 Node interface #3 switchover (yes/no) : yes
new node 4 Node interface #3 switchover (yes/no) :
current node 4 Node interface #4 name :
new node 4 Node interface #4 name :
current node 4 Node interface #4 IPv4 address :
new node 4 Node interface #4 IPv4 address :
current node 4 Node interface #4 switchover (yes/no) : yes
new node 4 Node interface #4 switchover (yes/no) :
current node 4 Node interface #5 name :
new node 4 Node interface #5 name :
current node 4 Node interface #5 IPv4 address :
new node 4 Node interface #5 IPv4 address :
current node 4 Node interface #5 switchover (yes/no) : yes
new node 4 Node interface #5 switchover (yes/no) :
The following example shows configuring the JunosVM that corresponds to node #4.
Please select a number to modify. [<CR>=return to main menu] 3
New JunosVM ID : 4
current junosvm 4 JunOSVM hostname :
new junosvm 4 JunOSVM hostname : junosvm-4
current junosvm 4 JunOSVM IPv4 address :
new junosvm 4 JunOSVM IPv4 address : 10.25.153.13
Configure Cluster Settings
The remaining settings apply to the cluster as a whole.
- From the HA Setup main menu, type 9 and press Enter to configure the VIP address for the external (router-facing) network. This is the virtual IP address that is always associated with the active node, even if failover causes the active node to change. The VIP is required, even if you are configuring a separate user-facing network interface. If you have upgraded from an earlier NorthStar release in which you did not have a VIP for external0, you must now configure it.
Note Make a note of this IP address. If failover occurs while you are working in the NorthStar Planner UI, the client is disconnected and you must re-launch it using this VIP address. For the NorthStar Controller web UI, you would be disconnected and would need to log back in.
The following example shows configuring the VIP address for the external network.
Please select a number to modify. [<CR>=return to main menu] 9
current VIP interface #1 IPv4 address :
new VIP interface #1 IPv4 address : 10.25.153.100
current VIP interface #2 IPv4 address :
new VIP interface #2 IPv4 address : 10.100.1.1
current VIP interface #3 IPv4 address :
new VIP interface #3 IPv4 address :
current VIP interface #4 IPv4 address :
new VIP interface #4 IPv4 address :
current VIP interface #5 IPv4 address :
new VIP interface #5 IPv4 address :
- Type 9 and press Enter to configure the VIP address for the user-facing network for dual interface configurations. If you do not configure this IP address, the router-facing VIP address also functions as the user-facing VIP address.
- Type D and press Enter to configure the setup mode as cluster (local cluster).
- Type E and press Enter to configure the PCEP session. The default is physical_ip. If you are using the cluster VIP for your PCEP session, configure the PCEP session as vip.
Note All of your PCC sessions must use either physical IP or VIP (no mixing and matching), and that must also be reflected in the PCEP configuration on the router.
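For example, if you set the PCEP session to vip, the PCEP configuration on each PCC should point at the cluster VIP rather than at any node's physical address. The following is a hedged Junos sketch, assuming a PCE named northstar-pce and the external VIP 10.25.153.100 from the earlier example; exact statements depend on your Junos release and existing PCEP configuration:
set protocols pcep pce northstar-pce destination-ipv4-address 10.25.153.100
set protocols pcep pce northstar-pce destination-port 4189
set protocols pcep pce northstar-pce pce-type active stateful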
Test and Deploy the HA Configuration
You can test and deploy the HA configuration from within the HA Setup main menu.
- Type G to test the HA connectivity for all the interfaces. You must verify that all interfaces are up before you deploy the HA cluster.
- Type H and press Enter to launch a script that connects to and deploys all the servers and all the JunosVMs in the cluster. The process takes approximately 15 minutes, after which the display is returned to the HA Setup menu. You can view the log of the progress at /opt/northstar/logs/net_setup.log.
Note If the execution has not completed within 30 minutes, a process might be stuck. You can sometimes see this by examining the log at /opt/northstar/logs/net_setup.log. You can press Ctrl-C to cancel the script, and then restart it.
- To check whether the election process has completed, examine the processes running on each node by logging in to the node and executing the supervisorctl status command.
[root@node-1]# supervisorctl status
For the active node, you should see all processes listed as RUNNING as shown here.
Note The actual list of processes depends on the version of NorthStar and your deployment setup.
[root@node-1 ~]# supervisorctl status
bmp:bmpMonitor RUNNING pid 2957, uptime 0:58:02
collector:worker1 RUNNING pid 19921, uptime 0:01:42
collector:worker2 RUNNING pid 19923, uptime 0:01:42
collector:worker3 RUNNING pid 19922, uptime 0:01:42
collector:worker4 RUNNING pid 19924, uptime 0:01:42
collector_main:es_publisher RUNNING pid 19771, uptime 0:01:53
collector_main:task_scheduler RUNNING pid 19772, uptime 0:01:53
config:ns_config_monitor RUNNING pid 19129, uptime 0:03:19
docker:dockerd RUNNING pid 4368, uptime 0:57:34
epe:epeplanner RUNNING pid 9047, uptime 0:50:34
infra:cassandra RUNNING pid 2971, uptime 0:58:02
infra:ha_agent RUNNING pid 9009, uptime 0:50:45
infra:healthmonitor RUNNING pid 9172, uptime 0:49:40
infra:license_monitor RUNNING pid 2968, uptime 0:58:02
infra:prunedb RUNNING pid 19770, uptime 0:01:53
infra:rabbitmq RUNNING pid 7712, uptime 0:52:03
infra:redis_server RUNNING pid 2970, uptime 0:58:02
infra:zookeeper RUNNING pid 2965, uptime 0:58:02
junos:junosvm RUNNING pid 2956, uptime 0:58:02
listener1:listener1_00 RUNNING pid 9212, uptime 0:49:29
netconf:netconfd RUNNING pid 19768, uptime 0:01:53
northstar:configServer RUNNING pid 19767, uptime 0:01:53
northstar:mladapter RUNNING pid 19765, uptime 0:01:53
northstar:npat RUNNING pid 19766, uptime 0:01:53
northstar:pceserver RUNNING pid 19441, uptime 0:02:59
northstar:prpdclient RUNNING pid 19763, uptime 0:01:53
northstar:scheduler RUNNING pid 19764, uptime 0:01:53
northstar:toposerver RUNNING pid 19762, uptime 0:01:53
northstar_pcs:PCServer RUNNING pid 19487, uptime 0:02:49
northstar_pcs:PCViewer RUNNING pid 19486, uptime 0:02:49
web:app RUNNING pid 19273, uptime 0:03:18
web:proxy RUNNING pid 19275, uptime 0:03:18
For a standby node, processes beginning with “northstar”, “northstar_pcs”, and “netconf” should be listed as STOPPED. Also, if you have analytics installed, some of the processes beginning with “collector” are STOPPED. Other processes, including those needed to preserve connectivity, remain RUNNING. An example is shown here.
Note This is just an example; the actual list of processes depends on the version of NorthStar, your deployment setup, and the optional features you have installed.
[root@node-1 ~]# supervisorctl status
bmp:bmpMonitor RUNNING pid 8755, uptime 3:36:02
collector:worker1 RUNNING pid 31852, uptime 0:06:59
collector:worker2 RUNNING pid 31854, uptime 0:06:59
collector:worker3 RUNNING pid 31853, uptime 0:06:59
collector:worker4 RUNNING pid 31855, uptime 0:06:59
collector_main:es_publisher STOPPED Apr 07 04:08 PM
collector_main:task_scheduler STOPPED Apr 07 04:08 PM
config:ns_config_monitor STOPPED Apr 07 04:08 PM
docker:dockerd RUNNING pid 10187, uptime 3:35:35
epe:epeplanner RUNNING pid 15071, uptime 3:27:17
infra:cassandra RUNNING pid 8769, uptime 3:36:02
infra:ha_agent RUNNING pid 31401, uptime 0:08:31
infra:healthmonitor RUNNING pid 31784, uptime 0:07:14
infra:license_monitor RUNNING pid 8766, uptime 3:36:02
infra:prunedb STOPPED Apr 07 04:10 PM
infra:rabbitmq RUNNING pid 13819, uptime 3:28:47
infra:redis_server RUNNING pid 8768, uptime 3:36:02
infra:zookeeper RUNNING pid 8763, uptime 3:36:02
junos:junosvm RUNNING pid 8754, uptime 3:36:02
listener1:listener1_00 RUNNING pid 31838, uptime 0:07:03
netconf:netconfd STOPPED Apr 07 04:08 PM
northstar:configServer STOPPED Apr 07 04:08 PM
northstar:mladapter STOPPED Apr 07 04:08 PM
northstar:npat STOPPED Apr 07 04:08 PM
northstar:pceserver STOPPED Apr 07 04:08 PM
northstar:prpdclient STOPPED Apr 07 04:08 PM
northstar:scheduler STOPPED Apr 07 04:08 PM
northstar:toposerver STOPPED Apr 07 04:08 PM
northstar_pcs:PCServer STOPPED Apr 07 04:08 PM
northstar_pcs:PCViewer STOPPED Apr 07 04:08 PM
web:app STOPPED Apr 07 04:09 PM
web:proxy STOPPED Apr 07 04:09 PM
- Set the web UI admin password using either the web UI or net_setup.
For the web UI method, use the external IP address that was provided to you when you installed the NorthStar application. Type that address into the address bar of your browser (for example, https://10.0.1.29:8443). A window is displayed requesting the confirmation code in your license file (the characters after S-NS-SDN=), and the password you wish to use. See Figure 4.
Figure 4: Web UI Method for Setting the Web UI Password
For the net_setup method, select D from the net_setup Main Menu (Maintenance & Troubleshooting), and then 3 from the Maintenance & Troubleshooting menu (Change UI Admin Password).
Main Menu:
.............................................
A.) Host Setting
.............................................
B.) JunosVM Setting
.............................................
C.) Check Network Setting
.............................................
D.) Maintenance & Troubleshooting
.............................................
E.) HA Setting
.............................................
F.) Collect Trace/Log
.............................................
G.) Analytics Data Collector Setting (External standalone/cluster analytics server)
.............................................
H.) Setup SSH Key for external JunosVM setup
.............................................
I.) Internal Analytics Setting (HA)
.............................................
X.) Exit
.............................................
Please select a letter to execute. D
Maintenance & Troubleshooting:
..................................................
1.) Backup JunosVM Configuration
2.) Restore JunosVM Configuration
3.) Change UI Admin Password
4.) Change Database Password
5.) Change MQ Password
6.) Change Host Root Password
7.) Change JunosVM root and northstar User Password
8.) Initialize all credentials ( 3,4,5,6,7 included)
..................................................
Please select a number to modify. [<CR>=return to main menu]: 3
Type Y to confirm you wish to change the UI Admin password, and enter the new password when prompted.
Change UI Admin Password
Are you sure you want to change the UI Admin password? (Y/N) y
Please enter new UI Admin password :
Please confirm new UI Admin password :
Changing UI Admin password ...
UI Admin password has been changed successfully
- Once the web UI admin password has been set, return to the HA Setup menu (select E from the Main Menu). View cluster information and check the cluster status by typing K and pressing Enter. In addition to providing general cluster information, this option launches the ns_check_cluster.sh script. You can also run this script outside of the setup utility by executing the following commands:
[root@northstar]# cd /opt/northstar/utils/
[root@northstar utils]# ./ns_check_cluster.sh
Replace a Failed Node if Necessary
On the HA Setup menu, options I and J can be used when physically replacing a failed node. They allow you to replace a node without having to redeploy the entire cluster, which would wipe out all the data in the database.
While a node is being replaced in a three-node cluster, HA is not guaranteed.
- Replace the physical node in the network and install NorthStar Controller on the replacement node.
- Run the NorthStar setup utility to configure the replaced node with the necessary IP addresses. Be sure you duplicate the previous node setup, including:
IP address and hostname
Initialization of credentials
Licensing
Network connectivity
- Go to one of the existing cluster member nodes (preferably the same node that was used to configure the HA cluster initially). Going forward, we will refer to this node as the anchor node.
- Set up the SSH key from the anchor node to the replacement node and JunosVM.
Copy the public SSH key from the anchor node to the replacement node, from the replacement node to the other cluster nodes, and from the other cluster nodes to the replacement node.
Note Remember that in your initial HA setup, you had to copy the public SSH key from each node to each of the other nodes, from each machine.
Copy the public SSH key from the anchor node to the replacement node’s JunosVM. To do this, log in to the replacement node and connect to its JunosVM.
[root@node-1 ~]# ssh northstar@JunosVM-ip
[root@node-1 ~]# configure
[root@node-1 ~]# set system login user northstar authentication ssh-rsa replacement-string
[root@node-1 ~]# commit
- From the anchor node, remove the failed node from the Cassandra database. Run the command nodetool removenode host-id. To check the status, run the command nodetool status.
The following example shows removing the failed node with IP address 10.25.153.10.
[root@node-1 ~]# . /opt/northstar/northstar.env
[root@node-1 ~]# nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address       Load       Tokens  Owns  Host ID                               Rack
UN  10.25.153.6   5.06 MB    256     ?     507e572c-0320-4556-85ec-443eb160e9ba  rack1
UN  10.25.153.8   651.94 KB  256     ?     cd384965-cba3-438c-bf79-3eae86b96e62  rack1
DN  10.25.153.10  4.5 MB     256     ?     b985bc84-e55d-401f-83e8-5befde50fe96  rack1
[root@node-1 ~]# nodetool removenode b985bc84-e55d-401f-83e8-5befde50fe96
[root@node-1 ~]# nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address       Load       Tokens  Owns  Host ID                               Rack
UN  10.25.153.6   5.06 MB    256     ?     507e572c-0320-4556-85ec-443eb160e9ba  rack1
UN  10.25.153.8   639.61 KB  256     ?     cd384965-cba3-438c-bf79-3eae86b96e62  rack1
- From the HA Setup menu on the anchor node, select option I to copy the HA configuration to the replacement node.
- From the HA Setup menu on the anchor node, select option J to deploy the HA configuration on the replacement node only.
Configure Fast Failure Detection Between JunosVM and PCC
You can use Bidirectional Forwarding Detection (BFD) in a NorthStar deployment to provide faster failure detection than BGP or IGP keepalive and hold timers. The BFD feature is supported on the PCC and the JunosVM.
To utilize this feature, configure bfd-liveness-detection minimum-interval milliseconds on the PCC, and mirror this configuration on the JunosVM. We recommend a value of 1000 ms or higher for each cluster node. Ultimately, the appropriate BFD value depends on your requirements and environment.
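The following is a minimal Junos sketch, assuming the PCC and the JunosVM peer in a BGP group named northstar (the group name is illustrative; apply BFD under whichever BGP or IGP configuration carries the session in your topology), with the same minimum-interval configured on both ends:
set protocols bgp group northstar bfd-liveness-detection minimum-interval 1000
With a 1000 ms minimum interval and the default detection multiplier of 3, a failed peer is declared down after roughly three seconds, well before typical BGP or IGP hold timers expire.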
Related Documentation
High Availability Overview (NorthStar Controller User Guide)