Supported SBR Carrier SSR Cluster Configurations
A Starter Kit licenses you for two management node processes and two data node processes. SBR node processes are not included with the Starter Kit and must be purchased separately. You can collocate one or both SBR node processes with the management node processes, thereby creating one or two SBR/management combination nodes. You can add a third management node process to the cluster using the Management Node Expansion Kit. You always add data nodes in pairs. One Data Node Expansion Kit includes two data nodes.
Each node type (s), (sm), (m), (d) requires its own machine.
You can have no more than three management node processes in a cluster.
The minimum requirements for a cluster are two SBR node processes (full licenses) and a Starter Kit license. You can deploy these licenses in a configuration of four, five, or six machines, depending on how many SBR node processes are collocated with management node processes on a single machine. Each SBR node process is licensed separately; you cannot share SBR licenses. Starter Kit licenses are shared by all processes in the cluster (through the configure script). Table 7 lists the possible configurations for the minimum requirements of two SBR node licenses and a Starter Kit license.
If all add-on products are added to the Starter Kit cluster, the maximum cluster size is four data nodes, three management nodes, and up to 20 SBR Carrier nodes (front ends), as shown in Table 7.
Setting up an unsupported configuration can put data and equipment at risk and is not supported by Juniper Networks.
Also, note the latency limitation in Table 6. We do not support cluster configurations with latency between nodes that exceeds 20 ms, as can occur if servers are set up to spread a cluster across widely separated locations.
Table 7: Supported Cluster Configurations
(One machine is required for each node of each type.)

| Licenses | (S) | (SM) | (M) | (D) |
|---|---|---|---|---|
| Configuration 1: Two SBR node processes (full licenses) and one Starter Kit (minimum configuration) | – | Two | – | Two |
| Configuration 2: Two SBR node processes (full licenses) and one Starter Kit | One | One | One | Two |
| Configuration 3: Two SBR node processes (full licenses) and one Starter Kit | Two | None | Two | Two |
| Configuration 1, 2, or 3 with one Data Node Expansion Kit | – | – | – | Up to four |
| Configuration 1, 2, or 3 with one Management Node Expansion Kit | – | – | Up to three | – |
| Maximum configuration: Any of the previously listed configurations plus additional SBR nodes (front ends) | Up to a total of 20 | – | Up to three | Up to four |
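The machine counts for Configurations 1, 2, and 3 follow directly from how many SBR node processes are collocated with management node processes. The following sketch is illustrative only; the function and its parameters are assumptions used to restate the arithmetic, not part of any Juniper Networks software.

```python
# Illustrative sketch: derive the machine count for the minimum cluster
# (2 SBR node processes, 2 management node processes, 2 data node processes)
# from the number of SBR/management collocations. Hypothetical helper, not
# part of any Juniper Networks tool.

def machine_count(sbr=2, mgmt=2, data=2, collocated=0):
    """Each collocated SBR/management pair shares one (SM) machine."""
    if collocated > min(sbr, mgmt):
        raise ValueError("cannot collocate more pairs than available processes")
    standalone_sbr = sbr - collocated        # (S) machines
    standalone_mgmt = mgmt - collocated      # (M) machines
    return standalone_sbr + standalone_mgmt + collocated + data

# Configurations 1, 2, and 3 from Table 7:
print(machine_count(collocated=2))  # 4 machines: two (SM), two (D)
print(machine_count(collocated=1))  # 5 machines: one (S), one (SM), one (M), two (D)
print(machine_count(collocated=0))  # 6 machines: two (S), two (M), two (D)
```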
Failover Overview
To continue functioning without a service interruption after a component failure, a cluster requires at least 50 percent of its data and management nodes to be functional. If more than 50 percent of the data nodes fail, expect a service interruption, although the surviving nodes continue to operate.
Because SBR Carrier nodes function as front ends to the data cluster, they are not involved in any failover operations performed by the data cluster. However, as an administrator, you need to ensure that the front end environment is configured so that it can survive the loss of SBR Carrier nodes. (We recommend using an emergency IP address pool and running with a RADIUS-aware load balancer.)
A data cluster prepares for failover automatically when the cluster starts. During startup, two events occur:
One of the data nodes (usually the node with the lowest node ID) becomes the master of the node group. The master node stores the authoritative copy of the database.
One data node or management node is elected arbitrator. The arbitrator is responsible for conducting elections among the survivors to determine roles in case of node failures.
In a cluster, each management node and data node is allocated a vote that is used during this startup election and during failover operations. One management node is selected as the initial arbitrator of failover problems and of elections that result from them.
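The startup behavior can be summarized in the following sketch. It is an illustrative model; the node IDs and names are assumptions, and the selection rules simply restate the description above rather than SSR internals.

```python
# Illustrative model of data-cluster startup. The node IDs and names are
# assumptions; the selection rules follow the description above, not SSR
# internals.

data_nodes = {10: "data-A", 11: "data-B"}   # node_id -> node name
mgmt_nodes = {1: "mgmt-1", 2: "mgmt-2"}

# Each data node and management node is allocated one vote.
votes = {node_id: 1 for node_id in list(data_nodes) + list(mgmt_nodes)}

# Usually the data node with the lowest node ID becomes master of the node
# group and holds the authoritative copy of the database.
master = data_nodes[min(data_nodes)]

# One management node (or data node) is selected as the initial arbitrator,
# which conducts elections among the survivors after a failure.
arbitrator = mgmt_nodes[min(mgmt_nodes)]

print(f"master={master}, arbitrator={arbitrator}, total votes={sum(votes.values())}")
```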
Within the cluster, data and management nodes monitor each other to detect communications loss and heartbeat failure. When either type of failure is detected, as long as nodes with more than 50 percent of the votes are operating, there is instantaneous failover and no service interruption. If exactly 50 percent of the nodes and votes are lost, and a data node is among the lost nodes, the cluster determines which half of the database is to remain in operation. The half with the arbitrator (which usually includes the master node) stays up; the other half shuts down to prevent each side from updating information independently, and then restarts and attempts to rejoin the active half of the cluster.
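This voting rule can be sketched as follows, assuming one vote per data node and management node. The function name and arguments are hypothetical and are shown only to restate the rule.

```python
# Hypothetical restatement of the majority-vote rule; not SSR code.

def cluster_survives(surviving_votes, total_votes, survivors_hold_arbitrator):
    """Return True if the surviving group of nodes keeps running."""
    if surviving_votes * 2 > total_votes:
        # More than 50 percent of the votes remain: instantaneous failover,
        # no service interruption.
        return True
    if surviving_votes * 2 == total_votes:
        # Exactly 50 percent: only the half holding the arbitrator stays up;
        # the other half shuts down, restarts, and rejoins.
        return survivors_hold_arbitrator
    # Fewer than 50 percent of the votes remain: this group shuts down.
    return False

# Starter Kit cluster: 2 data nodes + 2 management nodes = 4 votes.
print(cluster_survives(3, 4, True))   # one node lost -> cluster continues
print(cluster_survives(2, 4, True))   # even split, arbitrator on this side -> continues
print(cluster_survives(2, 4, False))  # even split, no arbitrator -> shuts down
```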
When a failed data node (or nodes) returns to service, the working nodes resynchronize the current data with the restored nodes so all data nodes are up to date. How quickly this takes place depends on the current load on the cluster, the length of time the nodes were offline, and other factors.
Failover Examples
The following examples are based on the basic Starter Kit deployment setup with the recommended redundant network as shown in Figure 5. The cluster is set up in a single data center on a fully switched, redundant, layer 2 network. Each of the nodes is connected to two switches using the Solaris IP-multipathing feature for interface failover. The switches have a back-to-back connection.

Possible Failure Scenarios
With these basic configurations, a high level of redundancy is supported. As long as at least one data node is available to at least one SBR Carrier node, the cluster remains viable and functional.
If either SBR Carrier Server 1 or SBR Carrier Server 2, each of which also runs one of the cluster’s management nodes (s1 and m1, or s2 and m2), goes down, the effect on the facility and cluster is:
No AAA service impact.
NADs (depending on the failover mechanism in the device) switch to their secondary targets—the remaining SBR Carrier Server. Recovery of the NAD when the SBR Carrier Server returns to service depends on NAD implementation.
If either data node A or B goes down, the effect is:
No AAA service impact; both SBR Carrier nodes continue operation using the surviving data node.
The management nodes and surviving data node detect that one data node has gone down, but no action is required because failover is automatic.
When the data node returns to service, it synchronizes its NDB data with the surviving node and resumes operation.
If both management nodes (m1 and m2) go down, the effect is:
No AAA service impact because all of the s and d nodes are still available. The data nodes continue to update themselves.
If both data nodes go down, the effect varies by node type:
The effect on the management nodes is minimal. They detect that the data nodes are offline, but can only monitor them.
The effect on the SBR Carrier nodes depends on the type of request:
Authentication and accounting for users that do not require shared resources, such as the IP address pool or concurrency, continue uninterrupted. If the nodes have local, non-shared emergency IP address pools, the front ends can continue to process some requests.
Users that require shared resources are rejected.
The SBR Carrier nodes continue to operate this way until the data cluster comes back online, at which point the cluster automatically resumes normal AAA operation using the data cluster. A minimal sketch of this degraded-mode handling follows this list.
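The following sketch restates the degraded-mode behavior described above. The request fields, the emergency pool, and the decision function are illustrative assumptions, not actual SBR Carrier logic.

```python
# Illustrative sketch of front-end behavior while the data cluster is down.
# The request fields, pool, and return values are assumptions, not SBR
# Carrier code.

def handle_request(request, data_cluster_up, emergency_pool):
    needs_shared = request.get("needs_ip_pool") or request.get("needs_concurrency")
    if data_cluster_up or not needs_shared:
        return "accept"                                   # normal AAA processing
    if request.get("needs_ip_pool") and emergency_pool:
        # A local, non-shared emergency pool lets the front end keep
        # assigning addresses without the data cluster.
        return "accept (emergency IP %s)" % emergency_pool.pop()
    return "reject"                                       # shared resource unavailable

emergency_pool = ["192.0.2.10", "192.0.2.11"]             # documentation-range addresses
print(handle_request({"needs_ip_pool": True}, False, emergency_pool))
print(handle_request({"needs_concurrency": True}, False, emergency_pool))
print(handle_request({}, False, emergency_pool))
```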
If one half of the cluster (SBR Carrier Server 1, management node 1, and data node A or SBR Carrier Server 2, management node 2, and data node B) goes down, the effect is:
No AAA service impact because an SBR Carrier node, a management node, and a data node are all still in service. NADs using the failed SBR Carrier Server fail over to the surviving SBR Carrier Server.
When the failed data node returns to service, it synchronizes and updates its NDB data with the surviving node and resumes operation.
When the failed SBR Carrier Server returns to service, whether the NADs assigned to use it as a primary resource return to it depends on the NAD implementation.
Distributed Cluster Failure and Recovery
You can divide a cluster and separate two equal halves between two data centers. In this case, the interconnection is made by dedicated communications links (shown as bold lines in Figure 6 and Figure 7) that may be either:
A switched layer 2 network, just as the single site cluster is set up.
A routed layer 3 network that uses a routing table with backup routes to route over multiple links between the data centers.
However, this creates a configuration that is vulnerable to a catastrophic failure that severs the two halves of a dispersed cluster. We recommend adding a third management node at a location that has a separate alternative communication route to each half. A third management node:
Eliminates the possibility of the cluster being evenly split by a communications failure.
Creates an odd number of votes for elections, which greatly reduces the need for arbitration.
With a third management node in place, failover in the dispersed cluster is well managed because one side of the cluster does not have to determine what role to assume. Recovery is likely to be quicker when the data nodes are reunited because each node’s status is more likely to have been monitored by at least one management node that is in communication with each segment.
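A simple way to see the benefit of the tertiary management node is to count votes on each side after the inter-site links are severed. The following calculation is illustrative and assumes one vote per data node and management node, consistent with the failover overview; the node names are assumptions.

```python
# Illustrative vote count for a dispersed cluster whose inter-site links are
# severed. One vote per data node and per management node, as in the
# failover overview; node names are assumptions.

site_a = {"data-A": 1, "mgmt-1": 1}
site_b = {"data-B": 1, "mgmt-2": 1}
third_site = {"mgmt-3": 1}          # tertiary management node at a third location

def has_majority(side_votes, total_votes):
    return sum(side_votes.values()) * 2 > total_votes

# Without the third management node: 2 votes vs. 2 votes. Neither side has a
# majority, so arbitration (and a possible outage) is required.
total = sum(site_a.values()) + sum(site_b.values())
print(has_majority(site_a, total), has_majority(site_b, total))   # False False

# With the third management node still reachable from site A: 3 votes vs. 2.
# Site A has a clear majority and keeps running without arbitration.
total = total + sum(third_site.values())
print(has_majority({**site_a, **third_site}, total), has_majority(site_b, total))  # True False
```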

Without a third management node, the configuration shown in Figure 7 is vulnerable to data loss if both communication links are severed or if the nodes in the master half of the cluster all go offline simultaneously.

If either of those calamities occurs, exactly half of the cluster’s nodes survive on a side. If the master nodes are operating on one or both sides, the cluster continues to function. But the secondary side cannot determine whether the master side is really no longer available, because it has only two votes. It can take 10 to 15 minutes for the secondary side of the cluster to automatically restart, promote itself to master status, and resume cluster operations.
However, the SBR Carrier nodes that use that portion of the cluster do not automatically reconnect, and they cannot communicate with the other half. If they were to reconnect to the secondary side, modifications made to the database would create a divergence between the two copies of the database (although the SBR Carrier nodes continue to process requests that do not require the database). The longer the cluster is split, the greater the divergence, and the longer it takes to resolve when recovery takes place.
To eliminate these problems, we recommend a proven alternative: adding a third management node in a third location that can communicate with each half of the dispersed cluster. Without the tertiary management node, there is a possibility of downtime in a dispersed cluster that suffers a catastrophic failure.
If you cannot add a third management node, we recommend that you configure the secondary side of the cluster not to restart automatically, but to go out of service when it loses contact with the master-side nodes. Then you can determine the best course of action: keep the cluster offline, or promote the secondary side of the cluster, relink the SBR Carrier nodes, and plan for reconciling the divergence as part of the recovery procedure.
When the cluster is reunited and goes into recovery mode, the master and slave data nodes attempt to reconcile the divergence that occurred during separation. The moment they come in contact, transitory failures appear on the SBR Carrier nodes because the cluster configuration has changed; any transactions that are pending at that moment are aborted. The SBR Carrier nodes retry those transactions because they are classified as temporary failures; in most situations they are accepted on the first retry.
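The retry behavior can be sketched as follows. The exception class, function names, and retry limit are illustrative assumptions rather than SBR Carrier implementation details.

```python
# Illustrative retry loop for transactions aborted while the cluster
# configuration changes; the exception, names, and limits are assumptions.

import random

class TemporaryClusterError(Exception):
    """Stand-in for the temporary failure reported during reconfiguration."""

def submit(transaction):
    # Simulate a transaction that occasionally hits a temporary failure.
    if random.random() < 0.3:
        raise TemporaryClusterError("cluster configuration changed")
    return "committed"

def submit_with_retry(transaction, max_retries=3):
    for _ in range(max_retries + 1):
        try:
            return submit(transaction)
        except TemporaryClusterError:
            # Temporary failures are retried; in most situations the
            # transaction is accepted on the first retry.
            continue
    return "failed"

print(submit_with_retry({"type": "accounting-start"}))
```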