NDB Cluster Timeout Dependencies

This section describes timeout dependencies among the various SSR nodes. A certain ratio must exist between the failure conditions at each level of the stack: between the NAS clients and the RADIUS front-end applications, between the RADIUS S nodes and the D nodes, and between the ndb and dbapi nodes (M nodes and D nodes). These dependencies are related to timeout values associated with the network and with the NDB itself.

The network between the S nodes and the D nodes has the following timeout dependencies.

  • If you are using IPMP, the IPMP probe value must be lower than twice the heartbeat timeout appropriate for the connection. Widely divergent values may impact performance in a failure case, leading to an unexpected outage.

    Note

    The heartbeat timeouts for the S or M nodes and the D nodes are controlled by the /opt/JNPRhadm/config.ini file on the M nodes. The S/M node timeout is set by HeartBeatIntervalDbApi and is 1500 milliseconds by default. The inter-D node timeout is set by HeartBeatIntervalDbDb and is 200 milliseconds by default. (See the heartbeat sketch following this list.)

  • Heartbeats are implemented among the D nodes so that failures are detected more quickly than the underlying TCP failure mechanism allows. Initial fault detection occurs after an interval of 4 x HeartBeatInterval. After initial detection, the D nodes attempt to repartition and form a valid cluster. This operation can take from several to many seconds, depending on the type and mode of failure. Hard failures of a single D node and hard loss of networking are quickest to detect. Complete cluster splits and serious network faults take longer to detect and compensate for.

    System overload affects fault-recovery performance. Many outstanding transactions take longer to roll back than a few outstanding transactions.

  • During an extended loss of service due to a significant failure, such as loss of connectivity between the two halves of a cluster, SBR Carrier might need to reconnect to the new cluster to continue processing. Reconnection behavior is governed by timers set by the [Ndb] values DelayBetweenConnectRetriesSec and ReconnectRetries in the dbclusterndb.gen file. Setting these values higher than the defaults can make the system more resilient, at the expense of a period of dropped RADIUS traffic. Setting the values of TimeoutForFirstAliveSec and TimeoutAfterFirstAliveSec lower may also increase resiliency. (See the [Ndb] reconnection sketch following this list.)

  • Some NDB operations are designed to retry over the network to avoid lock contention. In cases where the underlying network is prone to latency or dropped packets, increasing the values of Retries and DelayBetweenRetriesMillisec in the [Database] section can improve performance and decrease delays. (See the [Database] retry sketch following this list.)

  • In cases where the underlying network is prone to short or long periods of latency, faults, or other unexpected conditions, setting the value of HeartBeatInterval higher, and adjusting all the proportionally related values accordingly, can make the system more resilient. The trade-off is fast detection of serious failures versus tolerance of temporary processing delays caused by minor faults that are otherwise survivable.

  • NDB cluster failures caused by an extended one-way traffic failure on the inter-D node network or the S/M-to-D node network may require a node to be restarted automatically. A correct network design should not permit this failure to happen; IPMP probes with the correct values, for instance, cause traffic to fail over to a working link. The HeartBeatOrder parameter addresses temporary instances of this type of failure (see the HeartBeatOrder sketch following this list). See the SBR Carrier Reference Guide for more information about the HeartBeatOrder parameter.

  • Certain failure conditions may require you to perform a manual restart. These failure conditions are usually associated with serious, extended, and pathological network dysfunctions.

  • The default settings of CacheLowWater, CacheHighWater, and CacheChunkSize may cause badly degraded performance. The defaults cannot be made higher, because one S node can pre-cache all the addresses in a small pool if CacheLowWater is set higher than the number of addresses in the pool. Ideally, the values of CacheLowWater and CacheChunkSize are matched to the transaction rate of new address allocations for the installation, so that it is impossible to run out of addresses before the threads can refill the cache; you can use per-pool settings to set any small pools much lower than the default.

    Set CacheThreadVerbose=1 and inspect the logs for Emergency allocations; these indicate that CacheLowWater and CacheChunkSize are too low and that performance is degraded. Another indicator of degraded performance is low CPU utilization on the front-end applications combined with high CPU utilization on the NDB nodes. (See the cache-tuning sketch following this list.)

  • The ConnectCheckIntervalDelay parameter comes into play when a heartbeat timeout occurs. At that point, the node begins a connectivity check, pinging all nodes three times at the configured interval. The responses indicate whether the other nodes are trustworthy (that is, they respond 100 percent of the time), untrustworthy, or down (no response). Only messages from trustworthy nodes are considered during the regular election cycle. This feature helps protect against intermittent latency spikes. (See the ConnectCheckIntervalDelay sketch following this list.)
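
The following heartbeat sketch shows how the settings described in the note above might appear in the /opt/JNPRhadm/config.ini file on an M node. Only the two parameter names, their documented defaults, and the 4 x HeartBeatInterval detection rule come from this section; the [ndbd default] section heading and the overall layout are assumptions based on standard NDB config.ini conventions.

    [ndbd default]
    # Heartbeat between the S/M nodes and the D nodes
    # (documented default: 1500 milliseconds).
    HeartBeatIntervalDbApi=1500
    # Heartbeat among the D nodes (documented default: 200 milliseconds).
    # A fault is first suspected after 4 x HeartBeatInterval,
    # that is, 4 x 200 ms = 800 ms at this setting.
    HeartBeatIntervalDbDb=200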
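
The reconnection timers mentioned above are set in the [Ndb] section of the dbclusterndb.gen file. A minimal sketch follows; the parameter names come from this section, but the values are placeholders rather than recommendations, and the comment style is illustrative.

    [Ndb]
    ; Raising these two values makes reconnection more persistent after a
    ; major failure, at the cost of a longer period of dropped RADIUS traffic.
    DelayBetweenConnectRetriesSec=5
    ReconnectRetries=10
    ; Lowering these two values may also increase resiliency.
    TimeoutForFirstAliveSec=5
    TimeoutAfterFirstAliveSec=5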
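
Similarly, the retry behavior for NDB operations is controlled in the [Database] section of dbclusterndb.gen. In the sketch below, only the section and parameter names come from this section; the values are placeholders.

    [Database]
    ; Increase both values on networks prone to latency or packet loss.
    Retries=6
    DelayBetweenRetriesMillisec=50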
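
The HeartBeatOrder sketch below shows one way the parameter might be laid out in config.ini. In standard NDB configurations, HeartbeatOrder is set per data node and controls the order in which the D nodes send heartbeats to one another; the [ndbd] sections, NodeId values, and HeartBeatOrder values shown here are assumptions, so consult the SBR Carrier Reference Guide for the supported values and placement.

    # Hypothetical per-D-node sections; values are placeholders.
    [ndbd]
    NodeId=10
    HeartBeatOrder=10

    [ndbd]
    NodeId=11
    HeartBeatOrder=20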
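
For the address-cache tuning described above, a cache-tuning sketch follows. The parameter names and the CacheThreadVerbose=1 diagnostic come from this section; the values and the placement of these settings within dbclusterndb.gen are assumptions, and small pools are assumed to be handled through per-pool settings as noted above.

    ; Address-cache tuning (placeholder values; containing section omitted).
    ; Log Emergency allocations so that too-low settings can be spotted.
    CacheThreadVerbose=1
    ; Keep CacheLowWater below the size of the smallest pool, or override
    ; small pools with per-pool settings.
    CacheLowWater=100
    CacheHighWater=200
    CacheChunkSize=50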
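
Finally, a sketch of the ConnectCheckIntervalDelay setting described in the last item. The parameter name comes from this section; the section placement in config.ini, the units, and the value shown are assumptions.

    [ndbd default]
    # Interval used for the connectivity check that follows a heartbeat
    # timeout (placeholder value, assumed to be in milliseconds).
    ConnectCheckIntervalDelay=1500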