
Monitoring Nodes in a Chassis Cluster

 

To monitor the cluster, you first need to discover its redundancy groups. When you initialize a device in chassis cluster mode, the system creates a redundancy group, referred to in this topic as redundancy group 0. Redundancy group 0 manages the primacy and failover between the Routing Engines on each node of the cluster. As with all redundancy groups, redundancy group 0 can be primary on only one node at a time. The node on which redundancy group 0 is primary determines which Routing Engine is active in the cluster; a node is the primary node of the cluster if its Routing Engine is the active one.

You can configure one or more additional redundancy groups, numbered 1 through 128 and referred to in this section as redundancy group x. The maximum number of redundancy groups is the number of redundant Ethernet interfaces that you configure plus one. Each redundancy group x acts as an independent unit of failover and is primary on only one node at a time.

There are no MIBs available to retrieve this information.
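As an illustration, a minimal redundancy-group configuration might look like the following sketch (the group numbers and priority values here are assumptions for the example, not values from a particular device):

```
set chassis cluster reth-count 2
set chassis cluster redundancy-group 0 node 0 priority 200
set chassis cluster redundancy-group 0 node 1 priority 100
set chassis cluster redundancy-group 1 node 0 priority 200
set chassis cluster redundancy-group 1 node 1 priority 100
```

Within each redundancy group, the node configured with the higher priority is preferred as primary for that group.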

Using the Junos OS XML Management Protocol or NETCONF XML Management Protocol

Use the get-configuration remote procedure call (RPC) to retrieve the redundancy configuration and determine which redundancy groups are present on the device.

XML RPC for Configuration Retrieval
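One way to scope the request to the chassis cluster hierarchy is a filtered get-configuration call, sketched below (the inline filter element is an assumption; an unfiltered get-configuration also works, with the client parsing out the cluster stanzas):

```xml
<rpc>
  <get-configuration>
    <configuration>
      <chassis>
        <cluster/>
      </chassis>
    </configuration>
  </get-configuration>
</rpc>
```

The reply contains the redundancy-group stanzas configured under the [edit chassis cluster] hierarchy.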

Response:

Chassis Cluster Redundant Ethernet Interfaces

A redundant Ethernet interface is a pseudointerface that includes, at minimum, one physical interface from each node of the cluster. In configuration commands, a redundant Ethernet interface is referred to as a reth. The following sample output shows two redundancy groups present and configured.
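As a configuration sketch, a reth is built by assigning one child interface per node as its redundant parent and binding the reth to a redundancy group (the interface names and address below are assumptions for illustration):

```
set chassis cluster reth-count 1
set interfaces ge-5/1/1 gigether-options redundant-parent reth0
set interfaces ge-11/1/1 gigether-options redundant-parent reth0
set interfaces reth0 redundant-ether-options redundancy-group 1
set interfaces reth0 unit 0 family inet address 198.51.100.1/24
```

Traffic uses whichever child interface sits on the node where the bound redundancy group is currently primary.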

Using the Junos OS XML Management Protocol or NETCONF XML Management Protocol

  • Use the get-chassis-cluster-interfaces remote procedure call (RPC) to obtain the reth interface details. The following sample output shows four reth interfaces configured:

    XML RPC for Chassis Cluster Interfaces

    user@host> show chassis cluster interfaces | display xml
    user@host> show chassis cluster interfaces
  • Use the get-interface-information remote procedure call (RPC) to show reth interface details and to identify the reth interfaces on the device. This RPC also shows which Gigabit Ethernet or Fast Ethernet interfaces belong to which reth interface as shown in the following sample output:

    XML RPC for Interface Information

    In the sample output, the ae-bundle-name tag identifies the reth interface to which each child interface belongs.
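    For reference, the relevant fragment of a get-interface-information reply might resemble the following sketch (the element nesting shown is typical of Junos XML interface output and is an approximation, not verbatim device output):

    ```xml
    <logical-interface>
      <name>ge-5/1/1.0</name>
      <address-family>
        <address-family-name>aenet</address-family-name>
        <ae-bundle-name>reth0.0</ae-bundle-name>
      </address-family>
    </logical-interface>
    ```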

Using SNMP

  • The ifTable MIB table reports all the reth interfaces.

  • Use the ifStackStatus MIB table to map the reth interface to the underlying interfaces on the primary and secondary nodes. The reth interface is the higher layer, and the individual interfaces from both nodes show up as lower-layer indexes.

    Sample SNMP Data for the Reth Interface Details

    In the following sample, ge-5/1/1 and ge-11/1/1 belong to reth0:

    {primary:node0}
    user@host> show interfaces terse | grep reth0

    Find the indexes of all interfaces in the ifTable. The following output shows the interface indexes required for this example:

    {primary:node0}
    user@host> show snmp mib walk ifDescr | grep reth0

    Now, search for the reth0 index in the ifStackStatus table. In the following sample output, reth0 index 503 is the higher-layer index, and indexes 522 and 552 are the lower-layer indexes, representing interfaces ge-5/1/1.0 and ge-11/1/1.0, respectively.

    {primary:node0}
    user@host> show snmp mib walk ifStackStatus | grep 503
    {primary:node0}
    user@host> show snmp mib walk ifDescr | grep 522
    {primary:node0}
    user@host> show snmp mib walk ifDescr | grep 552
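    Put together, the three walks above might return lines of the following shape (the index values come from this example; the exact formatting of walk output varies by Junos OS release):

    ```
    ifStackStatus.503.522 = 1      reth0 (503) stacked over ge-5/1/1.0 (522)
    ifStackStatus.503.552 = 1      reth0 (503) stacked over ge-11/1/1.0 (552)
    ifDescr.522 = ge-5/1/1.0
    ifDescr.552 = ge-11/1/1.0
    ```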

Using the Control Plane

The control plane software, which operates in active/backup mode, is an integral part of Junos OS that is active on the primary node of a cluster. It achieves redundancy by communicating state, configuration, and other information to the inactive Routing Engine on the secondary node. If the primary Routing Engine fails, the secondary one is ready to assume control. The following methods can be used to discover control port information.

Using the Junos OS XML Management Protocol or NETCONF XML Management Protocol

Use the get-configuration remote procedure call (RPC) to get the control port configuration as shown in the following sample output.

XML RPC for Redundant Group Configuration
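On platforms that require explicit control ports, the configuration returned by the RPC would include stanzas like the following sketch (the FPC and port numbers are assumptions for illustration; on some SRX Series devices the control ports are fixed and need no configuration):

```
set chassis cluster control-ports fpc 0 port 0
set chassis cluster control-ports fpc 12 port 0
```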

Using the Data Plane

The data plane software, which operates in active/active mode, manages flow processing and session state redundancy, and processes transit traffic. All packets belonging to a particular session are processed on the same node to ensure that the same security treatment is applied to them. The system identifies the node on which a session is active and forwards its packets to that node for processing.

The data link is referred to as the fabric interface. The cluster's Packet Forwarding Engines use it to transmit transit traffic and to synchronize the data plane software's dynamic runtime state. When the system creates the fabric interface, the software assigns it an internally derived IP address to use for packet transmission. The fabric is a physical connection between the two nodes of a cluster, formed by connecting a pair of Ethernet interfaces back-to-back (one from each node). The following methods can be used to determine the data plane interfaces.

Using the Junos OS XML Management Protocol or NETCONF XML Management Protocol

Use the get-chassis-cluster-data-plane-interfaces remote procedure call (RPC) to get the data plane interfaces as shown in the following sample output.

XML RPC for Cluster Data Plane Interface Details
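The request itself takes no arguments and can be sketched as:

```xml
<rpc>
  <get-chassis-cluster-data-plane-interfaces/>
</rpc>
```

The reply lists the fabric (fab) interfaces and the child links that form them.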

Using SNMP

The ifTable MIB table reports the fabric (fab) interfaces and their child link interfaces. However, the relationship between the underlying interfaces and the fabric interfaces cannot be determined through SNMP.
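For completeness, the back-to-back fabric pairing described in this section is configured by naming one member interface per node, as in the following sketch (the interface names are assumptions for illustration):

```
set interfaces fab0 fabric-options member-interfaces ge-0/0/2
set interfaces fab1 fabric-options member-interfaces ge-12/0/2
```

By convention, fab0 is the fabric interface on node 0 and fab1 is the fabric interface on node 1.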

Provisioning Chassis Cluster Nodes

Use the NETCONF XML management protocol to configure and provision SRX Series devices and, in general, devices running Junos OS. We recommend using configuration groups to configure SRX Series chassis clusters: use global groups for all configuration that is common between the nodes.
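For example, node-specific settings such as host names and management addresses are commonly placed in node0 and node1 groups and applied with the special "${node}" apply-group, which resolves to the local node's group at commit time (the host names and addresses below are illustrative):

```
set groups node0 system host-name srx-cluster-node0
set groups node0 interfaces fxp0 unit 0 family inet address 192.0.2.1/24
set groups node1 system host-name srx-cluster-node1
set groups node1 interfaces fxp0 unit 0 family inet address 192.0.2.2/24
set apply-groups "${node}"
```

The group names must be exactly node0 and node1 for the "${node}" variable to resolve.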

Junos OS commit scripts can be used to customize the configuration as desired.

Junos OS commit scripts:

  • Run at commit time

  • Inspect the incoming configuration

  • Perform actions including:

    • Failing the commit (self-defense)

    • Modifying the configuration (self-correcting)

Commit scripts can:

  • Generate custom error/warning/syslog messages

  • Make changes or corrections to the configuration

Commit scripts give you better control over how your devices are configured, enabling you to enforce:

  • Your design rules

  • Your implementation details

  • 100 percent of your design standards
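As a sketch of the self-defense case, the following SLAX commit script fails the commit when no chassis cluster redundancy groups are configured (the rule enforced here is an example of a design standard, not a required one):

```
version 1.0;

ns junos = "http://xml.juniper.net/junos/*/junos";
ns xnm = "http://xml.juniper.net/xnm/1.1/xnm";
ns jcs = "http://xml.juniper.net/junos/commit-scripts/1.0";

import "../import/junos.xsl";

match configuration {
    /* Fail the commit if no redundancy groups are present */
    if (not(chassis/cluster/redundancy-group)) {
        <xnm:error> {
            <message> "chassis cluster redundancy groups must be configured";
        }
    }
}
```

Emitting xnm:error blocks the commit; emitting xnm:warning instead would let the commit proceed while logging the message.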