Configuring a vSRX Chassis Cluster in Junos OS

 

Chassis Cluster Overview

A chassis cluster groups two vSRX instances of the same type into a single cluster to provide network node redundancy. Both devices must run the same Junos OS release. You connect the control virtual interfaces on the respective nodes to form a control plane that synchronizes the configuration and Junos OS kernel state. The control link (a virtual network or vSwitch) enables the redundancy of interfaces and services. Similarly, you connect the fabric virtual interfaces on the respective nodes to form a unified data plane. The fabric link (a virtual network or vSwitch) carries cross-node flow processing and session redundancy traffic.

The control plane software operates in active/passive mode. When configured as a chassis cluster, one node acts as the primary device and the other as the secondary device to ensure stateful failover of processes and services in the event of a system or hardware failure on the primary device. If the primary device fails, the secondary device takes over processing of control plane traffic.

Note

If you configure a chassis cluster on vSRX nodes across two physical hosts, disable IGMP snooping on the bridge to which each host's physical interface used by the control vNICs belongs. This ensures that both nodes in the chassis cluster receive the control link heartbeats.
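On a Linux KVM host, for example, IGMP snooping can be disabled per bridge through sysfs. This is a sketch only; the bridge name br0 is an assumption, so substitute the bridge your control vNICs actually attach to:

```shell
# Assumption: the control vNICs on this host attach to a Linux
# bridge named br0 (replace with your bridge name).

# Disable IGMP snooping for the current boot (0 = disabled):
echo 0 > /sys/class/net/br0/bridge/multicast_snooping

# Verify the setting:
cat /sys/class/net/br0/bridge/multicast_snooping
```

Repeat on each physical host that carries a control vNIC, and persist the setting through your distribution's network configuration so it survives reboots.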

The chassis cluster data plane operates in active/active mode. In a chassis cluster, the data plane updates session information as traffic traverses either device, and it transmits information between the nodes over the fabric link to guarantee that established sessions are not dropped when a failover occurs. In active/active mode, traffic can enter the cluster on one node and exit from the other node.

Chassis cluster functionality includes:

  • Resilient system architecture, with a single active control plane for the entire cluster and multiple Packet Forwarding Engines. This architecture presents a single device view of the cluster.

  • Synchronization of configuration and dynamic runtime states between nodes within a cluster.

  • Monitoring of physical interfaces, and failover if the failure parameters cross a configured threshold.

  • Support for generic routing encapsulation (GRE) and IP-over-IP (IP-IP) tunnels used to route encapsulated IPv4 or IPv6 traffic by means of two internal interfaces, gr-0/0/0 and ip-0/0/0, respectively. Junos OS creates these interfaces at system startup and uses these interfaces only for processing GRE and IP-IP tunnels.

At any given instant, a cluster node can be in one of the following states: hold, primary, secondary-hold, secondary, ineligible, or disabled. Multiple event types, such as interface monitoring, Services Processing Unit (SPU) monitoring, failures, and manual failovers, can trigger a state transition.
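You can check which state each node currently holds with the following operational-mode command; its output lists each redundancy group along with the priority and current state (primary, secondary, and so on) of each node:

```
show chassis cluster status
```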

Prerequisites

Ensure that your vSRX instances comply with the following prerequisites before you enable chassis clustering:

  • Use show version in Junos OS to ensure that both vSRX instances have the same software version.

  • Use show system license in Junos OS to ensure that both vSRX instances have the same licenses installed.

Enabling Chassis Cluster Formation

You create two vSRX instances to form a chassis cluster, and then you set the cluster ID and node ID on each instance to join the cluster. When a vSRX VM joins a cluster, it becomes a node of that cluster. With the exception of unique node settings and management IP addresses, nodes in a cluster share the same configuration.

You can deploy up to 255 chassis clusters in a Layer 2 domain. Clusters and nodes are identified in the following ways:

  • The cluster ID (a number from 1 to 255) identifies the cluster.

  • The node ID (a number from 0 to 1) identifies the cluster node.

On SRX Series devices, the cluster ID and node ID are written into EEPROM. The vSRX VM instead stores the IDs in boot/loader.conf and reads them at startup to initialize the chassis cluster.

The chassis cluster formation commands for node 0 and node 1 are as follows:

  • On vSRX node 0:
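From operational mode, a single command sets the cluster ID and node ID and reboots the node. The cluster ID of 1 shown here is illustrative; use any value from 1 through 255:

```
set chassis cluster cluster-id 1 node 0 reboot
```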

  • On vSRX node 1:
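From operational mode on the second instance, using the same illustrative cluster ID of 1 but node ID 1:

```
set chassis cluster cluster-id 1 node 1 reboot
```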

    Note

    Use the same cluster ID number for each node in the cluster.

Note

The vSRX interface naming and mapping to vNICs changes when you enable chassis clustering.

After the reboot, on node 0, configure the fabric (data) ports of the cluster, which are used to pass real-time objects (RTOs):
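A minimal sketch of the fabric configuration follows. The member interfaces shown (ge-0/0/0 on node 0 and ge-7/0/0 on node 1) are assumptions; the correct names depend on how your vNICs are mapped after clustering is enabled:

```
set interfaces fab0 fabric-options member-interfaces ge-0/0/0
set interfaces fab1 fabric-options member-interfaces ge-7/0/0
```

Commit the configuration on node 0; it is synchronized to node 1 over the control link.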

Chassis Cluster Quick Setup with J-Web

To configure chassis cluster from J-Web:

  1. Enter the vSRX node 0 interface IP address in a Web browser.
  2. Enter the vSRX username and password, and click Log In. The J-Web dashboard appears.
  3. Click Configuration Wizards>Chassis Cluster in the left panel. The Chassis Cluster Setup wizard appears. Follow the steps in the setup wizard to configure the cluster ID and the two nodes in the cluster, and to verify connectivity.

    Note

    Use the built-in Help icon in J-Web for further details on the Chassis Cluster Setup wizard.

Manually Configuring a Chassis Cluster with J-Web

You can use the J-Web interface to configure the primary node 0 vSRX instance in the cluster. Once you have set the cluster and node IDs and rebooted each vSRX, the following configuration is automatically synchronized to the secondary node 1 vSRX instance.

Select Configure>Chassis Cluster>Cluster Configuration. The Chassis Cluster configuration page appears.

Table 1 explains the contents of the HA Cluster Settings tab.

Table 2 explains how to edit the Node Settings tab.

Table 3 explains how to add or edit the HA Cluster Interfaces table.

Table 4 explains how to add or edit the HA Cluster Redundancy Groups table.

Table 1: Chassis Cluster Configuration Page

Field

Function

Node Settings

Node ID

Displays the node ID.

Cluster ID

Displays the cluster ID configured for the node.

Host Name

Displays the name of the node.

Backup Router

Displays the router used as a gateway while the Routing Engine is in secondary state for redundancy-group 0 in a chassis cluster.

Management Interface

Displays the management interface of the node.

IP Address

Displays the management IP address of the node.

Status

Displays the state of the redundancy group.

  • Primary–Redundancy group is active.

  • Secondary–Redundancy group is passive.

Chassis Cluster>HA Cluster Settings>Interfaces

Name

Displays the physical interface name.

Member Interfaces/IP Address

Displays the member interface name or IP address configured for an interface.

Redundancy Group

Displays the redundancy group.

Chassis Cluster>HA Cluster Settings>Redundancy Group

Group

Displays the redundancy group identification number.

Preempt

Displays the selected preempt option.

  • True–Mastership can be preempted based on priority.

  • False–Mastership cannot be preempted based on priority.

Gratuitous ARP Count

Displays the number of gratuitous Address Resolution Protocol (ARP) requests that a newly elected primary device in a chassis cluster sends out to announce its presence to the other network devices.

Node Priority

Displays the assigned priority for the redundancy group on that node. The eligible node with the highest priority is elected as primary for the redundant group.

Table 2: Edit Node Setting Configuration Details

Field

Function

Action

Node Settings

Host Name

Specifies the name of the host.

Enter the name of the host.

Backup Router

Specifies the device used as a gateway while the Routing Engine is in the secondary state for redundancy-group 0 in a chassis cluster.

Enter the IP address of the backup router.

Destination

IP

Adds the destination address.

Click Add.

Delete

Deletes the destination address.

Click Delete.

Interface

Interface

Specifies the interfaces available for the router.

Note: Allows you to add and edit two interfaces for each fabric link.

Select an option.

IP

Specifies the interface IP address.

Enter the interface IP address.

Add

Adds the interface.

Click Add.

Delete

Deletes the interface.

Click Delete.

Table 3: Add HA Cluster Interface Configuration Details

Field

Function

Action

Fabric Link > Fabric Link 0 (fab0)

Interface

Specifies fabric link 0.

Enter the interface IP for fabric link 0.

Add

Adds fabric interface 0.

Click Add.

Delete

Deletes fabric interface 0.

Click Delete.

Fabric Link > Fabric Link 1 (fab1)

Interface

Specifies fabric link 1.

Enter the interface IP for fabric link 1.

Add

Adds fabric interface 1.

Click Add.

Delete

Deletes fabric interface 1.

Click Delete.

Redundant Ethernet

Interface

Specifies a logical interface consisting of two physical Ethernet interfaces, one on each chassis.

Enter the logical interface.

IP

Specifies a redundant Ethernet IP address.

Enter a redundant Ethernet IP address.

Redundancy Group

Specifies the redundancy group ID number in the chassis cluster.

Select a redundancy group from the list.

Add

Adds a redundant Ethernet IP address.

Click Add.

Delete

Deletes a redundant Ethernet IP address.

Click Delete.

Table 4: Add Redundancy Groups Configuration Details

Field

Function

Action

Redundancy Group

Specifies the redundancy group name.

Enter the redundancy group name.

Allow preemption of primaryship

Allows a node with a better priority to initiate a failover for a redundancy group.

Note: By default, this feature is disabled. When disabled, a node with a better priority does not initiate a redundancy group failover (unless some other factor, such as faulty network connectivity identified for monitored interfaces, causes a failover).

Gratuitous ARP Count

Specifies the number of gratuitous Address Resolution Protocol requests that a newly elected primary sends out on the active redundant Ethernet interface child links to notify network devices of a change in mastership on the redundant Ethernet interface links.

Enter a value from 1 to 16. The default is 4.

node0 priority

Specifies the priority value of node0 for a redundancy group.

Enter the priority value for node 0.

node1 priority

Specifies the priority value of node1 for a redundancy group.

Enter the priority value for node 1.

Interface Monitor

  

Interface

Specifies the interface to be monitored by the redundancy group.

Select an interface from the list.

Weight

Specifies the weight for the interface to be monitored.

Enter a value from 1 to 125.

Add

Adds interfaces to be monitored by the redundancy group along with their respective weights.

Click Add.

Delete

Deletes interfaces to be monitored by the redundancy group along with their respective weights.

Select the interface from the configured list and click Delete.

IP Monitoring

Weight

Specifies the global weight for IP monitoring.

Enter a value from 0 to 255.

Threshold

Specifies the global threshold for IP monitoring.

Enter a value from 0 to 255.

Retry Count

Specifies the number of retries needed to declare reachability failure.

Enter a value from 5 to 15.

Retry Interval

Specifies the time interval in seconds between retries.

Enter a value from 1 to 30.

IPV4 Addresses to Be Monitored

IP

Specifies the IPv4 addresses to be monitored for reachability.

Enter the IPv4 addresses.

Weight

Specifies the weight for the redundancy group interface to be monitored.

Enter the weight.

Interface

Specifies the logical interface through which to monitor this IP address.

Enter the logical interface name.

Secondary IP address

Specifies the source address for monitoring packets on a secondary link.

Enter the secondary IP address.

Add

Adds the IPv4 address to be monitored.

Click Add.

Delete

Deletes the IPv4 address to be monitored.

Select the IPv4 address from the list and click Delete.