
Configuring a vSRX Chassis Cluster in Junos OS


Chassis Cluster Overview

A chassis cluster groups a pair of vSRX instances of the same kind into a cluster to provide network node redundancy. The vSRX instances in a chassis cluster must run the same Junos OS release, and each instance becomes a node in the chassis cluster. You connect the control virtual interfaces on the respective nodes to form a control plane that synchronizes the configuration and Junos OS kernel state on both nodes in the cluster. The control link (a virtual network or vSwitch) facilitates the redundancy of interfaces and services. Similarly, you connect the fabric virtual interfaces on the respective nodes to form a unified data plane. The fabric link (a virtual network or vSwitch) enables cross-node flow processing and session redundancy.

The control plane software operates in active/passive mode. When configured as a chassis cluster, one node acts as the primary and the other as the secondary to ensure stateful failover of processes and services in the event of a system or hardware failure on the primary. If the primary fails, the secondary takes over processing of control plane traffic.

Note

If you configure a chassis cluster across two hosts, disable igmp-snooping on the bridge that each host physical interface belongs to and that the control virtual NICs (vNICs) use. This ensures that the control link heartbeat is received by both nodes in the chassis cluster.
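On a KVM host, for example, IGMP snooping can be disabled on a Linux bridge through sysfs. This is a minimal sketch; the bridge name br0 is a placeholder for whichever bridge the control vNICs attach to, and the commands must be run as root on both hosts:

```shell
# Disable IGMP snooping on the Linux bridge used by the control link.
# "br0" is a placeholder; substitute the bridge your control vNICs attach to.
echo 0 > /sys/class/net/br0/bridge/multicast_snooping

# Verify the setting (0 = snooping disabled).
cat /sys/class/net/br0/bridge/multicast_snooping
```

Repeat on the second host so that both nodes receive the control link heartbeat.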

The chassis cluster data plane operates in active/active mode. In a chassis cluster, the data plane updates session information as traffic traverses either node, and it transmits information between the nodes over the fabric link to guarantee that established sessions are not dropped when a failover occurs. In active/active mode, traffic can enter the cluster on one node and exit from the other node.

Chassis cluster functionality includes:

  • Resilient system architecture, with a single active control plane for the entire cluster and multiple Packet Forwarding Engines. This architecture presents a single device view of the cluster.

  • Synchronization of configuration and dynamic runtime states between nodes within a cluster.

  • Monitoring of physical interfaces, and failover if the failure parameters cross a configured threshold.

  • Support for generic routing encapsulation (GRE) and IP-over-IP (IP-IP) tunnels used to route encapsulated IPv4 or IPv6 traffic by means of two internal interfaces, gr-0/0/0 and ip-0/0/0, respectively. Junos OS creates these interfaces at system startup and uses these interfaces only for processing GRE and IP-IP tunnels.

At any given instant, a cluster node can be in one of the following states: hold, primary, secondary-hold, secondary, ineligible, or disabled. Multiple event types, such as interface monitoring, Services Processing Unit (SPU) monitoring, failures, and manual failovers, can trigger a state transition.

Enabling Chassis Cluster Formation

You create two vSRX instances to form a chassis cluster, and then you set the cluster ID and node ID on each instance to join the cluster. When a vSRX instance joins a cluster, it becomes a node of that cluster. With the exception of unique node settings and management IP addresses, nodes in a cluster share the same configuration.

You can deploy up to 255 chassis clusters in a Layer 2 domain. Clusters and nodes are identified in the following ways:

  • The cluster ID (a number from 1 to 255) identifies the cluster.

  • The node ID (a number from 0 to 1) identifies the cluster node.

On physical SRX Series devices, the cluster ID and node ID are generally written into EEPROM. The vSRX instance instead stores and reads these IDs from boot/loader.conf and uses them to initialize the chassis cluster during startup.

Prerequisites

Ensure that your vSRX instances comply with the following prerequisites before you enable chassis clustering:

  • You have committed a basic configuration to both vSRX instances that form the chassis cluster. See Configuring vSRX Using the CLI.

  • Both vSRX instances run the same software version; use the show version command in Junos OS to verify.

  • Both vSRX instances have the same licenses installed; use the show system license command in Junos OS to verify.

You must set the same chassis cluster ID on each vSRX node and reboot the vSRX VM to enable chassis cluster formation.

  1. In operational command mode, set the chassis cluster ID and node number on vSRX node 0.
  2. In operational command mode, set the chassis cluster ID and node number on vSRX node 1.
Note

The vSRX interface naming and mapping to vNICs changes when you enable chassis clustering. See Requirements for vSRX on KVM for a summary of interface names and mappings for a pair of vSRX VMs in a cluster (node 0 and node 1).
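As a sketch of steps 1 and 2 above, the operational-mode commands look like the following; cluster ID 1 is an example value, and the reboot option restarts each VM so that it comes up as a cluster node:

```shell
# On vSRX node 0, in operational mode (the device reboots to join the cluster):
set chassis cluster cluster-id 1 node 0 reboot

# On vSRX node 1, in operational mode:
set chassis cluster cluster-id 1 node 1 reboot
```

Both nodes must use the same cluster ID; only the node number differs.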

Chassis Cluster Quick Setup with J-Web

To configure chassis cluster from J-Web:

  1. Enter the vSRX node 0 interface IP address in a Web browser.
  2. Enter the vSRX username and password, and click Log In. The J-Web dashboard appears.
  3. Click Configuration Wizards > Cluster (HA) Setup from the left panel. The Chassis Cluster Setup Wizard appears. Follow the steps in the setup wizard to configure the cluster ID and the two nodes in the cluster, and to verify connectivity.

    Note

    Use the built-in Help icon in J-Web for further details on the Chassis Cluster Setup wizard.

    Note

    In Junos OS Release 18.1 and later, navigate to Configure > Device Settings > Cluster (HA) Setup to configure the chassis cluster setup.

  4. Configure the secondary node (node 1) by selecting the Yes, this is the secondary unit to be setup (Node 1) radio button.
  5. Click Next.
  6. Specify the settings for secondary node access: Enter password, Re-enter password, Node 0 FXP0 IP, and Node 1 FXP0 IP.
  7. Click Next.
  8. Select the secondary unit’s Control Port and Fabric Port.
  9. Click Next.
  10. (Optional) Select the Save a backup file before proceeding with shutdown check box to save a backup of the current configuration before the unit is reconfigured for the chassis cluster.
  11. Click Next.
  12. Click Shutdown and continue to connect to other unit.
  13. Click Refresh Browser.
  14. Configure the primary node (node 0) by selecting the No, this is the primary unit to be setup (Node 0) radio button to configure the primary unit and establish a chassis cluster configuration.
  15. Click Next.
  16. Specify the settings for primary node access: Enter password, Re-enter password, Node 0 FXP0 IP, and Node 1 FXP0 IP.
  17. Click Next to restart the primary unit.
  18. (Optional) Select Save a backup file before proceeding with shutdown to save a backup file of current settings before proceeding.
  19. Click Reboot and continue. After completing the reboot, power on the secondary unit to establish the chassis cluster connection.
  20. Log in to the device console and add a static route to enable J-Web access.
  21. Log in to J-Web and click Configuration Wizards > Cluster (HA) Setup from the left panel. The Chassis Cluster Setup Wizard appears.
  22. Click Next to connect to the primary unit.
  23. Configure the basic settings: DHCP Client, IP address, Default gateway, Member interface Node 0, and Member interface Node 1.
  24. Click Next to complete the chassis cluster configuration.
  25. Click Finish to exit the wizard. You can access the primary node using J-Web.
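After both nodes are up, you can confirm cluster formation from the Junos CLI. For example, the following operational-mode commands can be run on either node:

```shell
# Verify node states (primary/secondary) for each redundancy group:
show chassis cluster status

# Verify control and fabric link status:
show chassis cluster interfaces

# View heartbeat and fabric probe counters:
show chassis cluster statistics
```

In a healthy cluster, show chassis cluster status reports one node as primary and the other as secondary for each redundancy group.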

Manually Configuring a Chassis Cluster with J-Web

You can use the J-Web interface to configure the primary node 0 vSRX instance in the cluster. After you have set the cluster and node IDs and rebooted each vSRX, the configuration that you commit on node 0 is automatically synchronized to the secondary node 1 vSRX instance.

Select Configure > Chassis Cluster > Cluster Configuration. The Chassis Cluster configuration page appears.

Note

In Junos OS Release 18.1 and later, navigate to Configure > Device Settings > Cluster (HA) Setup to configure the HA cluster setup.

Table 1 explains the contents of the HA Cluster Settings tab.

Table 2 explains how to edit the Node Settings tab.

Table 3 explains how to add or edit the HA Cluster Interfaces table.

Table 4 explains how to add or edit the HA Cluster Redundancy Groups table.

Table 1: Chassis Cluster Configuration Page

Field

Function

Node Settings

Node ID

Displays the node ID.

Cluster ID

Displays the cluster ID configured for the node.

Host Name

Displays the name of the node.

Backup Router

Displays the router used as a gateway while the Routing Engine is in secondary state for redundancy-group 0 in a chassis cluster.

Management Interface

Displays the management interface of the node.

IP Address

Displays the management IP address of the node.

Status

Displays the state of the redundancy group.

  • Primary–Redundancy group is active.

  • Secondary–Redundancy group is passive.

Chassis Cluster > HA Cluster Settings > Interfaces

Name

Displays the physical interface name.

Member Interfaces/IP Address

Displays the member interface name or IP address configured for an interface.

Redundancy Group

Displays the redundancy group.

Chassis Cluster > HA Cluster Settings > Redundancy Group

Group

Displays the redundancy group identification number.

Preempt

Displays the selected preempt option.

  • True–Mastership can be preempted based on priority.

  • False–Mastership cannot be preempted based on priority.

Gratuitous ARP Count

Displays the number of gratuitous Address Resolution Protocol (ARP) requests that a newly elected primary device in a chassis cluster sends out to announce its presence to the other network devices.

Node Priority

Displays the assigned priority for the redundancy group on that node. The eligible node with the highest priority is elected as primary for the redundant group.

Table 2: Edit Node Setting Configuration Details

Field

Function

Action

Node Settings

Host Name

Specifies the name of the host.

Enter the name of the host.

Backup Router

Displays the device used as a gateway while the Routing Engine is in the secondary state for redundancy-group 0 in a chassis cluster.

Enter the IP address of the backup router.

Destination

IP

Adds the destination address.

Click Add.

Delete

Deletes the destination address.

Click Delete.

Interface

Interface

Specifies the interfaces available for the router.

Note: Allows you to add and edit two interfaces for each fabric link.

Select an option.

IP

Specifies the interface IP address.

Enter the interface IP address.

Add

Adds the interface.

Click Add.

Delete

Deletes the interface.

Click Delete.

Table 3: Add HA Cluster Interface Configuration Details

Field

Function

Action

Fabric Link > Fabric Link 0 (fab0)

Interface

Specifies fabric link 0.

Enter the interface IP for fabric link 0.

Add

Adds fabric interface 0.

Click Add.

Delete

Deletes fabric interface 0.

Click Delete.

Fabric Link > Fabric Link 1 (fab1)

Interface

Specifies fabric link 1.

Enter the interface IP for fabric link 1.

Add

Adds fabric interface 1.

Click Add.

Delete

Deletes fabric interface 1.

Click Delete.

Redundant Ethernet

Interface

Specifies a logical interface consisting of two physical Ethernet interfaces, one on each chassis.

Enter the logical interface.

IP

Specifies a redundant Ethernet IP address.

Enter a redundant Ethernet IP address.

Redundancy Group

Specifies the redundancy group ID number in the chassis cluster.

Select a redundancy group from the list.

Add

Adds a redundant Ethernet IP address.

Click Add.

Delete

Deletes a redundant Ethernet IP address.

Click Delete.

Table 4: Add Redundancy Groups Configuration Details

Field

Function

Action

Redundancy Group

Specifies the redundancy group name.

Enter the redundancy group name.

Allow preemption of primaryship

Allows a node with a better priority to initiate a failover for a redundancy group.

Note: By default, this feature is disabled. When disabled, a node with a better priority does not initiate a redundancy group failover (unless some other factor, such as faulty network connectivity identified for monitored interfaces, causes a failover).

Gratuitous ARP Count

Specifies the number of gratuitous Address Resolution Protocol requests that a newly elected primary sends out on the active redundant Ethernet interface child links to notify network devices of a change in mastership on the redundant Ethernet interface links.

Enter a value from 1 to 16. The default is 4.

node0 priority

Specifies the priority value of node0 for a redundancy group.

Enter the priority value for node 0.

node1 priority

Specifies the priority value of node1 for a redundancy group.

Enter the priority value for node 1.

Interface Monitor

  

Interface

Specifies the interface to be monitored by the redundancy group.

Select an interface from the list.

Weight

Specifies the weight for the interface to be monitored.

Enter a value from 1 to 125.

Add

Adds interfaces to be monitored by the redundancy group along with their respective weights.

Click Add.

Delete

Deletes interfaces to be monitored by the redundancy group along with their respective weights.

Select the interface from the configured list and click Delete.

IP Monitoring

Weight

Specifies the global weight for IP monitoring.

Enter a value from 0 to 255.

Threshold

Specifies the global threshold for IP monitoring.

Enter a value from 0 to 255.

Retry Count

Specifies the number of retries needed to declare reachability failure.

Enter a value from 5 to 15.

Retry Interval

Specifies the time interval in seconds between retries.

Enter a value from 1 to 30.

IPV4 Addresses to Be Monitored

IP

Specifies the IPv4 addresses to be monitored for reachability.

Enter the IPv4 addresses.

Weight

Specifies the weight for the redundancy group interface to be monitored.

Enter the weight.

Interface

Specifies the logical interface through which to monitor this IP address.

Enter the logical interface name.

Secondary IP address

Specifies the source address for monitoring packets on a secondary link.

Enter the secondary IP address.

Add

Adds the IPv4 address to be monitored.

Click Add.

Delete

Deletes the IPv4 address to be monitored.

Select the IPv4 address from the list and click Delete.
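The J-Web fields in Tables 1 through 4 map to chassis cluster configuration statements in the Junos CLI. The following is a minimal sketch of the equivalent configuration-mode commands; redundancy group 1, interface ge-0/0/2, reth0, and all priority, weight, and address values are example assumptions, not required values:

```shell
# Configuration-mode equivalents of the J-Web settings above (example values).
set chassis cluster reth-count 2
set chassis cluster redundancy-group 1 node 0 priority 200
set chassis cluster redundancy-group 1 node 1 priority 100
set chassis cluster redundancy-group 1 preempt
set chassis cluster redundancy-group 1 gratuitous-arp-count 4
set chassis cluster redundancy-group 1 interface-monitor ge-0/0/2 weight 125

# Bind a physical interface into a redundant Ethernet (reth) interface:
set interfaces ge-0/0/2 gigether-options redundant-parent reth0
set interfaces reth0 redundant-ether-options redundancy-group 1
set interfaces reth0 unit 0 family inet address 192.0.2.1/24
commit
```

Because node 0 has the higher priority and preempt is enabled in this sketch, node 0 becomes primary for redundancy group 1 and reclaims primaryship after recovering from a failover.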

Related Documentation