Multinode High Availability Support for vSRX Virtual Firewall Instances

Multinode High Availability addresses high availability requirements for private and public cloud deployments by offering interchassis resiliency.

We support Multinode High Availability on Juniper Networks vSRX Virtual Firewall for private cloud (Kernel-based virtual machine [KVM] and VMware ESXi) and public cloud (AWS) deployments.

For private cloud deployments, you configure Multinode High Availability on vSRX Virtual Firewall instances using the same method as on physical SRX Series Firewalls.

To configure Multinode High Availability in VMware ESXi and KVM:

To configure Multinode High Availability in public cloud deployments:

ICL Encryption and Flexible Datapath Failure Detection Support

In Multinode High Availability deployments in private clouds (KVM and VMware ESXi), the vSRX Virtual Firewall supports ICL Encryption and Flexible Datapath Failure Detection.

  • ICL Encryption uses IPsec protocols to secure synchronization messages between high-availability nodes, ensuring data privacy. See Example: Configure Multinode High Availability in a Layer 3 Network for configuration details.
  • Flexible Datapath Failure Detection offers path monitoring with granular control through weighted features, supporting IP, Bidirectional Forwarding Detection (BFD), and interface monitoring. See Flexible Path Monitoring for more details.

Understanding Multinode High Availability Dual Path Interchassis Link (ICL)

Multinode High Availability (MNHA) supports dual path interchassis link (ICL) over aggregated Ethernet (AE) and loopback interfaces. This enhancement enables efficient traffic distribution and improved HA reliability across public clouds such as AWS, Azure, and Google Cloud Platform (GCP), as well as private clouds using KVM and VMware. When you configure the ICL, note the following:

  • In public cloud settings such as AWS, Azure, and GCP, loopback interfaces are preferred because of constraints on AE interfaces (see the sketch after this list).
  • In private cloud environments using KVM and VMware, you can use AE interfaces to establish dual-path ICLs; these configurations are flexible and support a variety of network interface cards.
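
For example, in a public cloud deployment you can designate a loopback interface as the ICL. The following is a minimal sketch with an illustrative address; adapt the peer ID, loopback unit, and addressing to your deployment:

user@host# set interfaces lo0 unit 0 family inet address 10.200.0.1/32
user@host# set chassis high-availability peer-id 2 interface lo0.0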

Benefits of Dual Path ICL in MNHA

  • Enhances compatibility with public cloud environments like AWS, Azure, and GCP, enabling efficient Layer 3 high availability deployment.

  • Improves traffic distribution and load balancing through the use of AE and loopback interfaces, ensuring optimal performance in both public and private cloud setups.

Support for AE Interface as Dual-Path ICL in Private Cloud Environments

In a private cloud setup using KVM or VMware, you can configure an aggregated Ethernet (AE) interface as a dual-path ICL. This is supported with:

  • Virtio NICs for KVM
  • VMXNET3 NICs for VMware

For setups using SR-IOV NICs (such as Intel I40E, E810, or Mellanox MLX5), the AE interface must support virtual MAC (vMAC) functionality.

Each member (child) interface of the AE bundle must be a Gigabit Ethernet (GE) interface, and the member interfaces must reside on different physical functions (PFs), which typically means they are on different line cards.

Efficient Distribution of ICL Traffic

Managing Traffic in vSRX with Aggregated Ethernet (AE): Using LACP and Hypervisor Bridge Configuration

In a vSRX setup, ensuring reliable and efficient traffic flow is crucial, especially when you use aggregated Ethernet (AE) interfaces. If one child interface within the AE group goes down or fails, traffic can be lost. To mitigate this risk, you can use the Link Aggregation Control Protocol (LACP) or configure the child interfaces to share the same bridge in the hypervisor.

Using LACP on AE Interface

LACP dynamically manages the link aggregation, ensuring that all child interfaces work in coordination. With LACP configured, interface failures are handled automatically and traffic flow is maintained without manual intervention. Configure LACP on an AE interface with the following settings (see the sample after this list):

  • Active: This setting enables the active LACP mode, where the vSRX actively sends LACP packets to the peer to form and maintain the LACP link.
  • Periodic Fast: This setting reduces the LACP timeout interval, allowing quicker detection of link failures and faster response to maintain traffic flow.
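
For example, the following commands (shown for an illustrative bundle named ae0) enable active LACP with the fast periodic interval:

user@host# set interfaces ae0 aggregated-ether-options lacp active
user@host# set interfaces ae0 aggregated-ether-options lacp periodic fast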

Configuring Child Interfaces on the Same Bridge

When LACP is difficult to support or implement, another approach is to connect all child interfaces in the AE group to the same bridge within the hypervisor. This configuration helps maintain consistent connectivity and avoid traffic loss when one interface goes down.

Below is an example of XML configuration for two child interfaces (ge-0/0/0 and ge-0/0/3) using the same bridge (bridge-vsrx-mnha-icl) in a KVM hypervisor:
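
(The following is a representative sketch of the relevant interface entries in the libvirt domain XML; the position of each entry among the VM's interface definitions determines which vSRX port, ge-0/0/0 and ge-0/0/3 in this example, it maps to.)

<!-- vNIC intended to map to ge-0/0/0 -->
<interface type='bridge'>
  <source bridge='bridge-vsrx-mnha-icl'/>
  <model type='virtio'/>
</interface>
<!-- vNIC intended to map to ge-0/0/3 -->
<interface type='bridge'>
  <source bridge='bridge-vsrx-mnha-icl'/>
  <model type='virtio'/>
</interface>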

In this configuration, both interfaces are connected to the bridge-vsrx-mnha-icl bridge, ensuring that they share the same network segment within the hypervisor. This setup helps maintain traffic flow even if one of the interfaces experiences issues.

Using AE Interfaces in MNHA Configuration

The following configuration snippet sets up an MNHA environment using an AE interface:
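
(The snippet below is a representative sketch; the chassis IDs and IP addresses are illustrative and should be adapted to your deployment.)

user@host# set chassis aggregated-devices ethernet device-count 1
user@host# set chassis high-availability local-id 1 local-ip 192.168.33.1
user@host# set chassis high-availability peer-id 2 peer-ip 192.168.33.2
user@host# set chassis high-availability peer-id 2 interface ae0
user@host# set interfaces ge-0/0/0 gigether-options 802.3ad ae0
user@host# set interfaces ge-0/0/1 gigether-options 802.3ad ae0
user@host# set interfaces ae0 aggregated-ether-options lacp active
user@host# set interfaces ae0 aggregated-ether-options lacp periodic fast
user@host# set interfaces ae0 unit 0 family inet address 192.168.33.1/24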

This sample shows how to configure local and peer chassis IDs and IP addresses for HA communication, associate physical interfaces with an aggregated Ethernet interface (ae0) using LACP, and assign an IP address to the logical unit of the aggregated interface. In this example:

  • The AE interface (ae0) is used as ICL between the nodes.
  • Physical interfaces ge-0/0/0 and ge-0/0/1 are bundled into ae0 using 802.3ad link aggregation.

Balanced ICL Traffic Distribution Across PFEs

We have improved the system to ensure balanced ICL traffic distribution across all PFE processing units on the receiving side in an MNHA setup.

In an MNHA setup, traffic across the ICL is managed efficiently to ensure high performance. Outgoing traffic is automatically distributed across multiple flow processing units, a built-in feature of the MNHA architecture. To improve incoming traffic handling, we use a five-tuple hashing method on the ICL port. This evenly spreads traffic across all Packet Forwarding Engine (PFE) processing units, resulting in better load balancing and overall network efficiency.

The command set chassis high-availability peer-id <id_num> interface <interface-name> supports GE, AE, and loopback interfaces, enabling high-availability configurations tailored to your specific deployment needs.

On the sending side, the Packet Forwarding Engine does not determine the port used for ICL traffic solely from this configuration; the choice also depends on IP and route lookup results. On SRX Series Firewalls, priority queues are enabled by default on all ports to help manage this traffic, especially for aggregated Ethernet and loopback interfaces. However, vSRX 3.0 cannot use this approach.

For receive-side distribution in vSRX 3.0, a new CLI configuration is required to specify the ICL ports on which five-tuple hashing is enabled. This is done with the command:

user@host# set security forwarding-options receive-side-scaling nic-rss hash five-tuple ports [port_ids]

Example:
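
Using the syntax above with NIC port IDs 0 and 1:

user@host# set security forwarding-options receive-side-scaling nic-rss hash five-tuple ports [ 0 1 ]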

In this configuration, ports 0 and 1 are configured to use five-tuple hashing for receive-side scaling. This ensures that incoming traffic is efficiently distributed based on source and destination IP addresses, ports, and protocol.