
    Third-Party High Availability Support and Limitations

    The following sections describe IDP Series support for high availability deployments:

    Third-Party High Availability Overview

    IDP OS Release 5.1 supports high availability in network designs where you have deployed redundant network paths and use the failure detection features of a firewall, router, or switch to manage the cutover from the primary path to the backup path in cases of failure. In these deployments, you implement:

    • State synchronization between the primary and the standby IDP Series devices.
    • Link state signaling. The IDP Series device must signal failure so that the failure can be reliably detected by the third-party failure detection mechanism.

    The following sections provide details:

    State Synchronization

    You establish state synchronization between the primary and the standby IDP Series device by connecting the IDP Series HA interfaces (eth1) with a crossover cable. In addition, you must use the Appliance Configuration Manager (ACM) to enable the Third-Party HA setting. You can use the sctop command-line utility to monitor state and flow synchronization.

    Link State Signaling

    You enable an IDP Series link state signaling mechanism so that it responds as expected to the third-party device link checking mechanism. You have the following choices:

    • Layer 2 bypass for Bridge Protocol Data Unit (BPDU) packets. In deployments that use spanning tree protocol (STP), the IDP Series device must be able to pass BPDU packets. Use ACM to enable Layer 2 bypass so that BPDU packets are passed through and not dropped. When the IDP engine is in a healthy state, it passes through the BPDU packets. When the IDP engine is shut down or in a failed state, it cannot pass BPDU packets. An STP switch deployment detects this and chooses an alternate path.
    • Interface signaling. In deployments that use other link status detection methods, you can enable the interface signaling setting (ha_interface_signal = 1) so that all IDP Series interfaces are brought down when there is a problem with the device. The interface signaling script monitors the state of traffic interfaces (eth2, eth3, and so on) and IDP engines (idpengine0, idpengine1, and so on). In case of interface failure, the script brings down all peer interfaces so that the third-party link detection mechanism can properly detect the failure. In case of IDP engine failure, the autorecovery feature attempts to restart the IDP engine. If the IDP engine cannot be restarted after six attempts, the autorecovery process runs an idp.sh stop command. The interface signaling script then brings down all traffic interfaces.

      After bringing down the traffic interfaces, the interface signaling script sleeps for 30 seconds to avoid link flapping issues. After 30 seconds, the script checks the state of the interface that had encountered the failure or the state of the IDP engine. When the underlying problem has been resolved, the interface signaling script brings up the peer interfaces.

      Even when the interface signaling setting is disabled (setting ha_interface_signal = 0), the HA feature monitors the status of IDP engines. If an IDP engine fails, any remaining IDP engines are signaled to disregard the Layer 2 bypass setting and drop Layer 2 traffic, including BPDUs.

    • Peer port modulation (PPM). If your IDP Series deployment uses only one pair of traffic interfaces, you can use PPM to monitor and propagate link state for the interface pair. In contrast to interface signaling, which propagates a failed state to all traffic interfaces, the PPM daemon propagates link state only to the paired interface on the other side (for example, eth2 to eth3). If you enable both PPM and interface signaling, the PPM daemon is shut down to avoid conflicts.

    Note: Due to a hardware limitation with 10-gigabit fiber interface modules, interface signaling and PPM are not supported for the IDP8200 10-gigabit fiber I/O module.
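    The difference between interface signaling and PPM propagation can be sketched as follows. The interface names follow the examples above; the function names and the pairing logic are illustrative assumptions, not actual IDP OS code.

```python
# Illustrative sketch (not IDP OS code) of the two propagation policies:
# interface signaling brings down every traffic interface on any failure,
# while PPM propagates link state only to the paired interface.

TRAFFIC_INTERFACES = ["eth2", "eth3", "eth4", "eth5"]

def peer_of(iface):
    """Return the assumed paired interface (eth2<->eth3, eth4<->eth5)."""
    idx = TRAFFIC_INTERFACES.index(iface)
    return TRAFFIC_INTERFACES[idx ^ 1]

def interface_signaling_down(failed):
    """Interface signaling: one failure takes down all traffic interfaces."""
    return set(TRAFFIC_INTERFACES)

def ppm_down(failed):
    """PPM: only the failed interface and its peer go down."""
    return {failed, peer_of(failed)}
```

    This is why PPM is suitable only for a single pair of traffic interfaces: with multiple pairs, a failure on one pair would leave the other pairs up and traffic could still be routed to a partially failed device.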

    Third-Party High Availability Requirements

    Table 1 summarizes deployment component requirements. We support deployment of active-passive failover pairs; we do not support active-active deployments.

    Table 1: Third-Party HA Requirements

    Component: IDP Series devices

    Requirements:

    • Hardware – same model.

    • Software – same version.

    • Same configuration and same security policy.

    • Autorecovery enabled (default). HA can function if autorecovery is disabled, but we recommend that you leave it enabled so that easily recoverable conditions do not result in unnecessary failover operations.

    • Traffic interfaces: virtual routers (interface pairs) must be set to transparent mode. We have not tested and do not support HA state synchronization when virtual routers are configured in sniffer mode or when the device is deployed in mixed mode. You must enable one virtual router named vr0. When you enable HA with ACM, the HA interface (eth1) is added to vr0. The eth1 interface is not involved in traffic forwarding; it must belong to vr0 as a system requirement.

      Note: The HA feature monitors interface status, so unplugging and plugging in interface cables is significant. Use the CLI hasignal.sh restart command to reinitialize HA interface monitoring any time you plug in or unplug a traffic interface.

    • Simulation mode: simulation mode is an operational mode, not a deployment mode, so it does not prevent you from enabling HA or deploying the devices as an active-passive HA cluster. Note, however, that a device deployed in simulation mode is unlikely to encounter failure.

    • Layer 2 bypass enabled.

    • NIC bypass set to NICs off. This setting is enforced by ACM: if you enable HA, you cannot enable NIC bypass.

    Component: HA interface

    Requirement: The eth1 interfaces must be connected directly with a crossover cable, so the devices must be physically close.

    Component: Third-party HA mechanism

    Requirement: One of the following:
    • Juniper Networks ScreenOS firewalls, running NetScreen Redundancy Protocol (NSRP)*
    • Juniper Networks EX Series switches, running a spanning tree protocol: STP, MSTP, RSTP, or VSTP**
    • Other vendors’ firewalls, running Virtual Router Redundancy Protocol (VRRP)
    • Other vendors’ switches, running STP***
    • Routers running Hot Standby Router Protocol (HSRP)

    _________
    * IDP OS 5.1 was tested with Juniper Networks ISG1000 running ScreenOS version 5.4.0R3.
    ** IDP OS 5.1 was tested with Juniper Networks EX4200 running Junos OS 10.2R1.
    *** IDP OS 5.1 was tested with Cisco Catalyst C3500XL running version 12.0.

    State Sync Limitations

    When an IDP Series device receives network traffic, it sets up a flow of related packets so that it can inspect the network transaction for anomalies and attack signatures. When state synchronization between two IDP Series devices is enabled, the primary device sends TCP flow information to the standby IDP Series device whenever it sets up a new flow. As processing continues, the primary device sends application identification results to the standby device, populating the standby device's application identification cache.
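    The synchronization described above can be modeled as follows. The class and method names are hypothetical illustrations of the documented behavior, not actual IDP OS internals.

```python
# Toy model (not IDP OS code): the primary pushes each new TCP flow record
# to the standby over the HA link (eth1), then forwards application
# identification results to populate the standby's app-ID cache.

class StandbyDevice:
    def __init__(self):
        self.flow_table = {}   # synced TCP flow state
        self.app_cache = {}    # synced application identification results

    def receive_flow(self, key, state):
        self.flow_table[key] = state

    def receive_app_id(self, key, app):
        self.app_cache[key] = app

class PrimaryDevice:
    def __init__(self, standby):
        self.standby = standby
        self.flow_table = {}

    def setup_flow(self, key):
        self.flow_table[key] = "ESTABLISHED"
        self.standby.receive_flow(key, "ESTABLISHED")  # sync on flow setup

    def identify_app(self, key, app):
        # As processing continues, app-ID results are forwarded as well.
        self.standby.receive_app_id(key, app)
```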

    In the event of failure along the primary path, the switch or firewall cuts over to the redundant path, and the standby IDP Series device begins receiving traffic. Table 2 describes limitations to state synchronization for the immediate “failover traffic” and for new sessions.

    Table 2: IDP Series HA Failover Cluster: Processing by the Standby Device

    Category: Failover Traffic

    The initial load processed by the standby device might include retransmitted and midstream packets, referred to here as “failover traffic.” Because the standby device has accumulated state sync data, it attempts to correlate the failover traffic packets with the synchronized session data. When processing failover traffic, the standby device can match and enforce APE rules, but the following limitations are expected:

    • Nested applications and custom applications. Due to a current limitation, the application cache results for nested applications and custom applications are not synchronized from the primary to the standby device (PR 550567). Consequently, when the standby device performs APE rulebase processing for failover traffic, nested applications and custom applications are not identified using the application identification feature. Instead, they are identified by service and standard port.
    • Intrusion detection. Neither flow-based nor packet-based intrusion detection is possible (PR 559087). IDP rulebase rules cannot be enforced on failover traffic. The packets are passed through, uninspected.
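    The limitations above can be summarized as a decision sketch. This is an illustration of the documented behavior under assumed names, not product code.

```python
# Illustration of standby processing for failover traffic: APE rules are
# still enforced, IDP inspection is skipped (PR 559087), and nested or
# custom applications fall back to service/port-based identification
# because their app cache results are not synced (PR 550567).

def identify_app(app_cache, flow_key, dst_port, nested_or_custom=False):
    if nested_or_custom:
        return f"port-{dst_port}"          # app cache not synced for these
    return app_cache.get(flow_key, f"port-{dst_port}")

def process_failover_packet(app_cache, flow_key, dst_port,
                            nested_or_custom=False):
    return {
        "app": identify_app(app_cache, flow_key, dst_port, nested_or_custom),
        "ape_rules_enforced": True,        # APE rulebase still applies
        "idp_inspected": False,            # packets pass through uninspected
    }
```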

    Category: New Traffic

    When the standby device receives new sessions, it creates new flows and processes them no differently from the primary device. However, be aware of the following observations:

    • The IP Action table (such as IP block actions) is not synchronized. The standby device enforces its own IP Action table. Ultimately, this does not affect the security stance of the device. Instead of blocking the source IP immediately, the IDP Series device blocks the source IP after rule matching.
    • User session table. If you have implemented user-role-based policies, note that the user session table is not synchronized. Make sure you configure each device to receive user role information from the IC Series UAC device.
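    The IP Action limitation above can be sketched as follows; the function and parameter names are illustrative assumptions, not IDP OS code.

```python
# Sketch (not product code) of the IP Action limitation: the standby's
# IP Action table starts empty after failover, so a source is blocked
# only once the standby itself matches a rule; subsequent packets from
# that source are then dropped immediately.

def handle_packet(src_ip, ip_action_table, rule_matches):
    if src_ip in ip_action_table:
        return "dropped"                   # already blocked: drop immediately
    if rule_matches(src_ip):
        ip_action_table.add(src_ip)        # install IP block after rule match
        return "dropped"
    return "forwarded"
```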

    Published: 2011-02-08