
    IDP Series Network Interfaces Overview

    In Figure 1, eth0, eth1, eth2, eth3, and so forth are the network interfaces.

    Figure 1: IDP Series Network Interfaces


    The following topics explain the features of these network interfaces:

    Management Interface (eth0)

    In Figure 1, eth0 is a dedicated management interface used for communication with Network and Security Manager (NSM). The agent process is a control plane process. It manages communication between the IDP Series device and NSM. The agent process handles the following functionality:

    • Device configuration—You set part of the active configuration with the Appliance Configuration Manager (ACM), part with the CLI, and part with NSM. The agent process pushes changes you make from NSM to the IDP Series device.
    • Security policy—You configure security policies with NSM and push a single policy to the IDP Series device, where it is installed and used by the IDP process engines. The installed policy determines which traffic the IDP engine inspects, what to look for, and what actions to take.
    • Detector engine—The IDP detector engine is a code base that contains the application signatures and protocol decoder definitions used by the IDP engine in packet analysis. J-Security Center periodically updates the IDP detector engine. In Figure 1, note the process flow: first, you download updates from J-Security Center to NSM; then, you push updates from NSM to IDP Series devices.
    • Attack database—The attack database includes the attack objects used by the IDP rulebase to match attack signatures and protocol anomalies. J-Security Center updates predefined attack object definitions as often as necessary. As with detector engine updates, you download them from J-Security Center to NSM and then push them from NSM to the IDP Series device.
    • Logging—The IDP process engines generate logs and packet captures related to security policy and application policy enforcement rules. The Profiler generates profiling and application volume logs. The agent process sends these logs to NSM so you can use NSM monitoring features to monitor security events and application usage.

    High Availability Interface (eth1)

    In Figure 1, eth1 is a dedicated high availability (HA) interface used for sync-state communication with a cluster peer in a high availability deployment.

    Traffic Interfaces

    In Figure 1, eth2, eth3, and so forth are the network interfaces you connect to the network devices that route traffic in your network.

    The IDP Series implements the following abstract objects to manage network interfaces:

    • Virtual circuit—A virtual circuit corresponds to a physical interface. For example, physical interface eth2 is a virtual circuit. You use ACM to configure speed, duplex, and optional interface alias settings for each interface.
    • Virtual router—A virtual router contains a logical pair of virtual circuits. For example, virtual router vr0 contains eth2 and eth3. In transparent mode, traffic arrives in one interface and is forwarded through the other. You use ACM to configure the deployment mode (sniffer or transparent) and bypass options (internal, external, or off) for each virtual router. You can use the command-line interface to display information and status for each virtual router, including Address Resolution Protocol (ARP) and media access control (MAC) tables.
    • Subscriber—A single subscriber named s0 contains all virtual routers. The subscriber maintains process and status information for all traffic that flows through the device. You can use the command-line interface to view information and status maintained by subscriber s0. We test and support only configurations that use the default subscriber. The sketch following this list shows how these objects nest.
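    The way these objects nest can be pictured with a few lines of code. The following Python sketch is purely illustrative: the class names and attributes are invented to mirror the descriptions above and are not part of the IDP OS or of any Juniper API.

        # Illustrative model of the IDP interface object hierarchy.
        # All names are hypothetical; they mirror the abstractions
        # described in the list above, not an actual IDP Series API.

        class VirtualCircuit:
            """Corresponds to one physical traffic interface."""
            def __init__(self, name, speed="auto", duplex="auto", alias=None):
                self.name = name      # e.g. "eth2"
                self.speed = speed    # speed/duplex are set with ACM
                self.duplex = duplex
                self.alias = alias    # optional interface alias

        class VirtualRouter:
            """A logical pair of virtual circuits."""
            def __init__(self, name, pair, mode="transparent", bypass="internal"):
                self.name = name      # e.g. "vr0"
                self.pair = pair      # (inbound, outbound) virtual circuits
                self.mode = mode      # "sniffer" or "transparent"
                self.bypass = bypass  # "internal", "external", or "off"

        class Subscriber:
            """The single default subscriber s0 containing all virtual routers."""
            def __init__(self, routers):
                self.name = "s0"
                self.routers = routers

        # Example: vr0 pairs eth2 and eth3, as in Figure 1.
        vr0 = VirtualRouter("vr0", (VirtualCircuit("eth2"), VirtualCircuit("eth3")))
        s0 = Subscriber([vr0])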

    Internal Bypass

    The Internal Bypass feature is intended for deployments where the network security policy prioritizes availability over security. In the event of failure or graceful shutdown, traffic bypasses the IDP processing engine and is passed through the IDP Series device uninspected.

    The Internal Bypass feature operates through a timing mechanism. When enabled, the timer on traffic interfaces counts down to a bypass trigger point. When the IDP Series appliance is turned on and available, it sends a reset signal to the traffic interface timer so that it does not reach the bypass trigger point. If the IDP OS encounters failure, then it fails to send the reset signal, the timer counts down to the trigger point, and the traffic interfaces enter a bypass state. If the IDP Series appliance is shut down gracefully, the traffic interfaces immediately enter bypass.
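    In effect, this timing mechanism is a watchdog. The following Python sketch outlines the logic only; the trigger interval and every name in it are assumptions made for illustration, not values taken from the IDP OS.

        import threading

        # Hypothetical sketch of the Internal Bypass watchdog logic.
        # The 2-second trigger point and all names are invented.

        class BypassWatchdog:
            def __init__(self, trigger_seconds=2.0):
                self.trigger_seconds = trigger_seconds
                self.bypassed = False
                self._arm()

            def _arm(self):
                # Start counting down toward the bypass trigger point.
                self._timer = threading.Timer(self.trigger_seconds, self._enter_bypass)
                self._timer.start()

            def reset(self):
                # Sent periodically by a healthy IDP OS; keeps the timer
                # from ever reaching the trigger point.
                self._timer.cancel()
                self.bypassed = False
                self._arm()

            def _enter_bypass(self):
                # No reset arrived: the IDP OS has failed, so the traffic
                # interfaces join to pass packets through uninspected.
                self.bypassed = True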

    With copper NICs, the bypass mechanism joins the interfaces mechanically to form a circuit that bypasses IDP processing. Packets traverse the IDP Series device as if the path from eth2 (receiving interface) to eth3 (transmitting interface) were a crossover cable. No packet inspection or processing occurs.

    With fiber NICs, the bypass mechanism uses optical relays instead of copper relays. During normal operations, the optical relays send light to the built-in optical transceivers. When bypass is triggered, the relays flip state, and the light signal is redirected to optically connect the two external ports.

    Figure 2 compares the data path when Internal Bypass is enabled but not activated with the data path when Internal Bypass is activated.

    Figure 2: Internal Bypass


    When the IDP operating system resumes healthy operations, it sends a reset signal to the traffic interfaces, and the interfaces resume normal operation.

    External Bypass

    The External Bypass setting supports third-party external bypass units. Deployments with external bypass units depend on the external unit to check the status of the IDP Series appliance and determine whether to send packets through or around the IDP Series device. Most external bypass units test for availability by sending heartbeat packets through the device. If the packets reach the expected destination, the external bypass unit allows traffic to continue through the IDP Series appliance. If the packets fail to reach the expected destination, the external bypass unit concludes that the IDP Series device is unavailable and forwards traffic around it.

    The IDP Series supports external bypass solutions by allowing the heartbeat traffic to pass through the device regardless of the Layer 2 Bypass setting. In other words, if you disable Layer 2 Bypass and enable External Bypass, most Layer 2 traffic is dropped, but the heartbeat traffic used in the external bypass deployment is passed through.

    Figure 3 compares the data path when External Bypass is enabled but not activated with the data path when External Bypass is activated. A sketch of the bypass unit's decision logic follows the figure.

    Figure 3: External Bypass

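    The decision logic of a typical external bypass unit can be sketched as follows. This Python outline is hypothetical: the function names and the timeout value are assumptions, not the behavior of any particular bypass product.

        import time

        # Hypothetical sketch of an external bypass unit's heartbeat check.
        # send_heartbeat() injects a probe on one side of the IDP device;
        # heartbeat_received() reports whether it appeared on the far side.

        def choose_path(send_heartbeat, heartbeat_received, timeout=1.0):
            send_heartbeat()
            deadline = time.time() + timeout
            while time.time() < deadline:
                if heartbeat_received():
                    return "through-idp"   # device is passing traffic
                time.sleep(0.05)
            return "around-idp"            # heartbeat lost: route around the device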

    Interface Signaling

    The interface signaling feature supports high-availability deployments where there are redundant network paths, and a firewall, router, or switch chooses the active path. The interface signaling script monitors the state of the following IDP Series components:

    • Traffic interfaces (eth2, eth3, and so on). In case of interface failure, the script brings down all peer interfaces so that a third-party link detection mechanism can properly detect failure.
    • IDP engines (idpengine0, idpengine1, and so on). In case of IDP engine failure, the auto-recovery feature attempts to restart the IDP engine. If the IDP engine cannot be restarted after six attempts, the auto-recovery process runs an idp.sh stop command. The interface signaling script then brings down all traffic interfaces.

    After bringing down the peer interfaces, the interface signaling script sleeps for 30 seconds to avoid link flapping. It then checks the state of the IDP engine or interface that encountered the failure. When the underlying problem has been resolved and the interface is up, the script brings the peer interfaces back up.
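    Taken together, the recovery cycle looks roughly like the following Python sketch. Only the six restart attempts, the idp.sh stop command, and the 30-second sleep come from the behavior described above; the function names are invented, and the callables stand in for IDP OS internals.

        import subprocess
        import time

        MAX_RESTARTS = 6          # auto-recovery restart attempts (documented)
        FLAP_GUARD_SECONDS = 30   # documented sleep to avoid link flapping

        # Hypothetical sketch of the interface signaling cycle for an
        # IDP engine failure; all three callables are invented here.

        def handle_engine_failure(restart_engine, set_peer_links, failure_cleared):
            for _ in range(MAX_RESTARTS):
                if restart_engine():              # auto-recovery attempt
                    return                        # engine came back; done
            subprocess.run(["idp.sh", "stop"])    # auto-recovery gives up
            set_peer_links(up=False)              # let third-party link
                                                  # detection see the failure
            while True:
                time.sleep(FLAP_GUARD_SECONDS)
                if failure_cleared():             # problem resolved, link up?
                    set_peer_links(up=True)       # restore peer interfaces
                    return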

    Peer Port Modulation

    The peer port modulation (PPM) feature supports deployments where routers monitor link state to make routing decisions. In these deployments, a router might be set to monitor link state on only one side of the IDP Series device. Suppose, for example, that the router monitors only the IDP inbound interface, and that the inbound interface remains up while the outbound interface goes down. The router watching the inbound link would detect an available link and forward traffic to the IDP Series device, and that traffic would be dropped at the point of failure: the outbound link. PPM addresses this by propagating a link loss state on one traffic interface to all interfaces in the same IDP virtual router.

    When PPM is enabled, a PPM daemon monitors the health of IDP traffic interfaces belonging to the same virtual router. If a traffic interface loses link, the PPM process turns off any associated network interfaces in the same virtual router so that other network devices detect that the virtual router is down and route around it. For example, assume you have enabled PPM and configured IDP virtual routers as shown in Figure 4.

    Figure 4: Peer Port Modulation


    Suppose there is a network problem and eth3 goes down. The PPM daemon detects this and turns off the other interface in vr0: eth2. The interfaces in vr1, vr2, and vr3 are unaffected. After you fix the problem with eth3, the PPM daemon detects this and turns eth2 back on.
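    In code terms, the reaction to the Figure 4 scenario might look like the following Python sketch. The interface assignments beyond vr0, and all names, are assumptions; the real PPM daemon is an IDP OS control plane process, not user code.

        # Conceptual sketch of peer port modulation. The interface
        # assignments for vr1-vr3 are assumed for this example.

        virtual_routers = {
            "vr0": ["eth2", "eth3"],
            "vr1": ["eth4", "eth5"],
            "vr2": ["eth6", "eth7"],
            "vr3": ["eth8", "eth9"],
        }

        def propagate_link_state(interface, link_up, set_link):
            """Mirror one interface's link state onto its peers in the
            same virtual router; other virtual routers are untouched."""
            for members in virtual_routers.values():
                if interface in members:
                    for peer in members:
                        if peer != interface:
                            set_link(peer, up=link_up)
                    return

        # eth3 loses link: the daemon turns off eth2, the other member of vr0.
        propagate_link_state("eth3", link_up=False,
                             set_link=lambda iface, up: print(iface, up))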

    Note: The PPM feature is independent of the bypass feature (NIC state setting). PPM is related to the status of the link, not the status of the IDP operating system. A link can be down even when the IDP operating system is healthy. Note, however, that PPM runs as a control plane process and operates only when the IDP Series device is turned on and the control plane is available. If the IDP operating system is unavailable, the PPM feature is also unavailable, regardless of the setting for the NIC state.

    Published: 2011-04-26