Resolved Issues

The following table describes issues that are resolved when you upgrade from IDP OS 5.0r2 to IDP OS 5.1r1. If you are upgrading from IDP OS 5.0r1 or IDP OS 4.1r4, read the release notes for the subsequent releases to learn about the issues that were resolved in them.

Table 4: Resolved Issues



Previously Unsupported Functionality


IDP OS 4.1r4 did not support peer port modulation for 1-gigabit fiber I/O modules. Beginning with IDP OS 5.0r2 (and continuing in IDP OS 5.1r1), peer port modulation is supported for 1-gigabit fiber I/O modules.

Unexpected Behavior


Resolved an issue where the SYN Protector rulebase had failed to reset the destination server connections when configured in Passive mode.


Resolved an issue that had caused speed and duplex settings to be auto-negotiated, even if auto-negotiation was not configured. The issue had occurred on IDP200, IDP6000, and IDP1100.


Resolved an issue where the close client action had not functioned when processing VLAN tagged MPLS traffic.


Resolved an issue where VLAN Q-in-Q traffic had not been distributed among the IDP engines (IDP8200).


Resolved an issue with the Radius PAM module that had caused Radius authentication for SSH to fail.


Resolved an issue with idpLogReader debug logs where the IP address bits had been displayed in reverse order. NSM displayed the logs correctly.
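The effect of this kind of byte-order reversal can be sketched as follows. This is purely illustrative and is not the idpLogReader code; it assumes the reversal occurred at the octet level of the dotted-quad address.

```python
def reversed_octets(ip: str) -> str:
    """Illustrative only: show how reversing byte order garbles a
    dotted-quad IPv4 address, e.g. 10.1.2.3 is rendered as 3.2.1.10."""
    return ".".join(reversed(ip.split(".")))
```

For example, an address stored as 10.1.2.3 would be misrendered as 3.2.1.10 in the debug logs, while NSM, which converted the value correctly, displayed 10.1.2.3.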

Configuration Issues


Resolved an issue where ACM had rejected Radius username formats containing a period (for example, john.doe).
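The corrected validation can be sketched as a character-class check. The pattern below is hypothetical, not the actual ACM rule; the fix amounts to accepting the period as a valid username character.

```python
import re

# Hypothetical pattern: letters, digits, underscore, period, and hyphen.
# Adding "." to the accepted set allows names such as john.doe.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_.-]+$")

def is_valid_username(name: str) -> bool:
    """Return True if the username matches the accepted character set."""
    return USERNAME_RE.fullmatch(name) is not None
```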

Detection Accuracy


Resolved an issue where APE rules could behave unexpectedly. If you configured a rule to drop Telnet traffic, for example, all traffic running over the standard Telnet port (port 23) would be dropped.
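The distinction can be sketched as follows (illustrative only, not IDP code): matching on the well-known port drops unrelated traffic that happens to use port 23, whereas matching on the identified application affects only Telnet sessions.

```python
TELNET_PORT = 23

def port_based_drop(dst_port: int, app: str) -> bool:
    # Unintended behavior: anything on port 23 is dropped,
    # even if it is not actually Telnet.
    return dst_port == TELNET_PORT

def app_based_drop(dst_port: int, app: str) -> bool:
    # Intended APE behavior: drop only traffic identified as
    # the Telnet application, regardless of port.
    return app == "TELNET"
```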


Improved accuracy detecting attacks in highly fragmented HTTP traffic.

Logging / Packet Capture


All formats: Corrected log messages generated when an IDP rulebase rule matches an ICMP or UDP attack and the rule action is set to close client and server. Because resets cannot close connectionless ICMP or UDP traffic, the action actually taken is drop connection. In previous releases, the log reported the action specified in the rule ("close client and server"). In this release, the log reports the action actually taken by the IDP Series device ("drop connection").


Packet capture: Previously, you could not use tcpdump to capture packets in both directions. IDP OS Release 5.1 includes a new utility, jnetTcpdump, that you can use to capture packets in both directions.


Changed threshold: When traffic through the IDP Series device exceeds session capacity, the device generates an event log and drops the traffic (if the constant for logging implicit drops is enabled). To avoid generating many logs for the same condition, the IDP Series device does not log additional instances until a threshold is reached. In this release, we have changed the delay threshold from 1024 instances to 100.
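The suppression behavior can be modeled as follows. This is an illustrative sketch, not the IDP implementation; it assumes one log is emitted for the first drop and then one per threshold's worth of further drops.

```python
class DropLogSuppressor:
    """Illustrative model: log the first over-capacity drop, then
    suppress further logs until `threshold` more instances occur."""

    def __init__(self, threshold: int = 100):  # was 1024 in earlier releases
        self.threshold = threshold
        self.count = 0
        self.logs = []

    def record_drop(self) -> None:
        self.count += 1
        if self.count == 1 or self.count % self.threshold == 0:
            self.logs.append(f"{self.count} sessions dropped over capacity")
```

With the threshold lowered from 1024 to 100, an operator sees an updated log roughly ten times as often for the same sustained drop rate.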


Syslog: NIC state events reported in syslog messages had not indicated that the virtual router had returned to "Normal mode".


Syslog: Changes in link status (link down or link up) had not been reported in syslog messages.


NSM Profiler: Updates to Network Profile tab logs had lagged behind Protocol Profile tab logs. These two views are now updated simultaneously.


NSM Log Viewer: Resolved an issue where variable data had not been displayed in the NSM Log Viewer collection.


SNMP: The SNMP trap jnxIdpSensorFreeDiskSpace had been generated when the disk space exceeded the threshold, but a downtrap had not been generated when it fell back below the threshold.


SNMP: In the IDP OS 5.0r2 release notes, we reported that we had changed the polling interval for SNMP traps and SNMP polling to five minutes to decrease latency and CPU utilization on single core platforms (IDP600, IDP200, IDP75), where the IDP engine, JNET driver, and control plane processes share the same CPU.

SNMP reporting has been improved in IDP OS Release 5.1. For single core platforms, CPU utilization is reported at 5-second, 1-minute, and 5-minute intervals. Traps are sent for the 1-minute and 5-minute intervals.


Resolved an issue where the packet reassembly module had generated an inordinate number of logs for the same issue, leading to disk usage concerns.

CPU Utilization


Resolved an issue where we had reported incorrect CPU utilization for single core platforms (IDP600, IDP200, IDP75). For single core platforms, you can now use the Linux top command to query CPU utilization. This value is reported to SNMP but not to NSM. For multicore platforms, you use the scio idp-cpu-utilization command and not the Linux top command.


Resolved an issue where, if the IDP OS services were restarted while the device was processing traffic, the scio idp-cpu-utilization query returned 0 (an incorrect value).


Resolved an issue on IDP8200 where IDP engine CPU load had been incorrectly reported as 0%.



Resolved an issue where the autorecovery feature had failed to restart an IDP engine in a hung state.


We have changed the timeout for a TCP session marked for flow bypass to 60 seconds (was 5 seconds).


Resolved an issue where the autorecovery process incorrectly considered the IDP engine to be in a hung state and consequently terminated it. This had occurred during an "All Attacks" policy push.


Resolved an issue found in stress testing where continuously pushing policies with APE rules would eventually result in policy push errors.


Resolved an issue where running the scio cpu-utilization command on single core platforms caused a drop in throughput and an increase in latency.


Resolved an issue where there had been a decrease in free packets after the auto-recovery process restarted the IDP engine (IDP1100, IDP600, IDP200 only).


Improved code so that a core dump is generated more reliably when the IDP engine crashes. However, under low memory conditions, a core dump might not be generated.


Resolved an issue where memory had not been freed after successive policy pushes.


Resolved an issue that had caused a kernel panic after reboot.


Resolved an issue where the command to disable protocol decoding (scio const -d set PROTOCOLNAME 0) had resulted in the device dropping traffic rather than passing it through as intended.


Resolved an issue that had killed the autorecovery process before recovery was completed.


Resolved a memory issue that had caused a detector engine update to fail when the security policy was large (IDP75).


Changed implementation to avoid a memory leak issue that had been reported in 5.0r2.


Resolved an issue where time updates from an NTP server had stopped working after installation of a patch release.



Improved UDP throughput.



Improved UDP latency.



Improved latency on single core platforms (IDP600, IDP200, IDP75).