
New and Changed Features

This section describes the new features and enhancements to existing features in Junos OS Release 15.1X49-D10 for the SRX Series.

Hardware Features

Security

  • Enhanced support for Switch Control Board and Modular Port Concentrators—Starting with Junos OS Release 15.1X49-D10, the SRX5400, SRX5600, and SRX5800 Services Gateways support the third-generation Switch Control Board SRX5K-SCB3 (SCB3) and the Modular Port Concentrators (IOC3): SRX5K-MPC3-40G10G and SRX5K-MPC3-100G10G. These cards provide superior carrier-grade network performance and chassis cluster features, and greater throughput, interface density, Application Layer performance, and scalability. The SCB3 provides higher-capacity traffic support, greater link speeds and fabric capacity, and improved services. The IOC3s enable faster processing and provide line rates of up to 240 Gbps per slot.

    [See Switch Control Board SRX5K-SCB3, SRX5K-MPC3-40G10G, and SRX5K-MPC3-100G10G.]

Software Features

Flow-Based and Packet-Based Processing

  • Express Path (formerly known as services offloading) on the SRX5000 line IOC3—Starting with Junos OS Release 15.1X49-D10, the SRX5K-MPC3-100G10G (IOC3) and the SRX5K-MPC3-40G10G (IOC3) support Express Path.

    Express Path is a mechanism for processing fast-path packets in the Trio chipset instead of in the SPU. This method reduces the long packet-processing latency that arises when packets are forwarded from network processors to SPUs for processing and back to IOCs for transmission.

    To achieve the best latency result, both the ingress port and egress port of a traffic flow need to be on the same XM chip of the IOC3.

    Note: XL chip flow table lookup occurs only in ingress. Egress datapath packet handling is the same as supported in the previous release.

    Note: The services offloading feature is renamed to Express Path starting in Junos OS Release 12.3X48-D10. Currently, the documents still use the term services offloading.

    [See Flow-Based and Packet-Based Processing Feature Guide for Security Devices PDF Document.]

  • Fragmentation packet ordering using session cache—Starting with Junos OS Release 15.1X49-D10, the IOCs (SRX5K-MPC [IOC2], SRX5K-MPC3-100G10G [IOC3], and SRX5K-MPC3-40G10G [IOC3]) on SRX5400, SRX5600, and SRX5800 devices support fragmentation packet ordering using the session cache.

    A session can consist of both normal and fragmented packets. With hash-based distribution, 5-tuple keys are used for normal packets and 3-tuple keys for fragmented packets, so packets of the same session can be distributed to different SPUs. All the session packets are forwarded to the SPU. Due to latency, the SPU might not guarantee packet ordering. The session cache on the IOCs ensures fragmentation ordering.

    A session cache entry is allocated for normal packets of the session and the 5-tuple key is used to find the fragmented packet. When the first fragmented packet is received, the IOC updates the session cache entry. The IOC forwards all subsequent packets to the SPU to ensure fragmentation packet ordering.

    To enable session cache on the IOC, you need to run the set chassis fpc <fpc-slot> np-cache command.

    [See Flow-Based and Packet-Based Processing Feature Guide for Security Devices PDF Document.]
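    The enabling step above can be sketched as a short configuration sequence; the FPC slot number (2) is illustrative:

    ```
    [edit]
    user@host# set chassis fpc 2 np-cache
    user@host# commit
    ```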

  • Hash-based forwarding on the SRX5K-MPC3-40G10G (IOC3) and SRX5K-MPC3-100G10G (IOC3)—Starting with Junos OS Release 15.1X49-D10, hash-based datapath packet forwarding is supported on the IOC3 to interconnect with all existing IOC and SPC cards for SRX5400, SRX5600, and SRX5800 devices.

    The IOC3 XL chip uses a hash-based method to distribute ingress traffic to a pool of SPUs by default. Selection of hash keys depends on application protocols.

    On a high-end SRX Series device, a packet goes through a series of events involving different components from ingress to egress processing. With the datapath packet forwarding feature, you can obtain quick delivery of I/O traffic over the SRX5000 line of devices.

    [See Flow-Based and Packet-Based Processing Feature Guide for Security Devices PDF Document.]

  • IPsec VPN session affinity—Starting with Junos OS Release 15.1X49-D10, the IOCs (SRX5K-MPC [IOC2], SRX5K-MPC3-100G10G [IOC3], and SRX5K-MPC3-40G10G [IOC3]) on SRX5400, SRX5600, and SRX5800 devices support IPsec session affinity for IPsec tunnel-based traffic.

    The flow module creates sessions for IPsec tunnel-based traffic on the tunnel-anchored SPU and installs the corresponding session cache entries on the IOCs. The IOCs can then redirect packets directly to the same SPU, minimizing packet-forwarding overhead.

    Note: To enable session cache on the IOC, you need to run the set chassis fpc <fpc-slot> np-cache command.

    To enable IPsec VPN affinity, use the set security flow load-distribution session-affinity ipsec command.

    [See VPN Feature Guide for Security Devices PDF Document.]
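    A minimal configuration sketch combining the two commands above; the FPC slot number (2) is illustrative. As the note above indicates, IPsec session affinity relies on the session cache being enabled on the IOC:

    ```
    [edit]
    user@host# set chassis fpc 2 np-cache
    user@host# set security flow load-distribution session-affinity ipsec
    user@host# commit
    ```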

  • Session cache and selective installation of session cache—Starting with Junos OS Release 15.1X49-D10, the IOCs (SRX5K-MPC [IOC2], SRX5K-MPC3-100G10G [IOC3], and SRX5K-MPC3-40G10G [IOC3]) on SRX5400, SRX5600, and SRX5800 devices support session cache and selective installation of session cache.

    Session cache is used to cache a conversation between the network processor (NP) and the SPU on an IOC. A conversation could be a session, GTP-U tunnel traffic, IPsec VPN tunnel traffic, and so on. A conversation has two session cache entries, one for incoming traffic and the other for reverse traffic.

    The session cache table is extended to support the NP sessions as well. Express Path (formerly known as services offloading) traffic and the NP traffic share the same session cache table on the IOCs. The session cache on the IOC leverages the Express Path functionality.

    To optimize system resources and conserve session entries on IOCs, certain priority mechanisms are applied to both flow and the IOCs to selectively install the session cache.

    To enable session cache on the IOC you need to run the set chassis fpc <fpc-slot> np-cache command.

    [See Flow-Based and Packet-Based Processing Feature Guide for Security Devices PDF Document.]

Interfaces and Chassis

  • SRX5K-MPC3-40G10G (IOC3) and SRX5K-MPC3-100G10G (IOC3)—Starting with Junos OS Release 15.1X49-D10, the SRX5K-MPC3-40G10G (IOC3) and the SRX5K-MPC3-100G10G (IOC3) are introduced for SRX5400, SRX5600, and SRX5800 devices.

    These IOC3s provide the powerful SRX5000 line devices with superior networking and carrier-grade chassis cluster features, interface density (scalable and upgradable), and high performance. Both IOC3s support up to an aggregated 240-Gbps IMIX throughput per slot, latency of less than 10 microseconds, and higher Layer 7 (L7) performance.

    The two types of IOC3 MPCs, which have different built-in MICs, are the 24x10GE + 6x40GE MPC and the 2x100GE + 4x10GE MPC.

    The IOC3s do not support the request chassis pic fpc-slot <fpc-slot> pic-slot <pic-slot> <offline | online> CLI command to take a PIC offline or bring it online.

    Not all four PICs on the 24x10GE + 6x40GE MPC can be powered on at the same time; a maximum of two PICs can be powered on simultaneously.

    Use the set chassis fpc <slot> pic <pic> power off command to power off PICs and thereby select which PICs remain powered on.

    Note: Fabric bandwidth increasing mode is not supported on the IOC3.

    Warning:

    On SRX5400, SRX5600, and SRX5800 devices in a chassis cluster, when the PICs containing fabric links on the SRX5K-MPC3-40G10G (IOC3) are powered off to turn on alternate PICs, always ensure that:

    • The new fabric links are configured on the PICs that are turned on. At least one fabric link must be present and online to ensure minimal RTO loss.
    • The chassis cluster is in active-backup mode to ensure minimal RTO loss, once alternate links are brought online.
    • If no alternate fabric links are configured on the PICs that are turned on, RTO synchronous communication between the two nodes stops and the chassis cluster session state is not backed up, because the fabric link is missing. In this scenario, the output of the show chassis cluster interfaces command indicates a bad chassis cluster state.

    [See Flow-Based and Packet-Based Processing Feature Guide for Security Devices PDF Document.]
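    For example, to leave only the first two PICs powered on a 24x10GE + 6x40GE IOC3, you might power off the other two; the FPC slot (3) and PIC numbers are illustrative:

    ```
    [edit]
    user@host# set chassis fpc 3 pic 2 power off
    user@host# set chassis fpc 3 pic 3 power off
    user@host# commit
    ```

    In a chassis cluster, verify afterward with the show chassis cluster interfaces command that at least one fabric link remains online.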

  • Switch Control Board SRX5K-SCB3 (SCB3) with enhanced midplanes—Starting with Junos OS Release 15.1X49-D10, the SRX5K-SCB3 (SCB3) with enhanced midplanes is introduced for SRX5400, SRX5600, and SRX5800 devices.

    The SCB3 provides the powerful SRX5000 line devices with superior networking and carrier grade chassis cluster features, interface density (scalable and upgradable), and high performance. The IOC3s support up to an aggregated 240-Gbps IMIX throughput per slot. To support this high throughput per slot, the SCB3 and enhanced midplanes are required to guarantee full-bandwidth connection.

    The SCB3 works only with the SRX5K-RE-1800X4 (RE2), the SRX5K-MPC (IOC2), the SRX5K-SPC-4-15-320 (SPC2), the SRX5K-MPC3-40G10G (IOC3), and the SRX5K-MPC3-100G10G (IOC3), and with both the standard midplanes and the enhanced midplanes.

    The SCB3 does not support mixed Routing Engines and SCBs, in-service software upgrade (ISSU), in-service hardware upgrade (ISHU), or fabric bandwidth increasing mode.

    To request that an SRX5K-SCB3 go online or offline, use the request chassis cb (offline | online) slot slot-number CLI command.

    [See Flow-Based and Packet-Based Processing Feature Guide for Security Devices PDF Document.]
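    The request above can be sketched as operational-mode commands; the slot number (0) is illustrative:

    ```
    user@host> request chassis cb offline slot 0
    user@host> request chassis cb online slot 0
    ```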

Layer 2 Features

  • Layer 2 next-generation CLI—Starting with Junos OS Release 15.1X49-D10, only Layer 2 next-generation CLI configurations are supported on SRX5400, SRX5600, and SRX5800 devices. The legacy Layer 2 transparent mode (Ethernet switching) configuration statements and operational commands are not supported.

    Use the SRX L2 Conversion Tool to convert the legacy Layer 2 CLI configurations to Layer 2 next-generation CLI configurations. The SRX L2 Conversion Tool is available for registered customers to help them become familiar with the Layer 2 next-generation CLI and to quickly convert existing switch-based CLI configurations to transparent mode CLI configurations.

    The SRX L2 Conversion Tool is available at https://www.juniper.net/support/downloads/?p=srx5400#sw.

    For more information, refer to the Knowledge Base article at http://kb.juniper.net/InfoCenter/index?page=content&id=KB30445.

    [See Layer 2 Bridging and Transparent Mode for Security Devices PDF Document.]

Modified: 2016-12-21