
show chassis hardware (View)

 

Syntax
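Based on the options described below, the command synopsis is approximately as follows (angle brackets mark optional elements; the exact form may vary by release):

```
show chassis hardware
    <clei-models | detail | extensive | models>
    <node (node-id | local | primary)>
```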

Release Information

Command introduced in Junos OS Release 9.2; the node option was added in the same release.

Description

Display chassis hardware information.

Options

  • clei-models—(Optional) Display Common Language Equipment Identifier Code (CLEI) barcode and model number for orderable field-replaceable units (FRUs).

  • detail | extensive—(Optional) Display the specified level of output.

  • models—(Optional) Display model numbers and part numbers for orderable FRUs.

  • node—(Optional) For chassis cluster configurations, display chassis hardware information on a specific node (device) in the cluster.

    • node-id—Identification number of the node. It can be 0 or 1.

    • local—Display information about the local node.

    • primary—Display information about the primary node.
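For example, to display hardware information for node 0 of a chassis cluster (the prompt is illustrative):

```
user@host> show chassis hardware node 0
```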

Required Privilege Level

view

Output Fields

Table 1 lists the output fields for the show chassis hardware command. Output fields are listed in the approximate order in which they appear.

Table 1: show chassis hardware Output Fields

Item—Chassis component. Information about the backplane; power supplies; fan trays; Routing Engine; each Physical Interface Module (PIM), reported as FPC and PIC; and each fan, blower, and impeller.

Version—Revision level of the chassis component.

Part Number—Part number for the chassis component.

Serial Number—Serial number of the chassis component. The serial number of the backplane is also the serial number of the device chassis. Use this serial number when you need to contact Juniper Networks Customer Support about the device chassis.

Assb ID or Assembly ID—Identification number that describes the FRU hardware.

FRU model number—Model number of the FRU hardware component.

CLEI code—Common Language Equipment Identifier code. This value is displayed only for hardware components that use ID EEPROM format v2; it is not displayed for components that use ID EEPROM format v1.

EEPROM Version—ID EEPROM version used by the hardware component: 0x01 (version 1) or 0x02 (version 2).

Description—Brief description of the hardware item:

  • Type of power supply.

  • Switch Control Board (SCB)

    Starting with Junos OS Release 12.1X47-D15 and Junos OS Release 17.3R1, the SRX5K-SCBE (SCB2) is introduced.

    • There are three SCB slots in SRX5800 devices. The third slot can be used for either an SCB or an FPC; when an SRX5K-SCB was used, the third SCB slot served as an FPC slot. SCB redundancy is provided in chassis cluster mode.

    • With an SCB2, a third SCB is supported. If a third SCB is plugged in, it provides intra-chassis fabric redundancy.

    • The Ethernet switch in the SCB2 provides the Ethernet connectivity among all the FPCs and the Routing Engine. The Routing Engine uses this connectivity to distribute forwarding and routing tables to the FPCs. The FPCs use this connectivity to send exception packets to the Routing Engine.

    • Fabric connects all FPCs in the data plane. The Fabric Manager executes on the Routing Engine and controls the fabric system in the chassis. Packet Forwarding Engines on the FPC and fabric planes on the SCB are connected through HSL2 channels.

    • SCB2 supports HSL2 with both 3.11 Gbps and 6.22 Gbps (SerDes) link speed and various HSL2 modes. When an FPC is brought online, the link speed and HSL2 mode are determined by the type of FPC.

    Starting with Junos OS Release 15.1X49-D10 and Junos OS Release 17.3R1, the SRX5K-SCB3 (SCB3) with enhanced midplane is introduced.

    • All existing SCB software that is supported by SCB2 is supported on SCB3.

    • SRX5K-RE-1800X4 mixed Routing Engine use is not supported.

    • SCB3 works with the SRX5K-MPC (IOC2), SRX5K-MPC3-100G10G (IOC3), SRX5K-MPC3-40G10G (IOC3), and SRX5K-SPC-4-15-320 (SPC2) with current midplanes and the new enhanced midplanes.

    • Mixed SCB use is not supported. If both an SCB2 and an SCB3 are installed, the system powers on only the SCB in slot 0 (the master Routing Engine's SCB), powers off the other SCBs, and generates a system log message.

    • SCB3 supports up to 400 Gbps per slot with old midplanes and up to 500 Gbps per slot with new midplanes.

    • SCB3 supports fabric intra-chassis redundancy.

    • SCB3 supports the same chassis cluster function as the SRX5K-SCB (SCB1) and the SRX5K-SCBE (SCB2), except for in-service software upgrade (ISSU) and in-service hardware upgrade (ISHU).

    • SCB3 has a second external Ethernet port.

    • Fabric bandwidth increasing mode is not supported.

    Starting in Junos OS Release 19.3R1, the SRX5K-SCB4 is supported on SRX5600 and SRX5800 devices along with the SRX5K-SPC3.

    SRX5K-SCB4:

    • Interoperates with SRX5K-RE3-128G, SRX5K-RE-1800X4, IOC2, IOC3, IOC4, SPC2, and SPC3. SCB4 is compatible with all midplanes and interoperates with existing PEMs, fan trays, and front panel displays.

    • Does not interoperate with SCB, SCB2, or SCB3.

    • Supports 480-Gbps link speed per slot.

    • Supports 1-Gigabit Ethernet interface speed with the SRX5K-RE-1800X4, and 1-Gigabit, 2.5-Gigabit, and 10-Gigabit Ethernet speeds with the SRX5K-RE3-128G.

    • Supports ISHU and ISSU in chassis cluster mode.

    • Supports fabric bandwidth mode and redundant fabric mode on SRX5600 and SRX5800 devices. Bandwidth mode is the new default mode and is required in order to configure redundant mode when setting up the chassis cluster.

  • Type of Flexible PIC Concentrator (FPC), Physical Interface Card (PIC), Modular Interface Card (MIC), and PIM.

  • IOCs

    Starting with Junos OS Release 15.1X49-D10 and Junos OS Release 17.3R1, the SRX5K-MPC3-100G10G (IOC3) and the SRX5K-MPC3-40G10G (IOC3) are introduced.

    • There are two types of IOC3 MPCs, each with different built-in MICs: the 24x10GE + 6x40GE MPC and the 2x100GE + 4x10GE MPC.

    • IOC3 supports SCB3 and both the SRX5000 line backplane and the enhanced backplane.

    • IOC3 works only with the SRX5000 line SCB2 and SCB3. If an SRX5000 line SCB (SCB1) is detected, the IOC3 goes offline, an FPC misconfiguration alarm is raised, and a system log message is generated.

    • IOC3 interoperates with SCB2 and SCB3.

    • IOC3 interoperates with the SRX5K-SPC-4-15-320 (SPC2) and the SRX5K-MPC (IOC2).

    • The maximum power consumption for one IOC3 is 645W. An enhanced power module must be used.

    • The IOC3 does not support the following command to set a PIC to go offline or online:

      request chassis pic fpc-slot <fpc-slot> pic-slot <pic-slot> <offline | online>

    • IOC3 supports 240 Gbps of throughput with the enhanced SRX5000 line backplane.

    • Chassis cluster functions the same as for the SRX5000 line IOC2.

    • IOC3 supports intra-chassis and inter-chassis fabric redundancy mode.

    • IOC3 supports ISSU and ISHU in chassis cluster mode.

    • IOC3 supports intra-FPC and inter-FPC Express Path (previously known as services offloading) with IPv4.

    • IOC3 supports NAT for IPv4 and IPv6 in normal mode, and for IPv4 in Express Path mode.

    • Not all four PICs on the 24x10GE + 6x40GE MPC can be powered on simultaneously. A maximum of two PICs can be powered on at the same time.

      Use the set chassis fpc <slot> pic <pic> power off command to choose the PICs you want to power on.

      Fabric bandwidth increasing mode is not supported on IOC3.
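      As an illustration of the set chassis fpc <slot> pic <pic> power off command mentioned above, a hypothetical sequence that powers off two of the four PICs (the slot and PIC numbers are examples only):

      ```
      user@host> configure
      user@host# set chassis fpc 3 pic 2 power off
      user@host# set chassis fpc 3 pic 3 power off
      user@host# commit
      ```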

    Starting in Junos OS Release 19.3R1, the SRX5K-IOC4-10G and SRX5K-IOC4-MRAT line cards are supported along with the SRX5K-SPC3 on SRX5000 series devices.

    SRX5K-IOC4-10G:

    • Interoperates with SCB3, SCB4, SRX5K-RE-1800X4, SRX5K-RE3-128G, SPC2, SPC3, IOC2, IOC3, and IOC4.

    • Supports 480-Gbps speed.

    • Supports 40x10GE interfaces with SCB3.

    • The 40 10-Gigabit Ethernet ports provide 10-Gigabit Ethernet MACsec support.

    • Supports reth and aggregated interfaces in a chassis cluster.

    • Supports ISSU and logical systems in a chassis cluster.

    • Does not support SCB2.

    • SRX5K-IOC4-MRAT with SCB3 supports 10-Gigabit, 40-Gigabit, and 100-Gigabit Ethernet interfaces.

  • SRX Clustering Module (SCM)

  • Fan tray

  • For hosts, the Routing Engine type.

    Starting with Junos OS Release 12.1X47-D15 and Junos OS Release 17.3R1, the SRX5K-RE-1800X4 Routing Engine is introduced.

    • The SRX5K-RE-1800X4 has an Intel Quad core Xeon processor, 16 GB of DRAM, and a 128-GB solid-state drive (SSD).

      The number 1800 refers to the speed of the processor (1.8 GHz). The maximum required power for this Routing Engine is 90W.

      Note: The SRX5K-RE-1800X4 provides significantly better performance than the previously used Routing Engine, even with a single core.

    Starting in Junos OS Release 19.3R1, the SRX5K-RE3-128G Routing Engine is supported along with the SRX5K-SPC3 on SRX5000 series devices.

    SRX5K-RE3-128G:

    • Provides improved control plane performance and scalability. The SRX5K-RE3-128G uses an Intel Haswell-EP based processor with six cores.

    • Supports two 200-GB SSDs to store log files, and 128 GB of memory for storing routing and forwarding tables and for other Routing Engine functions.

    • Interoperates with SCB3, SCB4, SRX5K-RE3-128G, SPC2, SPC3, IOC2, IOC3, and IOC4.

    • Does not support SCB2 and SRX5K-RE-1800X4.

show chassis hardware

show chassis hardware (SRX5800)

user@host> show chassis hardware

show chassis hardware (SRX5600 and SRX5800 devices for SRX5K-MPC)

user@host> show chassis hardware

show chassis hardware (with 20-Gigabit Ethernet MIC with SFP)

user@host> show chassis hardware

show chassis hardware (SRX5600 and SRX5800 devices with SRX5000 line SRX5K-SCBE [SCB2] and SRX5K-RE-1800X4 [RE2])

user@host> show chassis hardware

show chassis hardware (SRX5400, SRX5600, and SRX5800 devices with SRX5000 line SRX5K-SCB3 [SCB3] with enhanced midplanes and SRX5K-MPC3-100G10G [IOC3] or SRX5K-MPC3-40G10G [IOC3])

user@host> show chassis hardware

show chassis hardware (SRX4200)

user@host> show chassis hardware

show chassis hardware (vSRX 3.0)

Starting in Junos OS Release 20.1R1, when vSRX 3.0 performs resource management, the vCPUs and RAM available to the instance are assigned based on what was allocated before the instance was launched. A maximum of 32 cores is assigned to SRXPFE for flow processing; any cores in excess of 32 are automatically assigned to the Routing Engine. For example, if 36 cores are allocated to the VM during the creation process, 32 cores are assigned for flow processing and 4 cores are assigned to the Routing Engine. For memory, up to 64 GB of vRAM is used by SRXPFE; any allocated memory in excess of 64 GB is assigned to system memory and is not used for maintaining flow session information.

Recommended vCPU and vRAM combinations:

  • 2 vCPUs—4 GB vRAM

  • 5 vCPUs—8 GB vRAM

  • 9 vCPUs—16 GB vRAM

  • 17 vCPUs—32 GB vRAM

On a deployed vSRX, only scaling up memory is supported. Scaling down memory on a deployed vSRX is not supported; if you need to scale down memory, a fresh install is required.

user@host> show chassis hardware
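The vSRX 3.0 allocation rule described above can be sketched in Python. This is an illustrative model only, not Juniper code; the function and key names are invented for the example:

```python
def vsrx3_resource_split(vcpus: int, vram_gb: int) -> dict:
    """Model of the vSRX 3.0 resource-management rule: up to 32 vCPUs
    and 64 GB of vRAM go to SRXPFE (flow processing); excess vCPUs go
    to the Routing Engine, and excess vRAM becomes system memory that
    is not used for flow session state."""
    pfe_cores = min(vcpus, 32)       # cap of 32 cores for SRXPFE
    re_cores = vcpus - pfe_cores     # remainder goes to the Routing Engine
    pfe_vram = min(vram_gb, 64)      # cap of 64 GB vRAM for SRXPFE
    system_vram = vram_gb - pfe_vram # remainder becomes system memory
    return {"srxpfe_cores": pfe_cores, "re_cores": re_cores,
            "srxpfe_vram_gb": pfe_vram, "system_vram_gb": system_vram}

# The example from the text: 36 cores -> 32 for flow processing, 4 for the RE.
print(vsrx3_resource_split(36, 64))
```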

show chassis hardware clei-models

show chassis hardware clei-models (SRX5600 and SRX5800 devices with SRX5000 line SRX5K-SCBE [SCB2] and SRX5K-RE-1800X4 [RE2])

user@host> show chassis hardware clei-models node 1