Hardware
-
PTX10K-LC1301-36DD line card (PTX10004)—PTX10004 routers support PTX10K-LC1301-36DD line cards. The line card features 36 ports, delivering a line rate throughput of 28.8 Tbps. The 36 high-density 800-Gigabit Ethernet (800 GbE) QSFP-DD ports support speeds of up to 800 Gbps. The line card houses two custom ASICs, and each ASIC comprises two Packet Forwarding Engines.
-
PTX10K-LC1301-36DD line card (PTX10008 routers with PTX10008-SF3 Switch Fabric)—PTX10008 routers with PTX10008-SF3 switch fabric support PTX10K-LC1301-36DD line cards. The line card features 36 ports. The 36 high-density 800-Gigabit Ethernet (800 GbE) QSFP-DD ports support speeds of up to 800 Gbps. The line card houses two custom ASICs, and each ASIC comprises two Packet Forwarding Engines.
-
Supported transceivers, optical interfaces, and DAC cables (PTX10002-36QDD)—Select your product in the Hardware Compatibility Tool (HCT) to view supported transceivers, optical interfaces, and direct attach copper (DAC) cables for your platform or interface module. We update the HCT and provide the first supported release information when the optic becomes available.
-
PTX10K-LC1301-36DD line card (PTX10008)—The PTX10K-LC1301-36DD line card features 36 ports, delivering a line rate throughput of 28.8 Tbps. The 36 high-density 800-Gigabit Ethernet (800 GbE) QSFP-DD ports support speeds of up to 800 Gbps. The line card houses two custom ASICs, and each ASIC comprises two Packet Forwarding Engines.
-
New JNP10008-SF5 SIB (PTX10008)—The JNP10008-SF5 Switch Interface Board (SIB) supports up to 28.8 Tbps of bandwidth per slot for the PTX10K-LC1301-36DD line card installed in a PTX10008 router running Junos OS Evolved.
[See PTX10008 Switch Fabric.]
-
Table 1: Features Supported on the PTX10K-LC1301-36DD Line Card for PTX10008 Routers
Chassis
Packet Forwarding Engine resiliency. We provide resiliency support for the Packet Forwarding Engine, which enables the system to detect, report, and take action on Packet Forwarding Engine faults. Actions are taken based on the default configuration or a user-defined configuration available for the errors.
Fabric hardening and resiliency support on PTX10K-LC1301-36DD line cards.
-
Interoperability support and CLI enhancements. The PTX10008 router with the JNP10008-SF5 SIB supports default interoperability between the PTX10K-LC1301-36DD, PTX10K-LC1201-36CD, and PTX10K-LC1202-36MR line cards. Use the set chassis interoperability express5-enhanced command to bring up the system with the express5 mode-specific functionalities; this disables the line-card interoperability feature. You can verify the interoperability status by using the show chassis interoperability command. The existing commands for the PTX10008 with the PTX10K-LC1201-36CD line card also apply to the PTX10008 with the PTX10K-LC1301-36DD line card. The new CLI command updates are as follows:
- The show chassis fpc slot detail command displays the Packet Forwarding Engine ASIC type.
- In the set chassis fpc command, you must use pfe-instance instead of pfe.
- The show chassis fpc 5 pfe-instance all command displays pfe-instance in the output.
[See interoperability, show chassis interoperability, chassis, and show chassis fpc.]
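For example, using the commands described above (the FPC slot number is illustrative):
set chassis interoperability express5-enhanced
show chassis interoperability
show chassis fpc 5 pfe-instance all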
-
Fabric resiliency support for JNP10008-SF5 SIB. The JNP10008-SF5 SIB supports fabric resiliency, enhancing fault management for fabric links. You can benefit from features including error detection, logging, alarm generation, SNMP trap sending, LED error indications, and self-healing. Use the show system errors active detail command to view logged errors, ensuring comprehensive fault monitoring and increased system reliability.
[See Fabric Resiliency and show system errors active.]
-
FPC fabric management for JNP10008-SF5 SIB. You can use the set chassis fpc command to manage FPC online and offline states gracefully. Use the set chassis fabric event reachability-fault command to configure options for detecting fabric reachability faults and triggering automatic connectivity restoration. Additionally, use the extended keyword in the show chassis fabric fpcs and show chassis fabric sibs commands to view detailed link information within planes, and identify partially enabled planes by the Degraded keyword in the show chassis fabric fpcs output.
[See reachability-fault, show chassis fabric fpcs, and show chassis fabric sibs.]
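For example, a sketch using only the statements and commands named above (reachability-fault detection options are omitted; see reachability-fault for the full set):
set chassis fabric event reachability-fault   # additional fault-detection options follow
show chassis fabric fpcs extended
show chassis fabric sibs extended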
-
Support for JNP10008-SF5 SIB (PTX10008)—The PTX10008 supports the JNP10008-SF5 Switch Interface Board (SIB), which includes 18 fabric planes. You can use the extended keyword with the show chassis sibs command to view detailed plane information. Use the set chassis sib command to gracefully bring SIBs online or offline. Note that mixing the JNP10008-SF3 SIB with the JNP10008-SF5 SIB results in compatibility errors indicated by specific CLI commands:
- The show chassis sibs detail command displays Incompatible with other SIBs in the output.
- The show chassis alarms command displays SIB Incompatible in the output.
- The request chassis sib online command displays Request failed since Fru is incompatible with other slots! in the output.
[See show chassis sibs, show chassis alarms, request chassis sib, and Fabric Management on PTX10K Devices.]
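For example, to verify plane details and SIB compatibility using the commands described above:
show chassis sibs extended
show chassis sibs detail
show chassis alarms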
-
-
Optics EM policy support. The Environment Monitoring (EM) policy includes optics temperature sensors for PTX10008 routers with the PTX10K-LC1301-36DD line card. It ensures efficient thermal management of high-power optical modules. Key functionalities include temperature monitoring integration, automatic shutdown procedures, and CLI commands for managing and configuring the EM policy.
[See Optics EM Policy Support.]
Class of service (CoS)
-
Support for CoS features, including classifiers (behavior aggregate (BA), fixed, and multifield (MF)), rewrite rules, forwarding classes, loss priorities, transmission scheduling, rate control, drop profiles, HCoS, and policy maps.
-
[See CoS Features and Limitations on PTX Series Routers and Class of Service.]
-
Support for on-chip queue buffer for PFC-enabled queues. A priority-based flow control (PFC)-enabled queue with a buffer-size less than 450 microseconds is viewed and installed as a PFC-enabled on-chip queue. When a queue is in PFC on-chip mode, the entire virtual output queue (VOQ) buffer is always on-chip and is not scaled based on bandwidth usage.
[See buffer-size (Schedulers).]
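For example, a minimal scheduler sketch (the scheduler name is illustrative; a temporal buffer below 450 microseconds makes the queue a PFC-enabled on-chip queue, as described above):
set class-of-service schedulers pfc-sched buffer-size temporal 400   # value in microseconds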
Dynamic Host Configuration Protocol (DHCP)
-
Support for DHCPv4 relay agent and DHCPv6 relay agent, including:
-
DHCP relay: Layer 3 (L3) interfaces
-
DHCP relay: Option 82 for Layer 2 VLANs
-
DHCP relay: Option 82 for L3 interfaces
-
Extended DHCP relay agent
-
Virtual router-aware DHCP (VR-aware DHCP)
-
EVPN
-
Support for EVPN-VXLAN Layer 2 (L2) gateways and Layer 3 (L3) gateways with EVPN Type 5 routes.
[See EVPN User Guide.]
-
Support for ping and traceroute for EVPN-VXLAN.
[See Understanding Overlay ping and traceroute Packet Support.]
-
Support for Static VXLAN (L2 gateway).
[See Static VXLAN.]
Support for EVPN-MPLS L2 and L3 features.
[See EVPN Overview.]
Support for EVPN-VPWS.
Infrastructure
-
We support the following IP and Infrastructure features:
-
Junos telemetry interface (JTI) support for Packet Forwarding Engine sensors for usage, network processing unit (NPU) memory, NPU utilization, ASIC (integrated circuit) state, and NPU pipeline counters. Using JTI, you can export statistics using remote procedure call (gRPC) services, gRPC Network Management Interface (gNMI) services, and UDP transport.
Use these sensors:
-
/junos/system/linecard/packet/usage/
-
/junos/system/linecard/npu/memory/
-
/junos/system/linecard/npu/utilization/
-
/components/component/integrated-circuit/state/
-
/components/component/integrated-circuit/pipeline-counters/
For pipeline sensors, the four packet and drop counter categories are interface, lookup, queuing, and host interface.
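For example, a minimal UDP (native) export sketch for one of these sensors, assuming the standard [edit services analytics] JTI hierarchy (collector address, port, and names are illustrative):
set services analytics streaming-server COLLECTOR remote-address 192.0.2.50
set services analytics streaming-server COLLECTOR remote-port 21111
set services analytics export-profile PFE-STATS reporting-rate 30
set services analytics export-profile PFE-STATS format gpb
set services analytics export-profile PFE-STATS transport udp
set services analytics sensor NPU-MEM server-name COLLECTOR export-name PFE-STATS resource /junos/system/linecard/npu/memory/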
-
-
Classification of traffic drops based on trap classification.
-
Support for distributed denial-of-service (DDoS) IS-IS classification and higher DDoS bandwidth for Layer 2 and Layer 3 protocols.
[See show ddos-protection protocols isis and protocols (DDoS).]
-
Support for load balancing under the [edit forwarding-options enhanced-hash-key] hierarchy level. Load balancing includes:
-
GRE key inclusion for transit IPv4 and IPv6 traffic
-
IP Layer 3 fields
-
IP Layer 4 fields
-
IPv6 flow label inclusion
-
MPLS labels
-
MPLS port data
-
MPLS pseudowire traffic
-
Tunnel endpoint identifier (TEID) inclusion in GPRS tunneling protocol (GTP) packets
-
RSVP-TE load balancing in proportion to LSP bandwidth
[See enhanced-hash-key.]
Support for 128-way equal-cost multipath (ECMP) routing for MPLS transit cases.
The following features do not support 128-way ECMP:
-
Multicast
-
P2MP
-
MC-LAG
-
Weighted unilist
-
Consistent hashing
-
Link protection (MPLS)
-
Adaptive load balancing
-
Class-based forwarding
-
-
Support for classification override configured under a forwarding policy.
[See CoS Features and Limitations on PTX Series Routers and Overriding the Input Classification.]
-
You can configure passive monitoring on any interface on the PTX10008 routers to monitor MPLS-encapsulated packets.
-
Interfaces and chassis
-
Support for VRRP. The following features are not supported for VRRP on Junos OS Evolved:
-
ISSU
-
Proxy ARP
-
MC-LAG
-
Distribution support on aggregated Ethernet interfaces
-
IRB
-
Inline delegation
[See Understanding VRRP.]
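For example, a minimal VRRP group on a Layer 3 interface (interface, addresses, and group number are illustrative):
set interfaces et-0/0/0 unit 0 family inet address 192.0.2.1/24 vrrp-group 10 virtual-address 192.0.2.254
set interfaces et-0/0/0 unit 0 family inet address 192.0.2.1/24 vrrp-group 10 priority 200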
-
-
Support for the following protocols:
-
LAG (aggregated Ethernet)
-
LACP
-
LLDP
-
-
Support for link fault management (LFM). We support IEEE 802.3ah OAM LFM to monitor point-to-point Ethernet links that are connected either directly or through Ethernet repeaters. The following LFM features are supported:
-
Link discovery with active and passive modes
-
Detect-LOC
-
Remote loopback
-
Loopback tracking
-
Action profile
-
GRES and non-graceful Routing Engine switchover
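For example, a minimal LFM sketch that enables active link discovery on one interface (the interface name and PDU interval are illustrative):
set protocols oam ethernet link-fault-management interface et-0/0/0 link-discovery active
set protocols oam ethernet link-fault-management interface et-0/0/0 pdu-interval 1000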
-
We support the following optics:
-
800GbE
-
400GbE
-
100GbE/2x100GbE
-
10GbE/25GbE/40GbE
-
We support MAC address accounting for 10GbE, 40GbE, 100GbE, 200GbE, 400GbE, and 800GbE interfaces.
-
Support for MAC accounting for source and destination MAC addresses for Layer 3 interfaces and aggregated Ethernet interfaces. To enable MAC accounting, use the existing mac-learn-enable statement at the [edit interfaces interface-name gigether-options ethernet-switch-profile] or [edit interfaces aex aggregated-ether-options ethernet-switch-profile] hierarchy level.
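For example, based on the hierarchy levels above (interface names are illustrative):
set interfaces et-0/0/0 gigether-options ethernet-switch-profile mac-learn-enable
set interfaces ae0 aggregated-ether-options ethernet-switch-profile mac-learn-enable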
IP tunneling
-
Support for the following Packet Forwarding Engine tunnel features:
-
Filter-based GRE encapsulation and de-encapsulation and filter-based MPLS-in-UDP de-encapsulation. We've enabled the following encapsulation and de-encapsulation workflow:
An incoming packet matches a filter term with an encapsulate action. The packet is encapsulated in an IP+GRE header and is forwarded to the endpoint's destination.
set firewall tunnel-end-point tunnel-name ipv4|ipv6 source-address address
set firewall tunnel-end-point tunnel-name ipv4|ipv6 destination-address address
set firewall tunnel-end-point tunnel-name gre
set firewall family inet|inet6 filter name term name from source-address address
set firewall family inet|inet6 filter name term name then encapsulate tunnel-name
set firewall family inet|inet6 filter name term last then accept
set interfaces interface-name unit number family inet|inet6 filter input
set interfaces interface-name unit number family inet|inet6 address address # This source address differs from the one for the tunnel endpoint.
At the destination, the packet matches a filter term with a de-encapsulate action. The GRE header or MPLS-in-UDP header is stripped from the packet. The inner packet is routed to its destination.
set firewall family inet|inet6 filter name term name from source-address address
set firewall family inet|inet6 filter name term name from protocol gre
set firewall family inet|inet6 filter name term name then decapsulate gre # Optionally de-encapsulate mpls-in-udp.
set firewall family inet|inet6 filter name term last then accept
set interfaces interface-name unit number family inet|inet6 filter input filter-name
set interfaces interface-name unit number family inet|inet6 address address # This is the destination address.
[See Components of Filter-Based Tunneling Across IPv4 Networks and tunnel-end-point.]
-
Support for FTI-based encapsulation and de-encapsulation of IPv4 and IPv6 packets. You can configure IP-IP encapsulation and de-encapsulation on flexible tunnel interfaces (FTIs). The default mode is loopback encap mode. Use the bypass-loopback statement at the [edit interfaces fti number unit logical-unit-number tunnel encapsulation ipip] hierarchy level to change to flattened encap mode and achieve line-rate performance.
[See Tunnel and Encryption Services Interfaces User Guide for Routing Devices.]
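For example, based on the hierarchy above (the unit number is illustrative; tunnel source and destination configuration is omitted):
set interfaces fti0 unit 0 tunnel encapsulation ipip bypass-loopback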
-
Support for configuring MPLS protocols over FTI tunnels, thereby transporting MPLS packets over IP networks that do not support MPLS. GRE and UDP tunnels support the MPLS protocol for both IPv4 and IPv6 traffic. You can configure encapsulation and de-encapsulation for the GRE and UDP tunnels. To allow MPLS traffic on the UDP tunnels, include the mpls port-number statement at the [edit forwarding-options tunnels udp port-profile profile-name] hierarchy level. To allow MPLS traffic on the GRE tunnels, include the mpls statement at the [edit interfaces fti0 unit unit family] hierarchy level.
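For example, based on the hierarchies above (the profile name is illustrative; 6635 is the IANA-assigned MPLS-over-UDP port):
set forwarding-options tunnels udp port-profile MPLSoUDP mpls port-number 6635
set interfaces fti0 unit 0 family mpls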
-
-
Support for egress filter-based encapsulation. For an outgoing packet matching the filter term, the packet is encapsulated inside an IP + GRE header as specified by the tunnel configuration. IP lookup is performed on the outer header and the packet is forwarded accordingly. The IP lookup for a GRE-encapsulation-capable route is limited to the implicit default routing instance.
[See Understanding Filter-Based Tunneling Across IPv4 Networks.]
-
Support for configuring the output filter action with a nondefault routing instance or a specified routing instance.
-
Ingress filter-based de-encapsulation by using firewall filters for GRE and UDP tunnels.
[See Configuring a Filter to De-Encapsulate GRE Traffic and decapsulate (Firewall Filter).]
Junos telemetry interface (JTI)
Junos telemetry interface (JTI) supports new platform sensors for the PTX10008. You can export platform-specific software and chassis component statistics using remote procedure call (gRPC) services, gRPC Network Management Interface (gNMI) services, and UDP transport. New XPaths are added in the YANG data model.
[For a complete list of XPaths supported by the device, see Junos YANG Data Model Explorer.]
Packet Forwarding Engine INT for PTX Series routers. The Junos OS Evolved Packet Forwarding Engine introduces a data-plane framework, called inband network telemetry (INT), which collects and reports network state information without the intervention of the control plane. The header in the INT model carries telemetry instructions that tell an INT-capable device which state it must collect. The data plane either exports the network state information to the telemetry monitoring system or writes it into the packet.
INT has source, transit, and sink roles. The INT source embeds the INT metadata in the packet, and the sink collects the metadata from the data packet for processing. We do not support the INT source role, the INT sink role, or all INT application modes on PTX10008 routers. In Junos OS Evolved Release 24.4R1, the JNP10K-LC1301-36DD line card on the PTX10008 supports INT only as a transit node. Among the three INT application modes (INT-XD, INT-MX, and INT-MD), the JNP10K-LC1301 line card on the PTX10008 supports only INT-MD mode.
The set forwarding-options configuration command is updated with a new inband-telemetry option to enable or disable this feature.
Layer 2 features
Support for Q-in-Q tunneling.
[See Configuring Q-in-Q Tunneling and VLAN Q-in-Q Tunneling and VLAN Translation.]
-
The PTX10008 router supports the following basic Layer 2 learning, bridging, and flooding features:
-
Enterprise-style bridging (support both trunk and access mode)
-
Service provider-style bridging (also known as sub-interface mode)
-
BPDU block/filter
-
xSTP
-
Handling of broadcast, unknown unicast, and multicast (BUM) traffic, including split horizon
-
MAC learning and aging
-
Static MAC addresses
-
Trunk port and VLAN membership
-
802.1Q EtherType—8100
-
802.1Q VLAN tagging—Single tagging with normalized to bridge domain tag at ingress
-
Clearing all MAC address information
-
Global MAC limit
-
Global source MAC aging time
-
MAC moves
-
LACP and LLDP
-
Disabling MAC learning at global and interface level
-
Native VLAN ID for Layer 2 logical interfaces
-
Single VLAN-tagged Layer 2 logical interfaces
-
Interface statistics
Note: The router does not support the show ethernet-switching statistics command and child logical interface statistics for aggregated Ethernet.
-
Flexible Ethernet services
Note: Enterprise-style Layer 2 logical interfaces aren't allowed under the flexible-ethernet-services encapsulation.
-
Virtual switch
-
Persistent MAC learning (sticky MAC)
-
Service provider bridging:
-
Multiple logical interfaces on the same physical interface that are part of the same bridge domain
-
Ethernet bridge encapsulation
[See Layer 2 Bridging, Address Learning, and Forwarding User Guide.]
-
-
Support for IRB:
-
All Layer 2 protocols already supported on the router
-
Layer 3 protocols: BGP, IGMP, IS-IS, OSPF, PIM, and RIP
-
Per-IRB logical interface MAC and statistics
-
IRB Layer 3 multicast support with flooding only
-
Address family support for IPv4 and IPv6, and support for IPv4 MTUs and IPv6 MTUs with different MTU values
-
IRB interface in VRF routing instances
-
Directed subnet broadcast support with IRB
-
-
-
Support for interface MAC limit action. You can specify the action (drop, drop and log, log, or shut down) that Junos OS Evolved takes when packets with new source MAC addresses are received after the MAC address limit is reached.
[See Configuring MAC Limiting and packet-action.]
Layer 3 features
-
Support for 256-way ECMP. You can configure a maximum of 256 ECMP next hops for external BGP (EBGP) peers. This feature increases the number of direct BGP peer connections, which improves latency and optimizes data flow. However, we support 128 ECMP next hops for MPLS routes. Note that we do not support consistent load balancing (consistent hashing) for IPv4 or IPv6 with this feature.
[See Understanding BGP Multipath.]
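For example, a minimal sketch of EBGP multipath with per-packet load balancing in the forwarding table (group and policy names are illustrative; the 256-way maximum applies to EBGP peers as described above):
set protocols bgp group EBGP-PEERS type external
set protocols bgp group EBGP-PEERS multipath
set policy-options policy-statement ECMP-LB then load-balance per-packet
set routing-options forwarding-table export ECMP-LB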
-
Support for Layer 3 forwarding features for IPv4, IPv6, MPLS, LAG, ECMP, MTU checks, ICMP, OSPF, IS-IS, ARP, NDP, BGP, BFD, LACP, LDP, RSVP, LLDP, VRF-lite, TTL expiry, IP options, IP fragmentation, and DDoS protection.
-
BFD support, including:
-
Distributed BFD and BFD-triggered local repair (BFD authentication is not supported.)
-
Independent micro-BFD sessions enabled on a per-member link basis for a LAG bundle
-
Inline BFD
[See Understanding BFD.]
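For example, a minimal sketch that enables BFD on an OSPF interface and micro-BFD on a LAG bundle (interface names, addresses, and timers are illustrative; the micro-BFD hierarchy shown is an assumption):
set protocols ospf area 0.0.0.0 interface et-0/0/0.0 bfd-liveness-detection minimum-interval 300
set interfaces ae0 aggregated-ether-options bfd-liveness-detection minimum-interval 100
set interfaces ae0 aggregated-ether-options bfd-liveness-detection local-address 198.51.100.1
set interfaces ae0 aggregated-ether-options bfd-liveness-detection neighbor 198.51.100.2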
-
-
BGP flowspec signaling support. BGP can carry flow-specification network layer reachability information (NLRI) messages on PTX10008 devices with LC1201, LC1202, and LC1301 line cards. Propagating firewall filter information as part of BGP enables you to propagate firewall filters against denial-of-service (DoS) attacks dynamically across autonomous systems. The following match conditions are not supported:
-
ICMP codes alone [inet/inet6]
-
Source/destination prefix with offset for inet6
-
Flow label for inet6
-
Fragment (for inet6)
Junos OS Evolved doesn't support the traffic marking action on this router. To configure flow routes statically, configure the match conditions and actions at the [edit routing-options] hierarchy level.
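For example, a static flow route sketch at the [edit routing-options] hierarchy level (the route name, prefix, and port are illustrative):
set routing-options flow route BLOCK-ATTACK match destination 203.0.113.10/32
set routing-options flow route BLOCK-ATTACK match protocol udp
set routing-options flow route BLOCK-ATTACK match destination-port 53
set routing-options flow route BLOCK-ATTACK then discard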
MACsec
Media Access Control Security (MACsec) is supported on physical interfaces.
Support for Media Access Control Security (MACsec) bounded delay protection.
Managing devices
Support for additional RPCs for the gNOI certificate management (cert) service. Junos OS Evolved supports the following gRPC Network Operations Interface (gNOI) cert service RPCs:
-
CanGenerateCSR()—Query if the target device can generate a certificate signing request (CSR) with the specified key type, key size, and certificate type.
-
RevokeCertificates()—Revoke certificates on the target device.
MPLS
-
We support the following MPLS features:
-
Support for MPLS FRR—MPLS fast reroute (FRR) provides faster convergence time (less than 50 milliseconds) for RSVP tunnels. The Routing Engine creates backup paths and the Packet Forwarding Engine installs the backup-path labels and next hops.
[See Fast Reroute Overview.]
-
Support for MPLS features, including:
-
CLI support for monitoring MPLS label usage
-
Inline MPLS and IPv6 lookup for explicit null
-
32,000 transit LSPs
-
Explicit null support for MPLS LSPs
-
MPLS Label Block configuration
-
MPLS over untagged Layer 3 interfaces
-
MPLS OAM - LSP ping
-
JTI: OCST: MPLS operational state streaming (v2.2.0)
-
2000 ingress LSP support
-
2000 egress LSP support
-
Entropy label support
-
MPLS: JTI: Junos telemetry interface MPLS self-ping, TE++, and misc augmentation
-
Support for LDP, including:
-
Configurable label withdraw delay
-
Egress policy
-
Explicit null
-
Graceful restart signaling
-
IGP synchronization
-
Ingress policy
-
IPv6 for LDP transport session
-
Strict targeted hellos
-
Track IGP metric
-
Tunneling (LDP over RSVP)
-
-
RSVP++
-
Support for RSVP-TE, including:
-
Bypass LSP static configuration
-
Ingress LSP statistics in a file
-
RSVP-TE hitless-MBB with no artificial delays
-
32,000 transit LSPs
-
Automatic bandwidth
-
Class-based forwarding (CBF) with 16 classes
-
CBF with next-hop resolution
-
Convergence and scalability
-
Graceful restart signaling
-
JTI interface statistics and LSP event export
-
LSP next-hop policy
-
LSP self-ping
-
MPLS fast reroute (FRR)
-
MTU signaling
-
Optimize adaptive teardown
-
Node/link protection
-
Refresh reduction
-
Soft preemption
-
Shared Risk Link Group (SRLG)
-
-
Static LSPs with IPv4 next hop, IPv6 next hop, and IPv6 next hop with next-table support for bypass
-
Traffic engineering, including:
-
TE++: Dynamic ingress LSP splitting
-
Traffic engineering extensions (OSPF-TE and ISIS-TE)
-
Traffic engineering options bgp, bgp-igp, bgp-igp-both-ribs, and mpls-forwarding
-
[See MPLS Applications User Guide.]
-
-
Segment routing support. You can configure the following Source Packet Routing in Networking (SPRING) or segment routing features on the router:
-
MPLS (segment routing using IS-IS):
-
Ping and traceroute for single IS-IS node or prefix segment
-
-
BGP Link State (BGP-LS):
-
Segment routing extensions for IS-IS
-
Segment routing extensions for OSPF
-
-
BGP:
-
Binding segment identifier (SID) for segment routing–traffic engineering (SR-TE)
-
Binding SID for SR-TE [draft-previdi-idr-segment-routing-te-policy]
-
Programmable routing protocol process APIs for SR-TE policy provisioning
-
Static SR-TE policy with mandatory color specification
-
Static SR-TE policy without color specification
-
-
IS-IS:
-
Adjacency SID
-
Advertising maximum link bandwidth and administrative color without RSVP-TE configuration
-
Anycast and prefix SIDs
-
Configurable segment routing global block (SRGB)
-
Node and link SIDs
-
Segment routing mapping server (SRMS) and client
-
Topology Independent Loop-Free Alternate (TI-LFA):
-
Link and node protection for IPv4 addressing (not required for IPv6 prefixes)
-
Link and node protection for IPv4 addressing (required for IPv6 prefixes)
-
Protection for SRMS prefixes
-
-
-
OSPF:
-
Advertising maximum-link bandwidth and administrative color without RSVP-TE configuration
-
Anycast SID
-
Configurable SRGB
-
Inter-area support
-
Node and link SID
-
Prefix SID
-
Segment routing mapping server (SRMS) and client
-
Static adjacency SID
-
TI-LFA:
-
Link and node protection
-
Protection for SRMS prefixes
-
MPLS ping and traceroute for single OSPF node or prefix segment
-
IGP adjacency SID hold time
-
Path Computation Element Protocol (PCEP) for segment routing LSPs
-
-
-
BGP IPv4 labeled-unicast resolution over:
-
BGP IPv4 SR-TE with IPv4 segment routing using IS-IS and OSPF
-
Noncolored IPv4 SR-TE with segment routing using IS-IS and OSPF
-
Static colored IPv4 SR-TE with segment routing using IS-IS and OSPF
-
-
BGP Layer 3 VPN over:
-
Colored SR-TE tunnels and IPv4 protocol next hops
-
Non-colored SR-TE tunnels and IPv4 protocol next hops
-
-
BGP-triggered dynamic SR-TE colored tunnels
-
Class-based forwarding and forwarding table policy LSP next-hop selection among noncolored SR-TE LSPs
-
First-hop label support for SID instead of an IP address
-
Path specification using router IP addresses (segment routing segment list path ERO support using IP address as next hop and loose mode)
-
SR-TE color mode:
-
00—Route resolution fallback to IGP path
-
01—Route resolution fallback to color only null routes
-
-
Static LSPs with member-link next hops for aggregated Ethernet bundles (also known as adjacent SID per LAG bundle or aggregated Ethernet member link)
[See Understanding Source Packet Routing in Networking (SPRING).]
-
-
-
Support for Layer 2 VPN features, including:
-
Transport of Layer 2 frames over MPLS (LDP signaling)
-
Layer 2 VPNs over tunnels (BGP signaling)
-
Simple Ethernet and VLAN-based cross-connect (also known as connections)
-
Local and remote switching
-
Ethernet and VLAN CCC
-
Single-tagged CCC logical interfaces
-
Control word
-
Regular and aggregated Ethernet interfaces
-
Layer 2 protocol pass-through
-
Layer 2 circuit backup interface and backup neighbor
-
Layer 2 circuit statistics and CoS
-
VCCV with type 2 and type 3
[See Layer 2 VPNs and VPLS User Guide for Routing Devices and TCC Overview.]
-
-
VLAN ID lists for Layer 2 Circuits. VLAN ID lists allow you to link multiple VLAN IDs to a single logical interface for Layer 2 traffic.
[See vlan-id-list (Ethernet VLAN Circuit), vlan-id-list, and Configuring VLAN Identifiers for VLANs and VPLS Routing Instances.]
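For example, a sketch that binds several VLAN IDs to one CCC logical interface (interface, unit, and VLAN IDs are illustrative; the physical-interface encapsulation shown is an assumption):
set interfaces et-0/0/0 flexible-vlan-tagging
set interfaces et-0/0/0 encapsulation flexible-ethernet-services
set interfaces et-0/0/0 unit 100 encapsulation vlan-ccc
set interfaces et-0/0/0 unit 100 vlan-id-list [ 100 200 300 ]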
-
MPLS-based Layer 3 VPNs support includes:
-
MPLS over Layer 3 VLAN-tagged subinterfaces
-
Per-next-hop label allocation
-
Mapping of the label-switched interface (LSI) logical interface label to the VPN routing and forwarding (VRF) routing table using the vrf-table-label statement
ICMP tunneling and MPLS traceroute
-
Disabling time-to-live (TTL) decrementing using
no-propagate-ttl
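For example, a minimal sketch of the vrf-table-label and no-propagate-ttl statements listed above (the routing-instance name is illustrative; route targets and interface configuration are omitted):
set routing-instances CUST-A instance-type vrf
set routing-instances CUST-A vrf-table-label
set protocols mpls no-propagate-ttl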
-
-
Support for IP-over-IP encapsulation to facilitate IP overlay construction over an IP transport network. An IP network contains edge devices and core devices. To achieve higher scale and reliability among these devices, use an overlay encapsulation to logically isolate the core network from the external network that the edge devices interact with.
Static configuration or a BGP protocol configuration is used to distribute routes and signal dynamic tunnels. The dynamic-tunnels configuration creates IP-over-IP encapsulation-only tunnels in the Packet Forwarding Engine.
The following are not supported:
-
Dynamic tunnel de-encapsulation operation
-
Next-hop-based statistics for dynamic tunnels
-
IP fragmentation at tunnel start point and path MTU discovery for IPv4/IPv6
[See Next-Hop-Based Dynamic Tunneling Using IP-Over-IP Encapsulation .]
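For example, a sketch of a next-hop-based IP-over-IP dynamic tunnel under the dynamic-tunnels configuration described above (tunnel name, source address, and destination network are illustrative):
set routing-options dynamic-tunnels OVERLAY source-address 192.0.2.1
set routing-options dynamic-tunnels OVERLAY ipip
set routing-options dynamic-tunnels OVERLAY destination-networks 198.51.100.0/24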
-
-
Redistribution of IPv4 routes with IPv6 next hop into BGP. Devices can forward IPv4 traffic over an IPv6-only network, which generally cannot forward IPv4 traffic.
[See Understanding Redistribution of IPv4 Routes with IPv6 Next Hop into BGP.]
-
Link delay advertisement. You can get the measurement of various performance metrics in IP networks, which helps to distribute network-performance information in a scalable fashion.
[See How to Enable Link Delay Measurement and Advertising in IS-IS.]
Multicast
-
Support for multicast-only fast reroute (MoFRR) for both IPv4 and IPv6 traffic flows. MoFRR is supported for PIM sparse mode (SM) and source-specific multicast (SSM) modes only. Support does not extend to Multipoint LDP-based MoFRR.
-
Bidirectional Protocol Independent Multicast for multicast traffic.
[See pim-snooping.]
-
Support for RSVP-based and LDP-based point-to-multipoint (P2MP) LSPs with graceful restart. In addition, the router supports IP unicast traffic in a label-edge router (LER) role and both IP unicast and multicast traffic in a label-switching router (LSR) role.
-
Support for MPLS features P2MP ping and P2MP LSPs traceroute. MPLS ping and traceroute provide the mechanism to detect data-plane failure and isolate faults in the MPLS network. The traceroute or ping is initiated to validate LSP paths on P2MP.
[See MPLS Applications User Guide.]
-
Optimized fast branch updates. The method of making fast branch updates to a multicast replication tree has been refined. Now, any membership changes in the tree trigger fast make-before-break (FMBB) re-optimization of the tree and ensure that there is no traffic loss.
[See Multicast Shortest-Path Tree.]
-
Multicast support for next-generation MVPN including IR, RSVP-P2MP, and LDP-P2MP provider tunnel, inclusive and selective PMSI tunnel, rendezvous-point tree (RPT)-shortest-path tree (SPT) mode, turnaround provider edge (PE) device, rendezvous point (RP) mechanisms such as auto-RP, bootstrap router (BSR), and embedded RP.
[See Multiprotocol BGP MVPNs Overview, Understanding Next-Generation MVPN Concepts, and Understanding Next-Generation MVPN Control Plane.]
-
MVPN BIER with MPLS encapsulation. Junos OS Evolved supports the Bit Index Explicit Replication (BIER) architecture to simplify control and forwarding planes by eliminating the need for multicast trees and per-flow states. With BGP-MVPN as an overlay, you can configure BIER-enabled provider tunnels for multicast VPNs.
[See BIER Overview and bier.]
-
IS-IS as routing underlay for BIER. Junos OS Evolved supports the advertisement of BIER information of one or more BIER subdomains using IS-IS as the IGP underlay. Key BIER information such as BFR IDs and BFR prefixes in each subdomain are flooded through the IS-IS domain to generate the BIER forwarding table.
[See IS-IS Extension for BIER and bier-sub-domain (Protocols IS-IS).]
Network management and monitoring
-
Local and remote port mirroring:
-
Local port mirroring copies packets entering or leaving the system or a port and sends the sampled packets through a preconfigured port to remote devices or servers. Applications running on the servers can analyze these packets and use the results as required.
-
Remote port mirroring sends a sampled packet, encapsulated in a GRE header, to a remote destination that you specify in the configuration. Remote port mirroring uses the flexible tunnel interface (FTI) to encapsulate the packets and send them off the device. You can also configure a policer for the mirroring instance to police the sampling rate.
-
-
Port mirroring support for EVPN-VXLAN
-
Filter and mirror ingress and egress traffic on any network port to CPU. Junos devices support filtering and mirroring of incoming and outgoing packets, sending those packets to the CPU, and saving them to a file. This on-device packet capture feature can help you with protocol and application analysis, debugging, troubleshooting, network forensics, audit trails, and network attack detection. On-device packet capture (or “self-mirroring”) sends the sampled copy to a CPU and writes the copy into a packet capture (.pcap) file. The process does not require you to use any device connected to your network device.
[See On-Device Packet Capture.]
-
Support for the sFlow technology, which is a monitoring technology for high-speed switched or routed networks. The sFlow monitoring technology randomly samples network packets and sends the samples to a monitoring station.
[See Understanding How to Use sFlow Technology for Network Monitoring.]
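For example, a minimal sFlow sketch (collector address, sample rate, and interface are illustrative):
set protocols sflow collector 192.0.2.50
set protocols sflow sample-rate ingress 1000
set protocols sflow interfaces et-0/0/0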
-
Support for Ethernet OAM connectivity fault management (CFM) features, including:
-
Up maintenance association end points (MEPs) in distributed periodic packet management (PPM)
-
Distributed Y.1731 on synthetic loss measurement (SLM), delay measurement (DM), and loss measurement (LM)
-
Down MEPs on bridges, circuit cross-connect (CCC), and EVPN
-
Distributed session support for CFM on aggregated Ethernet
-
Enhanced CFM mode
-
IPv4 (inet) support for delay measurement (DM) and synthetic loss message (SLM)
-
Action profile for marking a link down, except for EVPN and bridge up MEP
-
LM colorless mode
-
DM and LM on aggregated Ethernet if all active child links are on the same Packet Forwarding Engine
-
Supported CFM protocol data units (PDUs), as follows:
-
Continuity check messages (CCM)
-
LBM
-
LBR
-
Link Trace Message (LTM)
-
Link Trace Reply (LTR)
-
Delay measurement message (DMM)
-
Delay measurement reply (DMR)
-
LMM
-
LMR
-
Synthetic loss message (SLM)
-
Synthetic loss reply (SLR)
-
-
Enterprise and service provider configurations
-
VLAN normalization
-
VLAN transparency for CFM PDUs
-
CoS forwarding class (FC) and CoS packet loss priority (PLP) for CFM
-
CFM session on child physical interface in distributed mode
-
SNMP
-
Chassis ID or Send ID type, length, and value
-
Trunk mode
-
Maintenance association intermediate point (MIP)
Platform and infrastructure
Support for Synchronous Ethernet timing, Synchronous Ethernet over LAG, and Timing SNMP and MIB (SYNCE).
Platform resiliency support. PTX10008 routers with specific line cards support platform resiliency. Resiliency enables the router to handle failures and faults related to the hardware components such as line cards, switch fabric, Control Boards, fan trays, fan tray controllers, and power supply units. Fault handling includes detecting and logging the error, raising alarms, sending SNMP traps, providing indication about the error through LEDs, self-healing, and taking components out of service.
[See show system errors active.]
Support for G.8273.2 and G.8275.1 profiles, hybrid mode with PTPoE (PTPoE and Synchronous Ethernet), one-step timestamping mode, and PTPoE support over LAG, with interoperability across child links spread over the PTX10K-LC1301-36DD, PTX10K-LC1201-36CD, and PTX10K-LC1202-36MR line cards.
[See Precision Time Protocol (PTP) Overview and PTP over Ethernet Overview.]
Platform resiliency for PTX10K-LC1301-36DD. The PTX10K-LC1301-36DD line card supports platform resiliency. Resiliency includes handling faults pertaining to the line card hardware and transceivers. Fault handling includes detecting and logging the error, raising alarms, sending SNMP traps, providing indication about the error through LEDs, self-healing, and taking components out of service.
[See show system errors active.]
Segment routing
-
-
Support for SRv6 network programming in IS-IS. Use this feature to configure segment routing in a core IPv6 network without an MPLS data plane.
-
To enable SRv6 network programming in an IPv6 domain, include the srv6 statement at the [edit protocols isis source-packet-routing] hierarchy level.
-
To advertise the Segment Routing Header (SRH) locator with a mapped flexible algorithm, include the algorithm statement at the [edit protocols isis source-packet-routing srv6 locator] hierarchy level.
-
To configure a Topology Independent Loop-Free Alternate (TI-LFA) backup path for SRv6 in an IS-IS network, include the transit-srh-insert statement at the [edit protocols isis source-packet-routing srv6] hierarchy level.
-
[See How to Enable SRv6 Network Programming in IS-IS Networks.]
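For example, a sketch derived from the hierarchies above (the locator name and algorithm value are illustrative; locator definition and other IS-IS configuration are omitted):
set protocols isis source-packet-routing srv6 locator LOC1 algorithm 128
set protocols isis source-packet-routing srv6 transit-srh-insert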
-
-
Support for SRv6 network programming and Layer 3 services over SRv6 in BGP. You can configure BGP-based Layer 3 services over an SRv6 core. You can enable Layer 3 overlay services with BGP as the control plane and SRv6 as the data plane. SRv6 network programming provides flexibility to leverage segment routing without deploying MPLS.
[See Understanding SRv6 Network Programming and Layer 3 Services over SRv6 in BGP.]
-
Operations, Administration and Management (OAM) ping support for segment routing with IPv6 (SRv6) network programming. You can perform an OAM ping operation for any SRv6 segment identifier (SID) whose behavior allows upper layer header processing for an applicable OAM payload. As segment routing with IPv6 data plane (SRv6) adds only the new Type 4 routing extension header, you can use the existing ICMPv6-based ping mechanisms for an SRv6 network to provide OAM support for SRv6. Ping with O-Flag (segment header) is not supported.
[See ITU-T Y.1731 Ethernet Service OAM Overview and How to Enable SRv6 Network Programming in IS-IS Networks.]
-
Support for SRv6 traceroute. We support the traceroute mechanism for segment routing for IPv6 (SRv6) segment identifiers. You can use traceroute for both UDP and ICMP probes. By default, traceroute uses UDP probes. For ICMP probes, use the traceroute command with the probe-icmp option.
[See How to Enable SRv6 Network Programming in IS-IS Networks.]
-
SRv6 support for static SR-TE policy. You can configure static segment routing–traffic engineering (SR-TE) tunnels over an SRv6 data plane. Use the following configuration commands to enable SRv6 support:
-
For an SR-TE policy:
set protocols source-packet-routing srv6 -
For an SR-TE tunnel:
set protocols source-packet-routing source-routing-path lsp name srv6 -
For an SR-TE segment list:
set protocols source-packet-routing source-routing-path segment-list srv6
-
Support for SRv6 micro-SIDs (uSIDs). You can compress multiple SRv6 addresses into a single IPv6 address (uSID).
[See Micro SID support in SRv6, micro-sid, and block.]
Services applications
-
Inline monitoring services support for packet mirroring with metadata.
-
Hardware-based IPFIX export for inline monitoring services.
-
Juniper Resiliency Interface (JRI) support.
[See Juniper Resiliency Interface.]
-
HTTP and TCP probe types for RPM. You can configure the http-get, http-metadata-get, and tcp-ping probe types for real-time performance monitoring (RPM) probes. You must configure the offload-type none statement to be able to commit the configuration.
[See probe-server, probe-type, and rpm.]
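For example, an RPM sketch using the http-get probe type (owner, test, and URL names are illustrative; the placement of offload-type none at the test level is an assumption):
set services rpm probe SLA-TESTS test HTTP-CHECK probe-type http-get
set services rpm probe SLA-TESTS test HTTP-CHECK target url http://198.51.100.10/index.html
set services rpm probe SLA-TESTS test HTTP-CHECK offload-type none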
-
Inline active flow monitoring support, including support for egress sampling, for multiple BGP next-hop support, and for MPLS, MPLS-IPv4, and MPLS-IPv6 templates.
Software installation and upgrade
-
Support for secure zero-touch provisioning (SZTP).
-
Support for ZTP using WAN interfaces.
[See Zero Touch Provisioning.]
Firmware upgrade support.
Additional feature support
Firewall filter support.
[See Firewall filter support.]
Policer and policer overhead interoperability.
[See Routing Policies, Firewall Filters, and Traffic Policers User Guide.]
-