Additional Features Optimized for AI-ML Fabrics
For more information about features optimized for AI-ML fabrics, see the AI-ML Data Center Feature Guide.
-
BGP Support for Global Load Balancing in DC Fabric (QFX5240)—In a DC fabric, hashing cannot ensure even load distribution across all ECMP links, which might result in congestion on some links and underutilization on others. Dynamic load balancing helps to avoid congested links to mitigate local congestion. However, dynamic load balancing cannot address all congestion. For example, AI-ML traffic that has elephant flows and lacks entropy causes congestion in the fabric. In this case, global load balancing (GLB) helps to mitigate the congestion. Global load balancing hashes a route that has multiple ECMP links onto several links to balance the load.
In a Clos network, congestion on the first two next hops impacts the load-balancing decisions of the local node and the previous-hop nodes, triggering global load balancing. If the route has only one next-next-hop node, a simple path quality profile is created. If the route has more than one next-next-hop node, a simple path quality profile is created for each next-next-hop node.
To enable global load balancing, include the global-load-balancing statement at the [edit protocols bgp] hierarchy level. This statement is disabled by default. [See global-load-balancing.]
-
Configurable FlowSet table in DLB flowlet mode (QFX5130-32CD, QFX5130E-32CD, QFX5130-48C, QFX5130-48CM, QFX5220, QFX5230-64CD, QFX5240-64OD, QFX5240-64QD, QFX5700, and QFX5700E)—Dynamic load balancing (DLB) uses the FlowSet table to determine the egress interface of flows. The table holds 32,768 entries, distributed among 128 DLB equal-cost multipath (ECMP) groups. By default, each ECMP group receives 256 entries. You can modify this distribution to accommodate more flows per ECMP group, thereby enhancing flow distribution. [See Configure Flowset Table Size in DLB Flowlet Mode.]
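The default distribution described above works out as follows; a quick arithmetic sketch (the 512-entry figure is an illustrative alternative, not a documented default):

```shell
# FlowSet table sizing: 32,768 entries shared by 128 DLB ECMP groups.
echo $((32768 / 128))  # default entries per ECMP group -> 256
# Allocating more entries per group (for example, 512) accommodates more
# flows per group at the cost of fewer distinct DLB ECMP groups:
echo $((32768 / 512))  # groups available at 512 entries each -> 64
```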
-
Reactive path rebalancing (QFX5240-64OD and QFX5240-64QD)—Use the enhanced flowlet mode in dynamic load balancing (DLB) to configure an inactivity interval for traffic on an outgoing interface. If the quality of the outgoing link deteriorates over time without the traffic exceeding the inactivity interval, the traffic is reassigned to a better-quality link within flowlet mode. This approach overcomes the limitations of the classic flowlet mode and ensures optimal traffic distribution. [See Reactive Path Rebalancing.]
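A hedged configuration sketch follows. The hierarchy and the interval value below are assumptions for illustration, based on where DLB flowlet settings typically live; verify the exact statement path in the Reactive Path Rebalancing guide for your platform:

```shell
# Hypothetical sketch: configure DLB flowlet mode with an inactivity interval.
# The statement path and the 64-microsecond value are illustrative assumptions.
set forwarding-options enhanced-hash-key ecmp-dlb flowlet inactivity-interval 64
commit
```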
-
SNMP support for PFC, ECN, and CoS ingress packet drop accounting (QFX5230-64CD, QFX5240-64OD, and QFX5240-64QD)—We have introduced SNMP support to account for packets dropped because of ingress port congestion. You can view and export the error counter data for ECN, ingress drops, and PFC by using the following commands:
-
show snmp mib walk ifJnxTable
-
show snmp mib walk jnxCosPfcPriorityTable
[See show snmp mib and SNMP MIBs Supported by Junos OS and Junos OS Evolved.]
-
-
Extended sFlow functionality (QFX5230-64CD, QFX5240-64OD, and QFX5240-64QD)—We have extended the sFlow monitoring functionality to support the export of sFlow sample packets through the mgmt_junos interface and non-default VRF WAN ports.
The management Ethernet interface provides the out-of-band management network by default. Deploying the mgmt_junos VRF instance ensures that management traffic uses private IPv4 and IPv6 routing tables. The new routing-instance option at the [edit protocols sflow collector] hierarchy level specifies the routing instance name. sFlow can now export sample packets through non-default VRF WAN ports, which allows it to sample traffic on configured ports based on the sample rate and port information.
The sFlow system comprises an agent embedded in the device and up to four external collectors. With these updates, collectors can be spread across different VRFs, and the software forwarding infrastructure daemon (SFID) determines the correct next-hop address for collector IPs, ensuring proper routing.
The show sflow collector detail command now displays two additional fields: “Routing Instance Name,” which indicates the VRF in which the collector is reachable, and “Routing Instance Id,” which corresponds to that VRF. [See collector, show sflow collector, and System Logging and Routing Instances.]
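Assuming a collector reachable through the mgmt_junos VRF, a brief sketch of the new option (the collector address is illustrative):

```shell
# Export sFlow samples to a collector reachable in the mgmt_junos
# routing instance. The collector address 192.0.2.10 is illustrative.
set protocols sflow collector 192.0.2.10 routing-instance mgmt_junos
commit
# Then, from operational mode, verify the new routing-instance fields:
# show sflow collector detail
```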
-
Remote port mirroring to IPv4/IPv6 address (GRE encapsulation) with DSCP, source-address, and rate-limiting parameters (QFX5230-64CD, QFX5240-64OD, and QFX5240-64QD)—You can configure DSCP, source-address, and rate-limiting parameters for remote port mirroring to IPv4 or IPv6 addresses. Use remote port mirroring to copy packets entering a port or VLAN and send the copies to the IPv4 or IPv6 address of a device running an analyzer application on a remote network (sometimes referred to as “extended remote port mirroring”). The mirrored packets are GRE-encapsulated.
You configure the source-address or source-ipv6-address, dscp, and forwarding-class options—either in the analyzer configuration or the port-mirroring configuration—under these hierarchies, respectively:
-
[edit forwarding-options analyzer instance instance-name output]
-
[edit forwarding-options port-mirroring instance instance-name family inet|inet6 output]
You configure the forwarding class and the shaping-rate option under the class-of-service hierarchy, as follows:
-
set class-of-service forwarding-classes class class-name queue-num queue-number
-
set class-of-service interfaces interface-name scheduler-map map-name
-
set class-of-service scheduler-maps map-name forwarding-class class-name scheduler scheduler-name
-
set class-of-service schedulers scheduler-name shaping-rate rate
[See Port Mirroring and Analyzers.]
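The hierarchies above can be combined into one working sketch. The instance name, interfaces, addresses, queue number, and rate below are illustrative, and the analyzer input and output ip-address lines are assumptions beyond the exact statements listed in this note:

```shell
# Hedged example: mirror traffic entering et-0/0/1 to a remote analyzer at
# 192.0.2.10 over GRE, marking copies with DSCP 46. Values are illustrative.
set forwarding-options analyzer instance mirror-1 input ingress interface et-0/0/1.0
set forwarding-options analyzer instance mirror-1 output ip-address 192.0.2.10
set forwarding-options analyzer instance mirror-1 output source-address 192.0.2.1
set forwarding-options analyzer instance mirror-1 output dscp 46
set forwarding-options analyzer instance mirror-1 output forwarding-class mirror-fc
# Rate-limit the mirrored copies by shaping the queue that carries them:
set class-of-service forwarding-classes class mirror-fc queue-num 3
set class-of-service schedulers mirror-sched shaping-rate 100m
set class-of-service scheduler-maps mirror-map forwarding-class mirror-fc scheduler mirror-sched
set class-of-service interfaces et-0/0/2 scheduler-map mirror-map
commit
```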
-