Data Center Networking

Build data center spine-and-leaf networks with solutions providing industry-leading throughput and scalability, an extensive routing stack, the open programmability of the Junos OS, and a broad set of EVPN-VXLAN and IP fabric capabilities.
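To make those building blocks concrete, here is a minimal, hypothetical Junos set-command sketch of an EVPN-VXLAN leaf (VTEP) in such a fabric. All interface names, addresses, and AS/VNI numbers are illustrative assumptions rather than values from this page; exact syntax varies by platform and Junos release, and an IP underlay providing loopback reachability between leaf and spine is assumed to already exist.

    # Loopback used as the VTEP source address
    set interfaces lo0 unit 0 family inet address 192.0.2.11/32
    # iBGP overlay session carrying EVPN routes to a spine route reflector
    set protocols bgp group OVERLAY type internal
    set protocols bgp group OVERLAY local-address 192.0.2.11
    set protocols bgp group OVERLAY family evpn signaling
    set protocols bgp group OVERLAY neighbor 192.0.2.1
    # EVPN with VXLAN encapsulation for all configured VNIs
    set protocols evpn encapsulation vxlan
    set protocols evpn extended-vni-list all
    set switch-options vtep-source-interface lo0.0
    set switch-options route-distinguisher 192.0.2.11:1
    set switch-options vrf-target target:65000:1
    # Map a VLAN to a VXLAN network identifier (VNI)
    set vlans vlan100 vlan-id 100
    set vlans vlan100 vxlan vni 10100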

Rethink data center operations and fabric management with turnkey Juniper Apstra software. Automate the entire network lifecycle to simplify design and deployment and provide closed-loop validation. With Apstra, customers have achieved 90% faster time to delivery, 70% faster time to resolution, and 80% OpEx reduction.

Leaf Top-of-rack, Spine

The QFX5240 line offers up to 800GbE interfaces to support AI data center networking deployments with AI/ML workloads and other high-speed, high-density, spine-and-leaf IP fabrics where scalability, performance, and low latency are critical.

Use Cases: AI data center leaf/spine, Data center fabric leaf/spine/super spine

Port Density:

  • QFX5240-64QD: 64 ports of QSFP-DD 800GbE; breakout supports 128 x 400GbE or 256 x 100GbE
  • QFX5240-64OD: 64 ports of OSFP 800GbE; breakout supports 128 x 400GbE, 256 x 100GbE, or 256 x 50GbE (a breakout configuration sketch follows below)

Throughput: Up to 102.4 Tbps (bidirectional)
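As a sketch of how such port breakouts are typically configured: classic Junos QFX5K platforms channelize ports under the chassis hierarchy, while Junos OS Evolved platforms such as the QFX5240 set speed and sub-port count on the interface itself. The port numbers and speeds below are illustrative assumptions; supported combinations vary by model and release, so consult the platform documentation.

    # Classic Junos (e.g., many earlier QFX5K models): channelize a port via the chassis hierarchy
    set chassis fpc 0 pic 0 port 0 channel-speed 100g

    # Junos OS Evolved: split one 800GbE port into 2 x 400GbE sub-ports
    set interfaces et-0/0/0 speed 400g
    set interfaces et-0/0/0 number-of-sub-ports 2
    # The resulting sub-interfaces then appear as et-0/0/0:0 and et-0/0/0:1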

Spine, Leaf Top-of-rack

The QFX5700 line supports very large, dense, and fast 400GbE IP fabrics based on proven internet-scale technology. With 10/25/40/50/100/200/400GbE interface options, the QFX5700 is an optimal choice for spine-and-leaf deployments in enterprise, high-performance computing, service provider, and cloud provider data centers.

Use Cases: Data Center Fabric Spine, EVPN-VXLAN Fabric, Data Center Interconnect (DCI) Border, Secure DCI, Multitier Campus, Campus Fabric

Port Density:

  • 32 x 400GbE QSFP56-DD
  • 64 x 200GbE
  • 128 x 100GbE
  • 144 x 50/40/25/10GbE

Throughput: Up to 25.6 Tbps (bidirectional)

Spine

The QFX5230 Switch offers up to 400GbE interfaces to support high-speed, high-density, spine-and-leaf IP fabrics. Prime use cases include AI data center networks with AI/ML workloads where scalability, performance, and low latency are critical.

Use Cases: Data center fabric spine, super spine (including AI data center networking), IP storage networking, and edge/data center interconnect (DCI)

Port Density:

  • 64 ports of QSFP56-DD 400GbE; breakout supports 128 x 200GbE, 256 x 100GbE, 64 x 40GbE, 256 x 25GbE, or 256 x 10GbE

Throughput: Up to 25.6 Tbps (bidirectional)

Spine, Leaf Top-of-rack

The QFX5130 line offers high-density, cost-optimized, 1-U, 400GbE and 100GbE fixed-configuration switches based on the Broadcom Trident 4 processor. It’s ideal for environments where cloud services are being added. With 10/25/40/100/400GbE interface options, the QFX5130 is an optimal choice for spine-and-leaf deployments in enterprise, service provider, and cloud provider environments.

Use Cases: Data Center Fabric Spine, Campus Distribution/Core

Port Density:

  • QFX5130-32CD/QFX5130E-32CD: 32 x 400GbE QSFP-DD/QSFP+/QSFP28 and 2 x 10GbE SFP+
    Throughput: Up to 25.6 Tbps (bidirectional)
  • QFX5130-48C: 48 x 100GbE SFP56-DD and 8 x 400GbE QSFP-DD
    Throughput: Up to 16 Tbps (bidirectional)
WAN Core, Peering, DCI - Cloud Edge, Core Routing, DC Spine

The modular PTX10004, PTX10008, and PTX10016 Packet Transport Routers directly address the massive bandwidth demands placed on networks today and in the foreseeable future. They bring ultra-high port density, native 400GE and 800GE inline MACsec, and latest-generation ASIC investment to the most demanding WAN and data center architectures.

  • 28.8 Tbps capacity per line card
  • 10M FIB, 100K+ SR tunnels (see the segment-routing sketch below)
  • SRv6, BIER, HQoS, INT-MD
  • Native 400G and 800G inline MACsec
  • Ultra-high 400GE and 800GE port density
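As a hedged illustration of the segment-routing side of that feature set, the sketch below enables SR-MPLS under IS-IS in Junos set-command form. The node-segment index and SRGB label range are illustrative assumptions, not recommendations, and SRv6 deployments use a different configuration.

    # Advertise a node segment (prefix SID) for this router via IS-IS
    set protocols isis source-packet-routing node-segment ipv4-index 101
    # Define the segment routing global block (SRGB) label range
    set protocols isis source-packet-routing srgb start-label 80000 index-range 4096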
Cloud Metro, Metro Agg, Data Center Leaf Spine

The ACX7100, part of the ACX7000 family, delivers high density and performance in a compact, 1 U footprint. Ideal for service provider, large enterprise, wholesale, and data center applications, it helps operators deliver premium customer and user experiences.

Form Factor: Compact 1 U, with 59.49-cm depth

Throughput: Up to 4.8 Tbps

Port Density:

  • ACX7100-48L: 48 x 10/25/50GbE and 6 x 400GbE
  • ACX7100-32C: 32 x 40/100GbE and 4 x 400GbE

WAN Core, Peering, DCI - Cloud Edge, Core Routing

The PTX10003 Packet Transport Router offers on-demand scalability for critical core and peering functions. With high-density 100GbE, 200GbE, and 400GbE ports, operators can meet high-volume demands with efficiency, programmability, and performance at scale.

  • High-density platform
  • 100GbE and 400GbE interfaces
  • Compact 3 U form factor
  • 100GbE inline MACsec on all ports
10G Copper, Mist AI, Spine, Leaf Top-of-rack

The QFX5120 line offers 1/10/25/40/100GbE switches designed for data center, data center edge, data center interconnect, and campus deployments with requirements for low-latency Layer 2/Layer 3 features and advanced EVPN-VXLAN capabilities.

Use Cases: Data Center Fabric Leaf/Spine, Campus Distribution/Core, applications requiring MACsec

Port Density:

  • QFX5120-48T: 48 x 1/10GbE RJ-45 and 6 x 40/100GbE QSFP+/QSFP28
  • QFX5120-48Y: 48 x 1/10/25GbE SFP/SFP+ and 8 x 40/100GbE QSFP+/QSFP28
  • QFX5120-48YM: 48 x 1/10/25GbE SFP/SFP+ and 8 x 40/100GbE QSFP+/QSFP28
  • QFX5120-32C: 32 x 40/100GbE QSFP+/QSFP28 and 2 x 10GbE SFP+

Throughput: Up to 2.16/4/6.4 Tbps (bidirectional)

MACsec: AES-256 encryption on all ports (QFX5120-48YM)
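As a sketch of how MACsec is typically enabled on Junos platforms such as this one, using a static connectivity association key (static CAK). The association name, interface, and key placeholders below are assumptions, not values from this page; the hex key material must meet platform-specific length requirements.

    # Connectivity association using pre-shared keys and the AES-256 cipher
    set security macsec connectivity-association CA-DC security-mode static-cak
    set security macsec connectivity-association CA-DC cipher-suite gcm-aes-256
    set security macsec connectivity-association CA-DC pre-shared-key ckn <hex-key-name>
    set security macsec connectivity-association CA-DC pre-shared-key cak <hex-secret-key>
    # Bind the association to a port to start encrypting traffic on it
    set security macsec interfaces et-0/0/48 connectivity-association CA-DC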

Mist AI, Leaf Top-of-rack

QFX5110 10/40GbE switches offer flexible deployment options and rich automation features for data center and campus deployments that require low-latency Layer 2/Layer 3 features and advanced EVPN-VXLAN capabilities. The QFX5110 provides universal building blocks for industry-standard architectures such as spine-and-leaf fabrics.

Use Case: Data Center Fabric Leaf

Port Density:

  • QFX5110-48S: 48 x 1/10GbE SFP/SFP+ and 4 x 40/100GbE QSFP+/QSFP28
  • QFX5110-32Q: 32 x 40GbE QSFP+, or 20 x 40GbE QSFP+ and 4 x 100GbE QSFP28

Throughput: Up to 1.76/2.56 Tbps

WAN Core, Peering, DCI - Cloud Edge, Core Routing

The PTX10001-36MR is a high-capacity, space- and power-optimized routing and switching platform. It delivers 9.6 Tbps of throughput and 10.8 Tbps of I/O capacity in a 1 U, fixed form factor. Based on the Juniper Express 4 ASIC, the platform provides dense 100GbE and 400GbE connectivity for highly scalable routing and switching in cloud, service provider, and enterprise networks and data centers.

  • 400GbE inline MACsec
  • 4th generation silicon
  • Scale up and scale out
  • 9.6-Tbps forwarding capacity
Spine, Leaf Top-of-rack

The QFX5220 line offers up to 400GbE interfaces for very large, dense, and fast standards-based fabrics built on proven internet-scale technology. QFX5220 switches are an optimal choice for spine-and-leaf data center fabric deployments as well as metro use cases.

Use Case: Data Center Fabric Spine

Port Density:

  • QFX5220-32CD: 32 x 40/100/400GbE QSFP56-DD and 2 x 10GbE SFP+
  • QFX5220-128C: 128 x 100GbE QSFP28 and 2 x 10GbE SFP+

Throughput: Up to 25.6 Tbps (bidirectional)

Spine, Leaf Top-of-rack

The QFX5210 line offers line-rate, low-latency 10/25/40/100GbE switches for building large, standards-based fabrics. QFX5210 Switches are an optimal choice for spine-and-leaf data center fabric deployments as well as metro use cases.

Use Case: Data Center Fabric Leaf/Spine

Port Density:

  • QFX5210-64C/-S: 64 x 40/100GbE QSFP+/QSFP28

Throughput: Up to 12.8 Tbps (bidirectional)

SONiC: ONIE and SONiC images preinstalled on QFX5210-64C-S

Spine, Leaf Top-of-rack

The QFX5200 line offers line-rate, low-latency 10/25/40/50/100GbE switches for building large, standards-based fabrics. QFX5200 Switches are an optimal choice for spine-and-leaf fabric deployments in the data center as well as metro use cases.

Use Case: Data Center Fabric Leaf/Spine

Port Density:

  • QFX5200-48Y: 48 x 10/25GbE SFP+/SFP28 and 6 x 40/100GbE QSFP+/QSFP28
  • QFX5200-32C/-S: 32 x 40/100GbE QSFP+/QSFP28

Throughput: Up to 3.6/6.4 Tbps (bidirectional)

SONiC: ONIE and SONiC images preinstalled on QFX5200-32C-S

MACsec, Deep Buffer, Modular, Spine

QFX10008 and QFX10016 Switches support the most demanding data center, campus, and routing environments. With our custom silicon Q5 ASICs, deep buffers, and up to 96 Tbps throughput, these switches deliver flexibility and capacity for long-term investment protection.

Use Case: Data Center Fabric Spine

Chassis Options:

  • QFX10008: 13 U chassis, up to 8 line cards
  • QFX10016: 21 U chassis, up to 16 line cards

Port Density:

  • QFX10000-30C-M: 30 x 100/40GbE, MACsec-enabled
  • QFX10000-30C: 30 x 100/40GbE
  • QFX10000-36Q: 36 x 40GbE or 12 x 100GbE
  • QFX10000-60S-6Q: 60 x 1/10GbE with 6 x 40GbE or 2 x 100GbE
  • QFX10000-12C-DWDM: 6 x 200Gbps

Throughput: Up to 96 Tbps (bidirectional)

Deep Buffers: 100 ms per port

Precision Time Protocol (PTP)

CASE STUDY

Data Consolidation and Modernization Bolster T‑Systems’ IT Service Delivery

T‑Systems offers a wide range of digital services in 20 countries. The provider ran 45 global data centers and decided to consolidate and modernize them to simplify operations and meet new performance, availability, and scalability requirements. 

Discovery Tool

Get the answers to your data center challenges

Get instant, data-backed expert recommendations by answering a few questions in our Data Center Discovery Tool. Share the business case with your team, watch a demo, or call us to learn more. 

Try it. Right now.

Get hands-on with our IP-EVPN fabric solutions - for free!

Live Events and On-Demand Demos

Explore the journey to a transformed network.