JVD Validation Framework

Platforms / Devices Under Test (DUT) on this JVD

Table 36: Platforms / Devices Under Test (DUT)

| Component | Frontend | Storage Backend | GPU Backend (Clusters 1 and 2) |
|---|---|---|---|
| Architecture | 3-stage Clos | 3-stage Clos | 3-stage Clos, rail-optimized |
| Spine nodes | QFX5130-32CD x 2 | QFX5220-32CD x 2 | QFX5240-64OD x 4 |
| Leaf nodes | QFX5130-32CD x 2 | QFX5220-32CD x 2 | QFX5240-64CD x 8 |
| Leaf node <=> spine node links | 2 x 400GE x 2. Each leaf node is connected to 2 spine nodes. | 2 x 400GE x 2 or 3 x 400GE x 2. Each leaf node is connected to 2 spine nodes. | 2 x 400GE x 4. Each leaf node is connected to 4 spine nodes. |
| AMD MI300X (1) GPU server <=> leaf node links | 1 x 100GE x 1. Each GPU server is connected to 1 leaf node (mgmt_eth interface). | 1 x 200GE x 2. Each GPU server is connected to 2 leaf nodes (stor0_eth & stor1_eth interfaces). | 1 x 400GE x 8. Each GPU server is connected to 8 leaf nodes; 1 connection per GPU (gpu0_eth to gpu7_eth interfaces). |
| VAST storage servers <=> storage leaf node links (2) | N/A | 1 x 100GE x 2 (per VAST D-node <=> leaf node connection) | N/A |
  1. Supermicro AS-8125GS-TNMR2 Dual AMD EPYC 8U GPU SuperServer x 2 & Dell PowerEdge XE9680 6U GPU Rack Server x 2
  2. VAST Data Ceres Platform Storage (Dbox) & VAST Data Quad Server Chassis (Cbox) - QUAD-4N-IL (DELL) x 2
Note: The QFX5220-64CD and QFX5230-64CD acting as leaf nodes, as well as the QFX5230-64CD and PTX10008 acting as spine nodes, are covered in AI Data Center Network with Juniper Apstra, NVIDIA GPUs, and WEKA Storage—Juniper Validated Design (JVD). That document also covers WEKA storage and NVIDIA GPU servers.
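
The link notation in Table 36 reads as <links per neighbor> x <speed> x <number of neighbor nodes>; for example, "2 x 400GE x 4" means two 400GE links to each of four spine nodes. The short Python sketch below is an illustration only (not part of the JVD): it models that notation with a hypothetical LinkSpec class and works out per-node link and bandwidth totals for the GPU backend fabric.

```python
# Illustrative sketch only (not from the JVD): interpreting the link notation in
# Table 36 as <links per neighbor> x <speed> x <number of neighbor nodes>.
from dataclasses import dataclass


@dataclass
class LinkSpec:
    links_per_neighbor: int  # e.g. the leading "2" in "2 x 400GE x 4"
    speed_gbps: int          # e.g. 400 for 400GE
    neighbors: int           # e.g. the trailing "4" (spine nodes per leaf)

    def links_per_node(self) -> int:
        return self.links_per_neighbor * self.neighbors

    def bandwidth_per_node_gbps(self) -> int:
        return self.links_per_node() * self.speed_gbps


# GPU backend uplinks: "2 x 400GE x 4" from each of the 8 leaf nodes.
gpu_backend_uplinks = LinkSpec(links_per_neighbor=2, speed_gbps=400, neighbors=4)
gpu_leaf_nodes = 8

print(gpu_backend_uplinks.links_per_node())           # 8 x 400GE uplinks per leaf
print(gpu_backend_uplinks.bandwidth_per_node_gbps())  # 3200 Gbps uplink per leaf
print(gpu_backend_uplinks.links_per_node() * gpu_leaf_nodes)  # 64 leaf-spine links

# GPU server side of the rail-optimized backend: "1 x 400GE x 8", i.e. one 400GE
# link per GPU (gpu0_eth..gpu7_eth), each landing on a different leaf node.
gpu_server_links = LinkSpec(links_per_neighbor=1, speed_gbps=400, neighbors=8)
print(gpu_server_links.links_per_node())              # 8 x 400GE links per server
```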