
    Key Characteristics of Implementation

    The MetaFabric 1.0 solution was verified in the Juniper Networks solution validation labs using the following set of design elements and features:

    • Transport: MC-LAG Active/Active with VRRP and SRX JSRP, IRB and VLAN, Lossless Ethernet
    • Protocols: OSPF, BGP
    • High availability: NSSU, ISSU, SRX Cluster
    • Security: Perimeter - SRX3600, App Security - Firefly Host
    • Remote access: SA network and VM appliance configuration
    • OOB: EX4300-VC
    • Compute and virtualization: IBM Flex chassis, VMware 5.1, vCenter
    • Network management: Junos Space, Network Director 1.5
    • Application load balancer: F5 DSR LB implementation
    • Quality of service: Lossless Ethernet, PFC, DCBX
    • Scale and performance: SharePoint, Exchange, Wikimedia scale with Shenick Tester
    • Access and aggregation: POD1 and POD2, each configured with a QFX3000-M QFabric system as the access and aggregation switch

    POD1 (QFX3000-M QFabric) Configuration

    The POD1 Juniper Networks® QFX3000-M QFabric System is configured with the following elements:

    • Three redundant server node groups (RSNGs) connected to two IBM Flex blade servers
    • IBM Flex-1 has 40-Gigabit CNA connected to an RSNG with QFX3600 nodes (RSNG4)
    • IBM-Flex-2 has 10-Gigabit pass-thru modules connected to RSNG2 and RSNG3
    • EMC VNX storage is connected to the QFabric system for storage access through iSCSI and NFS
    • The QFX3000-M QFabric system is also configured with one network node group (NNG) whose two nodes connect to the EX9214 core switch using 4 x 24-port link aggregation groups (LAGs) configured as trunk ports
    • POD1 (QFabric NNG toward EX9214): Area 10 (totally stubby area); see the configuration sketch after this list
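
    The following Junos sketch illustrates the general shape of this POD1 uplink and area configuration. It is a minimal example, not the validated lab configuration: the node-group interface name, VLAN ID, and addressing are drawn loosely from Table 6 and should be treated as placeholders.

        # Illustrative only -- interface, VLAN, and addressing values are placeholders
        # LAG from the QFabric network node group toward the EX9214 core, as a trunk
        set interfaces nw-ng-0:ae0 aggregated-ether-options lacp active
        set interfaces nw-ng-0:ae0 unit 0 family ethernet-switching port-mode trunk
        set interfaces nw-ng-0:ae0 unit 0 family ethernet-switching vlan members vlan50
        # RVI terminating the transport VLAN, used as the OSPF interface toward the core
        set vlans vlan50 vlan-id 50
        set vlans vlan50 l3-interface vlan.50
        set interfaces vlan unit 50 family inet address 192.168.50.3/24
        # Area 10 configured as a stub area (the ABR makes it totally stubby)
        set protocols ospf area 0.0.0.10 stub
        set protocols ospf area 0.0.0.10 interface vlan.50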

    POD2 (QFX3000-M QFabric) Configuration

    The Juniper Networks® QFX3000-M QFabric System deployed in POD2 is configured with the following elements:

    • Three RSNGs connected to two IBM Flex blade servers
    • IBM Flex-2 has 10-Gigabit pass-thru modules connected to RSNG2 and RSNG3
    • EMC VNX storage is connected to the QFabric system for storage access through iSCSI and NFS
    • The QFX3000-M QFabric system is also configured with one NNG whose two nodes connect to the EX9214 core switch using 4 x 32-port LAGs configured as trunk ports
    • POD2 (QFabric NNG toward EX9214): Area 11 (totally stubby area)

    Core Switch (EX9214) Implementation

    The core role deployed in the solution verification lab features the Juniper Networks® EX9214 Ethernet Switch with the following configuration elements:

    • Layer 2 MC-LAG Active/Active is configured on the EX9214 toward the QFX3000-M QFabric systems, the F5 load balancer, and the SRX3600, and on the MX240 toward the SRX3600
    • IRB is configured on the EX9214, the QFabric systems, and the QFX-VC to terminate the Layer 2/Layer 3 boundary
    • A static route is configured on the core switch to direct traffic from the Internet to the load balancer (LB)
    • OSPF is configured to send only a default route into the stub areas toward POD1 and POD2
    • IRB and VRRP are configured for all MC-LAG links
    • The core switch is configured as an area border router (ABR) with all three areas connected
    • OSPF area 0 runs on ae20 between the two core switches; see the configuration sketch after this list
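
    A minimal sketch of the core-switch OSPF and static-route intent described above follows. The IRB units come from Table 6, and the LB next hop and VIP subnet are taken from Tables 3 and 6; the stub default metric and the exact static-route design are assumptions.

        # Illustrative only -- the metric and the static-route design are assumptions
        # ABR with the backbone on ae20 and the POD-facing IRB interfaces in areas 10 and 11
        set protocols ospf area 0.0.0.0 interface ae20.0
        set protocols ospf area 0.0.0.10 stub no-summaries
        set protocols ospf area 0.0.0.10 stub default-metric 10
        set protocols ospf area 0.0.0.10 interface irb.50
        set protocols ospf area 0.0.0.11 stub no-summaries
        set protocols ospf area 0.0.0.11 stub default-metric 10
        set protocols ospf area 0.0.0.11 interface irb.54
        # Static route steering Internet-destined application traffic to the F5 external address
        set routing-options static route 10.94.127.128/26 next-hop 192.168.15.5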

    Edge Firewall (SRX3600) Implementation

    The edge firewall role was tested and verified featuring the Juniper Networks® SRX3600 Services Gateway. The edge firewall implementation was configured with the following elements:

    • An SRX active/backup cluster is configured
    • reth1 is configured toward the edge routers in the untrust zone
    • reth0 is configured toward the core switch in the trust zone
    • A security policy is configured to allow traffic from the untrust zone to reach only the DC applications
    • Source NAT is configured for the application servers (private addresses) to provide them with Internet access
    • Destination NAT is configured for remote access to the data center, mapping the Pulse gateway internal IP address to an Internet-accessible IP address; see the configuration sketch after this list
    • Firewall is configured in OSPF area 1
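
    The sketch below outlines the cluster and NAT pieces of this design. The redundancy-group priorities, pool and rule-set names, and the 172.16.0.0/16 internal summary are assumptions; the source NAT pool and the Pulse gateway addresses are taken from Tables 3 and 4.

        # Illustrative only -- priorities, names, and the internal summary prefix are assumptions
        # Active/backup chassis cluster with redundant Ethernet interfaces in trust/untrust zones
        set chassis cluster reth-count 2
        set chassis cluster redundancy-group 1 node 0 priority 200
        set chassis cluster redundancy-group 1 node 1 priority 100
        set interfaces reth0 redundant-ether-options redundancy-group 1
        set interfaces reth1 redundant-ether-options redundancy-group 1
        set security zones security-zone trust interfaces reth0.0
        set security zones security-zone untrust interfaces reth1.0
        # Source NAT so the private application servers can reach the Internet
        set security nat source pool snat-pool address 10.94.127.0/27
        set security nat source rule-set trust-to-untrust from zone trust
        set security nat source rule-set trust-to-untrust to zone untrust
        set security nat source rule-set trust-to-untrust rule app-out match source-address 172.16.0.0/16
        set security nat source rule-set trust-to-untrust rule app-out then source-nat pool snat-pool
        # Destination NAT mapping the Pulse gateway to its Internet-reachable address
        set security nat destination pool pulse-gw address 10.94.63.24/32
        set security nat destination rule-set from-internet from zone untrust
        set security nat destination rule-set from-internet rule pulse match destination-address 10.94.127.33/32
        set security nat destination rule-set from-internet rule pulse then destination-nat pool pulse-gw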

    Edge Routers (MX240) Implementation

    The edge routing role in the MetaFabric 1.0 solution features the Juniper Networks® MX240 3D Universal Edge Router. The edge routing was configured with the following elements:

    • An MX240 pair is configured as edge routers connected to the service provider networks for DC Internet access
    • Both edge-r1 and edge-r2 have EBGP peerings with SP1 and SP2
    • IBGP is configured between edge-r1 and edge-r2 with a next-hop self export policy
    • Local preference is configured so that SP1 is the preferred exit point to the Internet
    • Conditional default route injection (based on the presence of an Internet route) into OSPF is configured on both edge-r1 and edge-r2 toward the firewall/core switches so that VDC devices have Internet access; see the configuration sketch after this list
    • Edge routers (two MX240s): Area 1
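
    A minimal sketch of this edge routing design follows. The AS numbers, peering addresses, and policy names are assumptions (only the SP1/SP2 subnets come from Table 3), and the conditional default route assumes a 0.0.0.0/0 route is already present in inet.0, for example learned from the provider.

        # Illustrative only -- AS numbers, neighbor addresses, and policy names are assumptions
        set routing-options autonomous-system 64500
        # EBGP to both service providers
        set protocols bgp group sp1 type external
        set protocols bgp group sp1 neighbor 10.94.127.226 peer-as 64511
        set protocols bgp group sp2 type external
        set protocols bgp group sp2 neighbor 10.94.127.242 peer-as 64512
        # IBGP between edge-r1 and edge-r2 with a next-hop-self export policy
        set policy-options policy-statement nhs then next-hop self
        set protocols bgp group ibgp type internal
        set protocols bgp group ibgp local-address 192.168.100.1
        set protocols bgp group ibgp neighbor 192.168.100.2
        set protocols bgp group ibgp export nhs
        # Prefer SP1 as the exit point to the Internet
        set policy-options policy-statement prefer-sp1 then local-preference 200
        set protocols bgp group sp1 import prefer-sp1
        # Inject a default route into OSPF only while an Internet route is present
        set policy-options condition internet-up if-route-exists 10.94.127.224/28
        set policy-options condition internet-up if-route-exists table inet.0
        set policy-options policy-statement inject-default term 1 from route-filter 0.0.0.0/0 exact
        set policy-options policy-statement inject-default term 1 from condition internet-up
        set policy-options policy-statement inject-default term 1 then accept
        set protocols ospf export inject-default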

    Compute (IBM Flex chassis) Implementation

    The computing role in the solution test labs was built using compute hardware from IBM, including the IBM Flex Chassis. This role in the solution was configured with the following elements:

    • The IBM Flex servers are configured with multiple ESXi hosts, which host all the VMs running the business-critical applications (SharePoint, Exchange, MediaWiki, and WWW)
    • A distributed vSwitch is configured across the multiple physical ESXi hosts on the IBM servers

    OOB-Mgmt (EX4300-VC) Implementation

    The entire solution is managed out-of-band (OOB) featuring Juniper Networks® EX4300 Ethernet Switches with Virtual Chassis technology. The OOB management role was configured and tested with the following elements:

    • All the network device OOB connections are plugged into the EX4300-VC (100 m and 1 Gbps)
    • OOB-MGMT (EX4300-VC): OSPF area 0
    • Two IBM x3750 standalone servers are connected to the EX4300-VC, hosting all the management VMs (vCenter, Junos Space, Network Director 1.5, domain controller, and Pulse gateway)
    • VMware vSphere and Network Director 1.5 are used to orchestrate the VMs on the test bed
    • Network Director 1.5 is used to configure or orchestrate network configuration and provisioning
    • Juniper Networks VGW gateway is configured on a VM to provide VM-to-VM application security

    Hardware and Software Requirements

    This implementation guide employs the hardware and software components shown in Table 1:

    Table 1: Hardware and Software deployed in solution testing

    Hardware | Software | Features
    QFX3000-M QFabric system | 13.1X50-D15 | VLANs, LAG, NNG, RSNG, RVI, OSPF
    EX9208 | 13.2R3.2 | MC-LAG (Active/Active), OSPF, VLANs, IRB
    SRX3600 | 12.1X44-D30.4 | Clustering, NAT, firewall rules
    MX480 | 13.2R1.7 | MC-LAG (Active/Active), OSPF, BGP
    F5 VIPRION 4480 | 10.2.5 Build 591.0 | DSR load balancing (direct server return mode)
    IBM Flex | VMware ESXi 5.1 | Compute nodes with 10-Gigabit and 40-Gigabit CNA and 10-Gigabit pass-thru
    IBM x3750 | VMware ESXi 5.1 | Standalone server
    EMC VNX | 7.1.56-5 |
    Juniper Firefly Host | 5.5 | Application security (VMs)
    SA | 7.4R1.0 | SA VM appliance for remote access security

    In addition, Table 2 provides an overview of the network management and provisioning tools used to validate the solution.

    Table 2: Software deployed in MetaFabric 1.0 test bed

    Application | Hardware installed | Version | Features
    Network Director | VMs | 1.5 | Virtual-view (VM provisioning/monitoring)
    Security Director | VMs | 13.1R1 | Provisioning and monitoring SRX3600
    Junos Space | VMs | 13.1R1 |
    VMware vCenter | VMs | 5.1 |
    Security Design | VMs | | Not supported
    Service Now | VMs | 13.1R1 |

    The solution is configured with IP addressing as shown in Table 3:

    Table 3: Networks and VLANs Deployed in the Test Lab

    Network Subnets | Network | Gateway | VLAN-ID | VLAN Name / Notes
    Network Devices | 10.94.47.0/27 | 10.94.47.30 | 804 | Network-VLAN
    Security Devices | 10.94.47.32/27 | 10.94.47.62 | 801 | Security-VLAN
    Unused | 10.94.47.64/28 | 10.94.47.78 | 803 | Storage-VLAN
    Storage Devices | 10.94.47.80/28 | 10.94.47.94 | 800 | compute-vlan
    IBM Compute node Console IP | 10.94.47.96/27 | 10.94.47.126 | 800 | compute-VLAN
    ESX Compute Node Management IP | 10.94.47.128/25 | 10.94.47.254 | 800 | compute-VLAN
    VMs | 10.94.63.0/24 | 10.94.63.254 | |

    Internet Routable Subnets:

    VDC App Server Internet IP (source NAT pool) | 10.94.127.0/27 | 10.94.127.30 | |
    SA IP address | 10.94.127.32/27 | 10.94.127.62 | |
    Unused | 10.94.127.64/26 | 10.94.127.126 | | Unrestricted address space for tester ports connected inside the VDC; no security policy for this address space
    Server VIP address | 10.94.127.128/26 | 10.94.127.190 | | Publicly available applications in the VDC
    LAN client VM and Traffic Generator address | 10.94.127.192/27 | 10.94.127.222 | | Address space for VMs on the external network used to simulate client traffic
    SP1 address | 10.94.127.224/28 | | | Address space further subdivided for point-to-point links inside the SP1 cloud; no gateway
    SP2 address | 10.94.127.240/28 | | | Address space further subdivided for point-to-point links inside the SP2 cloud; no gateway

    Applications tested as part of the solution were configured with address space shown in Table 4:

    Table 4: Applications Tested in the MetaFabric 1.0 Solution

    Application | External address | Internal address | VLAN-ID | Gateway
    SA | 10.94.127.33 | 10.94.63.24 | 810 | OOB-MGMT
    Exchange | 10.94.127.181 | 172.16.4.10-12 | 104 | POD1-SW1
    SP | 10.94.127.180 | 172.16.2.11-14 | 102 | POD2-SW1
    WM | 10.94.127.182 | 172.16.3.11 | 103 | POD1-SW1

    Multi-chassis LAG is used in the solution between core and aggregation or access to enable always-up, loop-free, and load-balanced traffic between the switching roles in the data center (Figure 1).

    Figure 1: MC-LAG Active/Active Logical Topology


    Table 5 shows the parameters used to configure MC-LAG between the VDC-core-sw1 and edge-r1 nodes. These settings are used throughout the configuration and are aggregated here; a configuration sketch follows the table.

    Table 5: MC-LAG Configuration Parameters

    MC-LAG Node | MC-LAG client | Interface | mc-ae-id | LACP-id | IRB interface | prefer-status | chassis-id
    VDC-edge-r2 | VDC-edge-fw0 | ae1 | 1 | 00:00:00:00:00:01 | irb.0 | active | 1
    VDC-edge-r2 | VDC-edge-fw1 | ae3 | 2 | 00:00:00:00:00:02 | irb.0 | active | 1
    VDC-core-sw2 | VDC-pod1-sw1 | ae0 | 1 | 00:00:00:00:00:01 | irb.50 | active | 1
    VDC-core-sw2 | VDC-pod1-sw1 | ae1 | 2 | 00:00:00:00:00:02 | irb.51 | active | 1
    VDC-core-sw2 | VDC-pod1-sw1 | ae2 | 3 | 00:00:00:00:00:03 | irb.52 | active | 1
    VDC-core-sw2 | VDC-pod1-sw1 | ae3 | 4 | 00:00:00:00:00:04 | irb.53 | active | 1
    VDC-core-sw2 | VDC-pod2-sw1 | ae4 | 5 | 00:00:00:00:00:05 | irb.54 | active | 1
    VDC-core-sw2 | VDC-pod2-sw1 | ae5 | 6 | 00:00:00:00:00:06 | irb.55 | active | 1
    VDC-core-sw2 | VDC-edge-fw0 | ae6 | 7 | 00:00:00:00:00:07 | irb.10 | active |
    VDC-core-sw2 | VDC-edge-fw1 | ae7 | 8 | 00:00:00:00:00:08 | irb.10 | active | 1
    VDC-core-sw2 | VDC-oob-mgmt | ae8 | 9 | 00:00:00:00:00:09 | irb.20 | active | 1
    VDC-core-sw2 | VDC-lb1-L2-Int-standby | ae10 | 11 | 00:00:00:00:00:11 | NA | active | 1
    VDC-core-sw2 | VDC-lb1-L3-Ext-active | ae11 | 12 | 00:00:00:00:00:12 | irb.15 | active | 1
    VDC-core-sw2 | VDC-lb1-L3-Ext-standby | ae12 | 13 | 00:00:00:00:00:13 | irb.15 | active | 1
    VDC-core-sw2 | VDC-lb1-L2-Int-active | ae13 | 14 | 00:00:00:00:00:14 | NA | active | 1
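
    To show how the Table 5 parameters map to configuration, the following sketch applies the first row (ae1, mc-ae-id 1, LACP system ID 00:00:00:00:00:01) on one MC-LAG node. The member links, ICCP addressing, and redundancy-group number are assumptions and are shown only in outline.

        # Illustrative only -- member links, ICCP addresses, and redundancy group are assumptions
        # MC-LAG parameters from the first row of Table 5
        set interfaces ae1 aggregated-ether-options lacp active
        set interfaces ae1 aggregated-ether-options lacp system-id 00:00:00:00:00:01
        set interfaces ae1 aggregated-ether-options mc-ae mc-ae-id 1
        set interfaces ae1 aggregated-ether-options mc-ae chassis-id 1
        set interfaces ae1 aggregated-ether-options mc-ae mode active-active
        set interfaces ae1 aggregated-ether-options mc-ae status-control active
        set interfaces ae1 aggregated-ether-options mc-ae redundancy-group 1
        # ICCP session to the MC-LAG peer (addresses are placeholders)
        set protocols iccp local-ip-addr 192.168.0.1
        set protocols iccp peer 192.168.0.2 redundancy-group-id-list 1
        set protocols iccp peer 192.168.0.2 liveness-detection minimum-interval 1000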

    The physical and logical configuration of the core-to-POD roles in the data center is shown in Figure 2. The connectivity between these layers features 24-link AE bundles (four per POD, for a total of 96 AE member interfaces between each POD and the core). The local topology within each data center role is detailed in later sections of this guide.

    Figure 2: Topology of Core-to-POD Roles in the Data Center


    The configuration of integrated routing and bridging (IRB) interfaces within this segment of the data center is outlined in Table 6.

    Table 6: IRB, IP Address Mapping

    IRB interface | IRB IP address | MC-LAG client | Client interface | Client IP address | Transport VLAN | VRRP IP | Description
    irb.0 | 192.168.26.1 | VDC-edge-fw0 | reth1 | 192.168.26.3 | 11 | 192.168.26.254 | Untrust-Edge-fw
    irb.0 | 192.168.26.1 | VDC-edge-fw1 | reth1 | 192.168.26.3 | 11 | | Untrust-Edge-fw
    irb.50 | 192.168.50.1 | VDC-pod1-sw1 | nw-ng-0:ae0 | 192.168.50.3 | 50 | 192.168.50.254 | POD1-uplink-1
    irb.51 | 192.168.51.1 | VDC-pod1-sw1 | nw-ng-0:ae1 | 192.168.51.3 | 51 | 192.168.51.254 | POD1-uplink-2
    irb.52 | 192.168.52.1 | VDC-pod1-sw1 | nw-ng-0:ae2 | 192.168.52.3 | 52 | 192.168.52.254 | POD1-uplink-3
    irb.53 | 192.168.53.1 | VDC-pod1-sw1 | nw-ng-0:ae3 | 192.168.53.3 | 53 | 192.168.53.254 | POD1-uplink-4
    irb.54 | 192.168.54.1 | VDC-pod2-sw1 | ae0 | 192.168.54.3 | 54 | 192.168.54.254 | POD2-uplink-1
    irb.55 | 192.168.55.1 | VDC-pod2-sw1 | ae1 | 192.168.55.3 | 55 | 192.168.55.254 | POD2-uplink-2
    irb.10 | 192.168.25.1 | VDC-edge-fw0 | reth0 | 192.168.25.3 | 10 | 192.168.25.254 | Trust-Edge-fw
    irb.10 | 192.168.25.1 | VDC-edge-fw1 | reth0 | 192.168.25.3 | 10 | 192.168.25.254 | Trust-Edge-fw
    irb.20 | 192.168.20.1 | VDC-oob-mgmt | ae0 | 192.168.20.3 | 20 | 192.168.20.254 | OOB-MGMT-sw
    NA | | VDC-lb1-L2-Int-standby | core-sw | | | | Layer-2-server-facing
    irb.15 | 192.168.15.1 | VDC-lb1-L3-Ext-active | External | 192.168.15.5 | 15 | 192.168.15.254 | Layer3-External-link
    irb.15 | 192.168.15.1 | VDC-lb1-L3-Ext-standby | External | 192.168.15.5 | 15 | 192.168.15.254 | Layer3-External-link
    NA | | VDC-lb1-L2-Int-active | core-sw | | | | Layer-2-server-facing
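
    As a reference for how the IRB and VRRP entries in Table 6 translate into configuration, the sketch below shows the irb.50 row on one core switch. The /24 mask, VRRP group number, and priority are assumptions; the addresses come from the table.

        # Illustrative only -- mask, VRRP group number, and priority are assumptions
        set vlans vlan50 vlan-id 50
        set vlans vlan50 l3-interface irb.50
        set interfaces irb unit 50 family inet address 192.168.50.1/24 vrrp-group 50 virtual-address 192.168.50.254
        set interfaces irb unit 50 family inet address 192.168.50.1/24 vrrp-group 50 priority 200
        set interfaces irb unit 50 family inet address 192.168.50.1/24 vrrp-group 50 accept-data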

    Published: 2015-04-20