
Example: Configuring the Enterprise Data Center Solution

 

This example describes how to build the Enterprise Data Center solution.

This example is intended as a reference architecture that has been validated by Juniper Networks. The example presents one method of configuring each feature, with explanatory text and pointers to additional information sources that provide greater detail when your Enterprise data center network has requirements that differ from the reference architecture in this example.

Requirements

Table 1 lists the hardware and software components used in this example.

Table 1: Solution Hardware and Software Requirements

Device

Hardware

Software

Core Router

MX480 routers*

Junos OS Release 13.2R1 or later

Aggregation devices

QFX10002-72Q switches**

Junos OS Release 17.2R1 or later

Satellite devices

QFX5100 switches***

EX4300 switches***

Satellite software version 3.0R1

Satellite devices must be running Junos OS Release 14.1X53-D43 or later before conversion into a satellite device.

* An MX480 router is used in this solution because of its ability to scale in an Enterprise data center. The device to use in the core layer depends largely on the bandwidth requirements and feature support needs of each individual data center. See MX960, MX480, MX240, MX104 and MX80 3D Universal Edge Routers Data Sheet or QFX10000 Modular Ethernet Switches Data Sheet for information on other devices that are commonly deployed at the core layer in Enterprise data center networks.

** QFX10002-72Q switches are needed to implement the solution in this reference architecture because the switch has 72 40-Gbps QSFP+ interfaces and can therefore support the large number of 40-Gbps QSFP+ interfaces utilized in this topology. QFX10002-36Q switches have 36 40-Gbps QSFP+ interfaces and can be used as aggregation devices to implement this solution in smaller environments that require fewer satellite devices or network-facing interfaces, or in environments that conserve 40-Gbps QSFP+ interfaces by using breakout cables to create multiple 10-Gbps SFP+ cascade port interfaces.

*** Any EX4300 or QFX5100 switch that can be converted into a satellite device for a Junos Fusion Data Center when the aggregation device is running Junos OS Release 17.2R1 or later can be used as a satellite device in this topology. The following switches can be converted into satellite devices: QFX5100-24Q-2P, QFX5100-48S-6Q, QFX5100-48T-6Q, QFX5100-96S-8Q, EX4300-24T, EX4300-32F, EX4300-48T, and EX4300-48T-BF. Any combination of these switches can be used as satellite devices, with the restriction that a Junos Fusion Data Center supports up to 64 total satellite devices.

See Understanding Junos Fusion Data Center Software and Hardware Requirements for information on supported satellite devices in a Junos Fusion Data Center.

Overview and Topology

The topology used in this example consists of one MX480 3D Universal Edge Router, two QFX10002-72Q switches acting as aggregation devices, and sixty-four QFX5100 and EX4300 switches acting as satellite devices. The topology is shown in Figure 1.

Figure 1: Enterprise Data Center Solution Topology

Core Layer: MX480 Universal Edge 3D Router Interfaces Summary

The MX480 Universal Edge 3D Router connects to each QFX10002-72Q switch using an aggregated Ethernet interface—ae100 to connect to aggregation device 1 and ae101 to connect to aggregation device 2—that contains six 40-Gbps QSFP member interfaces.

Table 2 summarizes the aggregated Ethernet interfaces on the MX480 router.

Table 2: MX480 Router Interfaces Summary

Aggregated Ethernet Interface

Member Interfaces

IP Address

Purpose

ae100

et-3/2/0

et-3/2/1

et-3/2/2

et-4/2/0

et-4/2/1

et-4/2/2

10.0.1.1/24

Connects the MX480 router to the QFX10002-72Q switch acting as aggregation device 1.

 

ae101

et-3/3/0

et-3/3/1

et-3/3/2

et-4/3/0

et-4/3/1

et-4/3/2

10.0.2.1/24

Connects the MX480 router to the QFX10002-72Q switch acting as aggregation device 2.

Aggregation Layer: QFX10002-72Q Switches Interfaces Summary

A QFX10002-72Q switch has seventy-two 40-Gbps interfaces. A 40-Gbps interface on a QFX10002-72Q switch can be converted into four 10-Gbps interfaces using a breakout cable.

Both QFX10002-72Q switches in the Enterprise Data Center solution are cabled identically, with the first sixty-four 40-Gbps interfaces—et-0/0/0 through et-0/0/63—connected as cascade ports to the EX4300 and QFX5100 switches acting as satellite devices. Cascade ports are ports on aggregation devices that connect to satellite devices in a Junos Fusion topology.

The next two 40-Gbps interfaces on the front panel of the QFX10002-72Q switches—et-0/0/64 and et-0/0/65—are configured into an aggregated Ethernet interface that functions as the ICL between aggregation devices. A Junos Fusion Data Center with dual aggregation devices is built using an MC-LAG topology, and therefore must have an interchassis link (ICL) to pass data traffic between peers while also supporting the Inter-Chassis Control Protocol (ICCP) to send and receive control traffic. The ICL carries data traffic between aggregation devices in this topology. ICCP control traffic, which is used to send control information between devices in an MC-LAG topology, has its own link in some MC-LAG topologies but is sent over the ICL in the Enterprise Data Center topology, thereby preserving a 40-Gbps interface for other networking purposes.

The remaining interfaces on each aggregation device—et-0/0/66, et-0/0/67, et-0/0/68, et-0/0/69, et-0/0/70, and et-0/0/71—are aggregated into a single aggregated Ethernet interface and are used as uplink interfaces to connect the QFX10002-72Q switches to the MX480 router at the core layer.

Figure 2 summarizes the role of each interface on the QFX10002-72Q switches in this solution topology.

Figure 2: QFX10002-72Q Interfaces Summary

Table 3 summarizes the purpose of each interface on the QFX10002-72Q switch in this solution topology.

Table 3: QFX10002-72Q Switches Interfaces Summary

Interface Numbers

Interface Type

Purpose

et-0/0/0 through et-0/0/63

Cascade ports

Connects the QFX10002-72Q aggregation device switches to QFX5100 or EX4300 satellite device switches.

et-0/0/64 and et-0/0/65

Interchassis link (ICL)

Connects the QFX10002-72Q aggregation device switches together and passes data traffic between them.

ae999

et-0/0/64 and et-0/0/65 are the member interfaces in aggregated Ethernet interface ae999.

et-0/0/66 through et-0/0/71

Network ports

Connects the QFX10002-72Q aggregation device switches to the MX480 router.

ae100

et-0/0/66, et-0/0/67, et-0/0/68, et-0/0/69, et-0/0/70, and et-0/0/71 are the member interfaces in aggregated Ethernet interface ae100.

Access Layer: FPC ID Numbering and Cascade Port Summary

The access layer in this topology is the QFX5100 and EX4300 switches configured into satellite devices. The access layer devices are responsible for providing the access interfaces that connect endpoint devices to the network.

Each satellite device in a Junos Fusion Data Center is assigned an FPC ID number. FPC ID numbers are used to identify satellite devices within a Junos Fusion.

A cascade port in a Junos Fusion is a port on the aggregation device—in the Enterprise Data Center solution, the aggregation devices are the QFX10002-72Q switches—that connects to a satellite device. Cascade ports forward and receive traffic to and from the satellite devices.

See the Assigning Cascade Ports to FPC ID Numbers and Creating Satellite Device Aliases section for additional information on FPC ID numbers and cascade ports.

Table 4 provides a summary of each satellite device’s hardware model, FPC ID number, alias name, and associated cascade port.

Table 4: Satellite Device and Cascade Port Summary

FPC ID Number

Hardware Model

Alias Names

Cascade Port Interface (on QFX10002-72Q Switch Aggregation Devices)****

100-139

QFX5100

qfx5100-sd100 through qfx5100-sd139

et-0/0/0 through et-0/0/39

One 40-Gbps cascade port interface is connected to each QFX5100 switch operating as a satellite device.

140-156

EX4300

ex4300-sd140 through ex4300-sd156

et-0/0/40 through et-0/0/56

One 40-Gbps cascade port interface is connected to each EX4300 switch operating as a satellite device.

157-160

EX4300

ex4300-sd157 through ex4300-sd160

et-0/0/57:0 through et-0/0/57:3

et-0/0/57 is converted from one 40-Gbps interface into four 10-Gbps channelized interfaces using a breakout cable.

One 10-Gbps cascade port interface is connected to each EX4300 switch operating as a satellite device.

161-163

EX4300

ex4300-sd161 through ex4300-sd163

et-0/0/58 through et-0/0/63

Two 40-Gbps cascade port interfaces are connected to each EX4300 switch operating as a satellite device.

**** The two QFX10002-72Q switches in this topology have identical cascade port interface configurations. The port numbers are, therefore, applied identically on each QFX10002-72Q switch.
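
The 10-Gbps cascade ports used for FPC IDs 157 through 160 require channelizing interface et-0/0/57 into four 10-Gbps interfaces. The following command is a minimal sketch of that conversion on a QFX10002-72Q switch and is shown for illustration only; confirm breakout support and cabling for the port in your environment before applying it.

  Aggregation device 1 or 2:

  set chassis fpc 0 pic 0 port 57 channel-speed 10g

After the configuration is committed, the channelized interfaces appear as et-0/0/57:0 through et-0/0/57:3.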

Access Devices: Link Aggregation Groups Overview

The access ports on the satellite devices in the Enterprise Data Center solution topology can be used to connect any endpoint devices to the data center. Access ports on satellite devices in a Junos Fusion are also called extended ports.

Endpoint devices can be single-homed to a single extended port or multi-homed to multiple extended ports.

To maximize fault tolerance and increase high availability, it is often advisable to multi-home an endpoint device to two or more extended ports on different satellite devices to ensure traffic flow continues when a single satellite device fails. The multi-homed links can be configured into an aggregated Ethernet interface to better manage traffic flows and simplify network manageability.

Figure 3 illustrates six servers using multi-homed links to extended ports on different satellite devices in this topology. Each server is multi-homed to two satellite devices using member links that are part of the same aggregated Ethernet interface.

Figure 3: Aggregated Ethernet Interfaces

Table 5: Aggregated Ethernet Access Interface Summary

Aggregated Ethernet Interface Name

Member Interfaces

VLANs

ae1

ge-101/0/22

ge-102/0/22

100

ae2

ge-101/0/23

ge-102/0/23

100

ae3

ge-101/0/24

ge-102/0/24

100

ae4

ge-103/0/22

ge-104/0/22

200

ae5

ge-103/0/23

ge-104/0/23

200

ae6

ge-103/0/24

ge-104/0/24

200

A typical data center deployment often utilizes numerous aggregated Ethernet interfaces to connect endpoint devices to the network. This solution guide limits the topology to six aggregated Ethernet interfaces to keep the configuration procedure focused.

See the Enabling an Aggregated Ethernet Interface For Access Interfaces section of this Solutions Guide to configure aggregated Ethernet interfaces for endpoint devices in this topology.

IP Addressing Summary

Table 6 summarizes the IP addresses used in this topology.

Table 6: IP Addressing Summary

Interface

IP Address

Purpose

MX480 Core Router

ae100

10.0.1.1/24

Aggregated Ethernet interface to AD1

ae101

10.0.2.1/24

Aggregated Ethernet interface to AD2

lo0.10

192.168.100.5

Loopback interface used in OSPF and PIM configuration

QFX10002-72Q Switch (Aggregation Device 1)

ae100

10.0.1.100/24

Aggregated Ethernet interface to R1

em1

192.168.255.40

Management port also used to ping between aggregation devices.

irb.100

10.1.1.1/24

IRB interface associated with VLAN 100

irb.200

10.2.2.1/24

IRB interface associated with VLAN 200

ae999.32769

10.0.0.1/30

IP address created by automatic ICCP provisioning and used by ICCP and BFD over the ICL.

lo0.10

192.168.100.1

Loopback interface used in OSPF and PIM configuration

QFX10002-72Q Switch (Aggregation Device 2)

ae100

10.0.2.100/24

Aggregated Ethernet interface to R1

em1

192.168.255.41

Management port also used to ping between aggregation devices.

irb.100

10.1.1.1/24

IRB interface associated with VLAN 100

irb.200

10.2.2.1/24

IRB interface associated with VLAN 200

ae999.32769

10.0.0.2/30

IP address created by automatic ICCP provisioning and used by ICCP and BFD over the ICL.

lo0.10

192.168.100.2

Loopback interface used in OSPF and PIM configuration

Virtual Routing Instances Summary

The Enterprise Data Center solution topology uses virtual routing instances to enable EBGP, OSPF, DHCP Relay, and PIM. Virtual routing instances allow each device in the topology to support multiple routing tables. The separate routing tables allow the topology to completely isolate traffic into separate “virtual” networks with their own routing tables, protocols, and other requirements. This traffic isolation can serve many purposes in a data center network, including isolation between customer networks in a multi-tenant data center or isolation of traffic for different users or applications in a single-tenant Enterprise data center network.

Multiple routing instances are configured in this topology to separate the EBGP and OSPF configurations. EBGP and OSPF are not typically run simultaneously in an Enterprise data center topology because of the overhead of maintaining two routing protocols, although the configuration is possible. The two virtual routing instances are enabled on the same interfaces—ae100 and ae101 on the MX480 router and ae100 on both QFX10002-72Q switches—in this topology. A single device interface, however, can support only one virtual routing instance, so you cannot implement both routing instances as shown in your network. In your deployment, create one virtual routing instance that includes the combination of OSPF, EBGP, DHCP Relay, and PIM-SM that is appropriate for your network.

Table 7 summarizes the virtual routing instances in the Enterprise Data Center solution topology.

Note

Table 7 is provided to show which features are included in the virtual routing instances in this reference architecture only. You can configure OSPF, EBGP, DHCP Relay, and PIM-SM in any routing instance that requires the functionality. In your deployment, create one virtual routing instance that includes the combination of OSPF, EBGP, DHCP Relay, and PIM-SM that is appropriate for your network.

Table 7: Virtual Routing Instances Summary

Virtual Routing Instance Name

Participating Devices

OSPF

EBGP

DHCP Relay

PIM Sparse Mode

vr-10

MX480

QFX10002 (AD1)

QFX10002 (AD2)

  

vr-20

MX480

QFX10002 (AD1)

QFX10002 (AD2)

 

 

Configuration

This section provides the configuration steps needed to implement this solution.

It contains the following sections:

Configuring Commit Synchronization Between Aggregation Devices

Step-by-Step Procedure

Commit synchronization is used in the Enterprise Data Center solution to simplify administration tasks between aggregation devices.

The Enterprise Data Center solution uses a Junos Fusion Data Center topology that often requires matching configurations on both aggregation devices to support a feature.

Configuration synchronization simplifies administration of the Junos Fusion Data Center in this solution by allowing users to enter commands once in a configuration group and apply the configuration group to both aggregation devices rather than repeating a configuration procedure manually on each aggregation device. Configuration groups are used extensively in the Enterprise Data Center solution for this management simplicity.

The Junos Fusion Data Center setup in this solution is a multichassis link aggregation (MC-LAG) topology. For additional information on commit synchronization in an MC-LAG, see Understanding MC-LAG Configuration Synchronization.

Note

This document assumes that basic network configuration has been done for all devices in the topology, including hostname configuration, DNS setup, and basic IP configuration setup.

See Junos OS Basics Feature Guide for QFX10000 Switches if you need to set up basic network connectivity on your QFX10002-72Q switches before starting this procedure.

  1. Ensure the aggregation devices are reachable from one another:

    Aggregation device 1:

    Aggregation device 2:

    If the devices cannot ping one another, try statically mapping each device’s hostname to its management IP address and retry the ping.

    Aggregation device 1:

    Aggregation device 2:

    If the devices cannot ping one another after the IP addresses are statically mapped, see Configuring a QFX10000 or the Junos OS Basics Feature Guide for QFX10000 Switches.

  2. Enable commit synchronization:

    Aggregation device 1:

    Aggregation device 2:

  3. Configure each aggregation device so that the other aggregation device is identified as a commit peer. Enter the authentication credentials of each peer aggregation device to ensure group configurations on one aggregation device are committed to the other aggregation device.

    Warning

    The password password is used in this configuration step for illustrative purposes only. Use a more secure password in your device configuration.

    Note

    This step assumes a user with an authentication password has already been created on each QFX10002 switch acting as an aggregation device. For instructions on configuring username and password combinations, see Configuring a QFX10000.

    Aggregation device 1:

    Aggregation device 2:

  4. Enable the Network Configuration (NETCONF) protocol over SSH:

    Aggregation device 1:

    Aggregation device 2:

  5. Commit the configuration:

    Aggregation device 1:

    Aggregation device 2:

  6. (Optional) Create a configuration group for testing to ensure configuration synchronization is working:

    Aggregation Device 1:

    Aggregation Device 2:

  7. (Optional) Configure and commit a group on aggregation device 1, and confirm it is implemented on aggregation device 2:

    Aggregation device 1:

    Aggregation device 2:

    Perform the same procedure to verify configuration synchronization from aggregation device 2 to aggregation device 1, if desired.

    Delete the test configuration group on each aggregation device.

    Aggregation device 1:

    Aggregation device 2:

    Note

    All subsequent procedures in this Solutions Guide assume that commit synchronization is enabled on both QFX10002-72Q switches acting as aggregation devices, and that the aggregation devices are configured as peers in each configuration group.
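
The following commands are a minimal sketch of steps 1 through 5 of this procedure, using the management addresses from Table 6. The host names ad1 and ad2 and the user name admin are hypothetical, and the password value mirrors the illustrative password discussed in step 3; substitute the names and credentials used in your deployment.

  Aggregation device 1:

  ping 192.168.255.41
  set system static-host-mapping ad2 inet 192.168.255.41
  set system commit peers-synchronize
  set system commit peers ad2 user admin authentication password
  set system services netconf ssh
  commit

  Aggregation device 2:

  ping 192.168.255.40
  set system static-host-mapping ad1 inet 192.168.255.40
  set system commit peers-synchronize
  set system commit peers ad1 user admin authentication password
  set system services netconf ssh
  commit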


Configuring the Aggregated Ethernet Interfaces Connecting the MX480 Router to the QFX10002-72Q Switches

Step-by-Step Procedure

The bandwidth requirements for the links connecting the QFX10002-72Q switches to the MX480 router can vary widely between Enterprise data center networks, and are largely dependent on the bandwidth needs of a specific Enterprise data center network. The available hardware interfaces—in particular, the hardware interfaces available in the modular interface slots of the MX480 router—can also impact which interfaces are used to connect the QFX10002-72Q switches to the MX480 router.

The remainder of this reference architecture assumes that two 6x40GE + 24x10GE MPC5EQ MPCs are installed in the MX480 router in slots 3 and 4.

The MX480 core router in the Enterprise Data Center solution uses two six-member aggregated Ethernet interfaces—one that connects to the QFX10002-72Q switch acting as aggregation device 1 and another that connects to the QFX10002-72Q switch acting as aggregation device 2—to provide a path for Layer 3 traffic in the topology. Each aggregated Ethernet interface contains six 40-Gbps QSFP+ member links, providing 240 Gbps of total throughput.

Another usable option for this uplink connection would be to configure interfaces et-0/0/67 and et-0/0/71 as 100-Gbps interfaces to provide 200-Gbps total throughput between the MX480 router and each QFX10002-72Q switch, using two 100-Gbps cables instead of six 40-Gbps cables. This option would require you to disable the other 40-Gbps interfaces—et-0/0/66, et-0/0/68, et-0/0/69, and et-0/0/70—on the QFX10002-72Q switches, however, and provides slightly less bandwidth than using all six 40-Gbps QSFP+ interfaces. For information on using 100-Gbps interfaces for a QFX10002-72Q switch, see QFX10002-72Q Port Panel .

Link Aggregation Control Protocol (LACP) is used in each aggregated Ethernet interface to provide additional functionality for LAGs, including the ability to help prevent communication failures by detecting misconfigurations within a LAG.

Figure 4 illustrates the MX480 router links to the QFX10002-72Q switches in the Enterprise Data Center solution.

Figure 4: MX480 Router to QFX10002-72Q Switch Connections

For additional information on aggregated Ethernet interfaces and LACP, see Understanding Aggregated Ethernet Interfaces and LACP and Configuring Link Aggregation.

To configure the aggregated Ethernet interfaces connecting the MX480 router to the QFX10002-72Q switches in the Enterprise Data Center topology:

  1. Create the configuration group and ensure the configuration group is applied on both aggregation devices:

    Aggregation Device 1:

    Aggregation Device 2:

    This procedure assumes commit synchronization is configured. See Configuring Commit Synchronization Between Aggregation Devices.

  2. Set the maximum number of aggregated Ethernet interfaces permitted on the switch and router.

    The aggregated Ethernet device count value is set at 1000 on the MX router and both aggregation devices to avoid any potential complications with aggregated Ethernet interface configurations in this topology. This approach can create multiple empty, unused aggregated Ethernet interfaces with globally unique MAC addresses on the aggregation device. You can simplify network administration by setting the device count to the number of aggregated Ethernet devices that you are using on your aggregation device, if desired.

    MX480 Router:

    QFX10002 Switch (Aggregation Device 1 or 2)

    Note

    A device count must be set whenever an aggregated Ethernet interface is configured. Aggregated Ethernet interfaces are configured in other procedures in this document, and the aggregated Ethernet device count is set as part of those procedures. You can skip this step if the aggregated Ethernet device count has already been set.

    Note

    The defaults for minimum links and link speed are maintained for the aggregated Ethernet interfaces configured in this solution. There is no need to change the default link speed setting or the default minimum links setting. The default minimum links setting, which can be changed by entering the set interfaces aeX aggregated-ether-options minimum-links number-of-minimum-links command, is 1.

  3. Create and name the aggregated Ethernet interfaces, and optionally assign a description to them:

    MX480 Router

    QFX10002 Switch (Aggregation Device 1 or 2)

    Note

    The QFX10002-72Q switches use the same aggregated Ethernet interfaces and names throughout this procedure, and can therefore be configured using shared groups.

    The MX480 router is not synchronizing its configuration with other devices, and is therefore configured outside of shared configuration groups.

  4. Assign interfaces to each aggregated Ethernet interface:

    MX480 Router:

    QFX10002 Switch (Aggregation Device 1 or 2):

  5. Assign an IP address for each aggregated Ethernet interface.

    Because IP addresses are local values, assign the IP address outside of the group configuration.

    MX480 Router:

    Aggregation Device 1:

    Aggregation Device 2:

  6. Enable LACP for the aggregated Ethernet interfaces and set them into active mode:

    MX480 Router:

    QFX10002 Switch (Aggregation Device 1 or 2)

  7. Set the interval at which the interfaces send LACP packets.

    The Enterprise Data Center solution sets the LACP periodic interval as fast, which sends an LACP packet every second.

    MX480 Router:

    QFX10002 Switch (Aggregation Device 1 or 2)

  8. After the aggregated Ethernet configuration is committed, confirm that the aggregated Ethernet interface is enabled, that the physical link is up, and that packets are being transmitted if traffic has been sent:

    MX480 Router:

    QFX10002 Switch (Aggregation Device 1 or 2):

  9. After committing the configuration, confirm the LACP status is Active and that the receive state is Current for each link.

    The output below provides the status for interface et-3/2/0.

    Repeat this step for each link in the aggregated Ethernet bundle.
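
Pulling the statements of this procedure together, the following is a sketch of the uplink aggregated Ethernet configuration, using the interface names and addresses from Table 2, Table 3, and Table 6. The configuration group name mx-uplink and the description strings are hypothetical, and only aggregation device 1 is shown for the switch side; aggregation device 2 uses the same shared group plus the local address 10.0.2.100/24 on ae100.

  MX480 Router:

  set chassis aggregated-devices ethernet device-count 1000
  set interfaces ae100 description "To aggregation device 1"
  set interfaces et-3/2/0 gigether-options 802.3ad ae100
  set interfaces et-3/2/1 gigether-options 802.3ad ae100
  set interfaces et-3/2/2 gigether-options 802.3ad ae100
  set interfaces et-4/2/0 gigether-options 802.3ad ae100
  set interfaces et-4/2/1 gigether-options 802.3ad ae100
  set interfaces et-4/2/2 gigether-options 802.3ad ae100
  set interfaces ae100 aggregated-ether-options lacp active
  set interfaces ae100 aggregated-ether-options lacp periodic fast
  set interfaces ae100 unit 0 family inet address 10.0.1.1/24

  Repeat the same statements for ae101 with member interfaces et-3/3/0 through et-3/3/2 and et-4/3/0 through et-4/3/2 and the address 10.0.2.1/24.

  QFX10002 Switch (Aggregation Device 1):

  set groups mx-uplink chassis aggregated-devices ethernet device-count 1000
  set groups mx-uplink interfaces ae100 description "To MX480 core router"
  set groups mx-uplink interfaces et-0/0/66 ether-options 802.3ad ae100
  set groups mx-uplink interfaces et-0/0/67 ether-options 802.3ad ae100
  set groups mx-uplink interfaces et-0/0/68 ether-options 802.3ad ae100
  set groups mx-uplink interfaces et-0/0/69 ether-options 802.3ad ae100
  set groups mx-uplink interfaces et-0/0/70 ether-options 802.3ad ae100
  set groups mx-uplink interfaces et-0/0/71 ether-options 802.3ad ae100
  set groups mx-uplink interfaces ae100 aggregated-ether-options lacp active
  set groups mx-uplink interfaces ae100 aggregated-ether-options lacp periodic fast
  set apply-groups mx-uplink
  set interfaces ae100 unit 0 family inet address 10.0.1.100/24

  Verification (either device):

  show interfaces ae100 terse
  show lacp interfaces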

Assigning Cascade Ports to FPC ID Numbers and Creating Satellite Device Aliases

Step-by-Step Procedure

This procedure provides the instructions to map cascade port interfaces to the FPC ID numbers of connected satellite devices.

In a Junos Fusion Data Center, the port on the aggregation device that connects to a satellite device is called a cascade port. All network and control traffic sent between an aggregation device and a satellite device traverses a cascade port.

Figure 5 illustrates the location of cascade ports in a Junos Fusion.

Figure 5: Cascade Ports in a Junos Fusion

The Enterprise Data Center reference topology uses the native 40-Gbps QSFP+ interfaces, as well as 10-Gbps interfaces channelized from the native 40-Gbps QSFP+ interfaces, as cascade ports. An aggregation device can use one or more cascade ports to connect to a satellite device.

An FPC ID number is an identification number assigned to each satellite device in a Junos Fusion topology. Every satellite device in a Junos Fusion topology is assigned an FPC ID number.

Each satellite device in this topology is also assigned an alias. Aliases are optional but recommended attributes that assist with satellite device identification and network management.

Figure 6 illustrates the cascade port to satellite device connections for some links in the satellite devices in the Enterprise Data Center solution topology. The figure does not include all cascade port to satellite device links for the topology for readability reasons.

Figure 6: Cascade Port to Satellite Device Connections

To assign cascade ports, FPC IDs, and satellite device aliases to the Enterprise Data Center solution topology:

  1. Create the configuration group and ensure the configuration group is applied on both aggregation devices:

    Aggregation Device 1:

    Aggregation Device 2:

    This procedure assumes commit synchronization is configured. See Configuring Commit Synchronization Between Aggregation Devices.

  2. Configure the interfaces on the QFX10002 switch acting in the aggregation device role into cascade ports. As part of this process, assign an FPC ID number and alias to each satellite device.

    Caution

    This procedure uses group configurations to simplify FPC ID and cascade port configurations because the cascade port and FPC ID configurations are identical on both aggregation devices.

    Use manual configuration to configure FPC IDs and cascade ports on each aggregation device if your aggregation devices have different cascade port and FPC ID configurations.

    • To configure 40-Gbps QSFP+ interfaces et-0/0/0 through et-0/0/56 as cascade ports to FPC IDs 100 through 156:

      Aggregation device 1 or 2:

    • To configure each of the four 10-Gbps channelized interfaces from et-0/0/57—et-0/0/57:0, et-0/0/57:1, et-0/0/57:2, and et-0/0/57:3—as cascade ports to FPC IDs 157 through 160:

      Aggregation device 1 or 2:

    • To configure two cascade ports each from interfaces et-0/0/58 through et-0/0/63 to FPC IDs 161, 162, and 163:

      Aggregation device 1 or 2:

  3. Commit the configuration.

    Because commit synchronization is enabled and this configuration is done in configuration groups, the configuration in the group is committed on aggregation device 2 as well as on aggregation device 1.
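
For reference, the statements below sketch a representative portion of the FPC ID, alias, and cascade port mapping from Table 4; the group name fpc-map is hypothetical, and only the first device in each range is shown. Repeat the pattern for every FPC ID and cascade port in the table.

  Aggregation device 1 or 2:

  set groups fpc-map chassis satellite-management fpc 100 cascade-ports et-0/0/0
  set groups fpc-map chassis satellite-management fpc 100 alias qfx5100-sd100
  set groups fpc-map chassis satellite-management fpc 140 cascade-ports et-0/0/40
  set groups fpc-map chassis satellite-management fpc 140 alias ex4300-sd140
  set groups fpc-map chassis satellite-management fpc 157 cascade-ports et-0/0/57:0
  set groups fpc-map chassis satellite-management fpc 157 alias ex4300-sd157
  set groups fpc-map chassis satellite-management fpc 161 cascade-ports et-0/0/58
  set groups fpc-map chassis satellite-management fpc 161 cascade-ports et-0/0/59
  set groups fpc-map chassis satellite-management fpc 161 alias ex4300-sd161
  set apply-groups fpc-map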

Converting Interfaces into Cascade Ports

Step-by-Step Procedure

FPC ID numbers were assigned to cascade ports in the prior procedure. However, an interface on an aggregation device must also be explicitly configured into a cascade port before it can function as a cascade port.

Follow the instructions in this section to configure interfaces into cascade ports.

For a comprehensive configuration example of this procedure that includes configuration of every cascade port configuration in the solution, see Appendix: Enterprise Data Center Solution Complete Configuration.

To configure interfaces on the aggregation device into cascade ports:

  1. Create the configuration group and ensure the configuration group is applied on both aggregation devices:

    Aggregation Device 1:

    Aggregation Device 2:

    This procedure assumes commit synchronization is configured. See Configuring Commit Synchronization Between Aggregation Devices.

  2. Configure each cascade port interface into a cascade port:
  3. Commit the configuration.

    Because commit synchronization is enabled and this configuration is done in configuration groups, the configuration in the group is committed on aggregation device 2 as well as on aggregation device 1.
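
A minimal sketch of the cascade port conversion follows; the group name cascade-ports is hypothetical, and the cascade-port statement must be repeated for every cascade port interface listed in Table 4 (only a few representative interfaces are shown).

  Aggregation device 1 or 2:

  set groups cascade-ports interfaces et-0/0/0 cascade-port
  set groups cascade-ports interfaces et-0/0/1 cascade-port
  set groups cascade-ports interfaces et-0/0/57:0 cascade-port
  set groups cascade-ports interfaces et-0/0/63 cascade-port
  set apply-groups cascade-ports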

Configuring the Aggregated Ethernet Interfaces for the Interchassis Link (ICL)

Step-by-Step Procedure

The aggregation devices in a Junos Fusion Data Center topology are MC-LAG peers. MC-LAG peers use an interchassis link (ICL), also known as the interchassis link-protection link (ICL-PL), to provide a redundant path across the MC-LAG topology when a link failure (for example, an MC-LAG trunk failure) occurs on an active link.

MC-LAG peers use the Inter-Chassis Control Protocol (ICCP) to exchange control information and coordinate with one another to ensure that data traffic is forwarded properly. ICCP traffic is also sent over the ICL in this solution topology, although some MC-LAG implementations use a separate link for ICCP traffic. Junos Fusion Data Center supports automatic ICCP provisioning, a feature that automatically provisions ICCP traffic to be sent across the ICL without user configuration. Automatic ICCP provisioning is enabled by default, so no user configuration is required to enable ICCP in this solution topology.

See Multichassis Link Aggregation Features, Terms, and Best Practices for additional information on ICLs and ICCP.

In the Enterprise Data Center topology, an aggregated Ethernet interface—ae999—with two member interfaces—et-0/0/64 and et-0/0/65—provides the ICL on each aggregation device.

Figure 7 illustrates the ICL for the Enterprise Data Center solution:

To configure the ICL:

  1. Create the configuration group and ensure the configuration group is applied on both aggregation devices:

    Aggregation Device 1:

    Aggregation Device 2:

    This procedure assumes commit synchronization is configured. See Configuring Commit Synchronization Between Aggregation Devices.

  2. On aggregation device 1, create a group for the ICL link configuration and set the aggregated Ethernet device count for the aggregation devices:

    Aggregation Device 1:

    Note

    A device count must be set whenever an aggregated Ethernet interface is configured. Aggregated Ethernet interfaces are configured in other procedures in this document, and the aggregated Ethernet device count is set as part of those procedures. You can skip this step if the aggregated Ethernet device count has already been set.

    Note

    This approach can create multiple empty, unused aggregated Ethernet interfaces with globally unique MAC addresses on the aggregation device. You can simplify network administration by setting the device count to the number of aggregated Ethernet devices that you are using on your aggregation device.

  3. Create the aggregated Ethernet interface that will function as the ICL, and optionally add a description to the interface:

    Aggregation Device 1:

  4. Add the member links to the aggregated Ethernet interface:

    Aggregation Device 1:

  5. Enable LACP for the aggregated Ethernet interface and configure the LACP packet interval:

    Aggregation Device 1:

  6. Configure the ICL aggregated Ethernet interface as a trunk interface, and configure it as a member of all VLANs:

    Aggregation Device 1:

    The ICL aggregated Ethernet interface is now configured. The aggregated Ethernet interface is converted into an ICL in the next section, as part of the procedure to configure dual aggregation device support.
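
The following statements consolidate the ICL configuration from this procedure into one sketch; the group name icl-config and the description string are hypothetical, and the interface names come from Table 3.

  Aggregation device 1:

  set groups icl-config chassis aggregated-devices ethernet device-count 1000
  set groups icl-config interfaces ae999 description "ICL to peer aggregation device"
  set groups icl-config interfaces et-0/0/64 ether-options 802.3ad ae999
  set groups icl-config interfaces et-0/0/65 ether-options 802.3ad ae999
  set groups icl-config interfaces ae999 aggregated-ether-options lacp active
  set groups icl-config interfaces ae999 aggregated-ether-options lacp periodic fast
  set groups icl-config interfaces ae999 unit 0 family ethernet-switching interface-mode trunk
  set groups icl-config interfaces ae999 unit 0 family ethernet-switching vlan members all
  set apply-groups icl-config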

Configuring Dual Aggregation Device Support

Step-by-Step Procedure

The Enterprise Data Center topology is a Junos Fusion Data Center architecture with dual aggregation devices.

A Junos Fusion Data Center architecture with dual aggregation devices is enabled by configuring all devices in the Junos Fusion Data Center topology into a redundancy group. The ICL is defined as part of the redundancy group configuration.

This procedure shows how to configure dual aggregation device support for the Enterprise Data Center solution topology:

  1. Create the configuration group and ensure the configuration group is applied on both aggregation devices:

    Aggregation Device 1:

    Aggregation Device 2:

    This procedure assumes commit synchronization is configured. See Configuring Commit Synchronization Between Aggregation Devices.

  2. (Optional unless single-home mode was previously configured on the aggregation device) Delete the single-home configuration on each QFX10002 switch to ensure single-home mode is disabled:

    Aggregation device 1:

    Aggregation device 2:

  3. Create the satellite management redundancy group.

    In a Junos Fusion Data Center topology, both aggregation devices and all satellite devices must be part of the same redundancy group.

    Aggregation device 1:

  4. Add all satellite devices to the redundancy group.
  5. Define the chassis ID number of each aggregation device. The chassis ID is a local parameter for each aggregation device, and should therefore be configured outside of a configuration group.

    Aggregation device 1:

    Aggregation device 2:

  6. Define the peer chassis ID number—the chassis ID number of the other aggregation device—and interface to use for the ICL on each aggregation device.

    The peer chassis ID number is a local parameter for each aggregation device, and should therefore be configured outside of a configuration group.

    Aggregation device 1:

    Aggregation device 2:

  7. Commit the configuration individually on each aggregation device.

    Aggregation device 1:

    Aggregation device 2:

    The portions of this configuration that were configured in groups are committed on both aggregation device 1 and aggregation device 2, since commit synchronization is enabled.

  8. Confirm that ICCP is operational between the peers:

    This step assumes that the redundancy groups and the aggregated Ethernet interface for the ICL have been configured and committed.

    ICCP is automatically provisioned in this topology, since the automatic ICCP provisioning feature is automatically enabled in dual aggregation device topologies by default and is not altered in this configuration procedure.

    Aggregation device 1:

    Aggregation device 2:
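
For reference, the statements below sketch the dual aggregation device configuration from this procedure. The redundancy group name rg1 and the shared group name dual-ad are hypothetical; the chassis ID and peer chassis ID statements are local to each aggregation device and are shown outside the group, and the satellite all statement assumes all satellite devices belong to the redundancy group as described in step 3.

  If single-home mode was previously configured, remove it first:

  delete chassis satellite-management single-home

  Shared configuration group (aggregation device 1 or 2):

  set groups dual-ad chassis satellite-management redundancy-groups rg1 redundancy-group-id 1
  set groups dual-ad chassis satellite-management redundancy-groups rg1 satellite all
  set apply-groups dual-ad

  Aggregation device 1 (local configuration):

  set chassis satellite-management redundancy-groups chassis-id 1
  set chassis satellite-management redundancy-groups rg1 peer-chassis-id 2 inter-chassis-link ae999
  commit

  Aggregation device 2 (local configuration):

  set chassis satellite-management redundancy-groups chassis-id 2
  set chassis satellite-management redundancy-groups rg1 peer-chassis-id 1 inter-chassis-link ae999
  commit

  Verification (either device):

  show iccp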

Configuring Bidirectional Forwarding Detection (BFD) over the ICL

Step-by-Step Procedure

The Bidirectional Forwarding Detection (BFD) protocol is a simple hello mechanism that can quickly detect a link failure in a network. BFD hello packets are sent at a specified, regular interval. A neighbor failure is detected when a device doesn’t receive a reply to a BFD hello message within a specified interval.

In the Enterprise Data Center topology, BFD is used to provide link failure detection for the ICL. BFD sends hello packets between the aggregation devices over the ICL connecting the aggregation devices.

To configure BFD over the ICL for the Enterprise Data Center solution:

  1. Configure the BFD liveness detection parameters on each aggregation device.

    We recommend configuring minimum intervals of 2000 milliseconds to ensure stability in the MC-LAG configuration.

    Aggregation device 1:

    Aggregation device 2:

  2. After committing the configuration, verify that BFD state to the peer aggregation device is operational:

    Aggregation device 1:

    Aggregation device 2:
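
The BFD statements below are a sketch only, based on the standard MC-LAG ICCP liveness-detection hierarchy; they assume the automatically provisioned ICCP peer addresses from Table 6 (10.0.0.1/30 and 10.0.0.2/30) and the recommended 2000-millisecond minimum interval. Verify the hierarchy supported by your Junos OS release before committing.

  Aggregation device 1:

  set protocols iccp peer 10.0.0.2 liveness-detection minimum-interval 2000

  Aggregation device 2:

  set protocols iccp peer 10.0.0.1 liveness-detection minimum-interval 2000

  Verification (either device):

  show bfd session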

Enabling Automatic Satellite Device Conversion

Step-by-Step Procedure

Automatic satellite device conversion automatically converts a switch running Junos OS into a satellite device when it is cabled to a cascade port, provided all other configuration prerequisites are met: the switch is a model that can be converted into a satellite device and is running a version of Junos OS that supports conversion, cascade ports and FPC ID numbering are configured and enabled, and satellite software upgrade groups are created so the satellite device can retrieve satellite software. The steps for creating satellite software upgrade groups are provided in the next section of this guide; all of the other prerequisite steps were done in earlier sections of this guide.

Although other methods of converting a switch into a satellite device exist, this solution uses automatic satellite conversion exclusively to convert switches running Junos OS into satellite devices.

To enable automatic satellite device conversion for all satellite devices connected to a cascade port on an aggregation device:

  1. Create the configuration group and ensure the configuration group is applied on both aggregation devices:

    Aggregation Device 1:

    Aggregation Device 2:

    This procedure assumes commit synchronization is configured. See Configuring Commit Synchronization Between Aggregation Devices.

  2. Enable automatic satellite conversion:

    Automatic satellite conversion is recommended for this solution, but other satellite device conversion methods exist. See Configuring or Expanding a Junos Fusion Data Center.
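
A minimal sketch of enabling automatic satellite device conversion for the FPC IDs used in this topology (100 through 163, from Table 4) follows; the group name auto-convert is hypothetical, and the FPC ID range syntax is an assumption—you can also list the FPC IDs individually.

  Aggregation device 1 or 2:

  set groups auto-convert chassis satellite-management auto-satellite-conversion satellite 100-163
  set apply-groups auto-convert
  commit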

Installing and Managing the Satellite Software

Step-by-Step Procedure

Satellite devices in a Junos Fusion Data Center run satellite software.

Satellite software upgrade groups must be created on the aggregation devices to manage satellite software installations. The topology in this solution uses two satellite software upgrade groups. One satellite software upgrade group is used to install satellite software onto all EX4300 switches acting as satellite devices; the other is used to install software onto all QFX5100 switches acting as satellite devices.

The same version of satellite software—satellite software version 3.0R1—runs on EX4300 and QFX5100 switches acting as satellite devices in this topology. Both satellite software upgrade groups use the same software package to upgrade satellite software.

For a comprehensive configuration example of this procedure that includes all satellite software upgrade group configuration commands, see Appendix: Enterprise Data Center Solution Complete Configuration.

To install and manage the satellite software:

  1. Create the configuration group and ensure the configuration group is applied on both aggregation devices:

    Aggregation Device 1:

    Aggregation Device 2:

    This procedure assumes commit synchronization is configured. See Configuring Commit Synchronization Between Aggregation Devices.

  2. Copy the satellite software 3.0R1 image onto each QFX10002 switch acting as an aggregation device.

    File copying options are beyond the scope of this solutions guide. See Upgrading Software.

    These instructions assume a satellite software image has been copied to the /var/tmp directory on each aggregation device.

  3. Create the satellite software upgrade groups and associate the FPC IDs with the groups:

    Aggregation Device 1:

  4. Commit the configuration:
  5. On each aggregation device, associate a satellite software image with each satellite software upgrade group:

    Aggregation Device 1:

    Aggregation Device 2:

    The satellite software upgrade starts at this point of the procedure. The satellite software upgrade can take several minutes per satellite device and is throttled, so satellite devices restart operations at different intervals.

    The satellite software upgrade group configurations can be verified later in this process, once the satellite devices are operational.
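
The statements below sketch the upgrade group configuration and software association; the upgrade group names qfx5100-sdg and ex4300-sdg and the package file name are hypothetical, and the FPC ID ranges come from Table 4. Substitute the actual satellite software package name you copied to /var/tmp on each aggregation device.

  Aggregation device 1 (shared configuration group):

  set groups sat-sw chassis satellite-management upgrade-groups qfx5100-sdg satellite 100-139
  set groups sat-sw chassis satellite-management upgrade-groups ex4300-sdg satellite 140-163
  set apply-groups sat-sw
  commit

  Aggregation device 1 and aggregation device 2 (operational mode, run on each device):

  request system software add /var/tmp/satellite-3.0R1.tgz upgrade-group qfx5100-sdg
  request system software add /var/tmp/satellite-3.0R1.tgz upgrade-group ex4300-sdg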

Preparing the Satellite Devices

Step-by-Step Procedure

To prepare the switches to become satellite devices, perform the following steps:

Note

These instructions assume each switch is already running Junos OS Release 14.1X53-D43 or later. See Installing Software on an EX Series Switch with a Single Routing Engine (CLI Procedure) for instructions on upgrading Junos OS software.

  1. Log into each switch’s console port, and zeroize it.

    Note

    Perform this procedure from the console port connection. A management connection will be lost when the switch is rebooted to complete the zeroizing procedure.

  2. (EX4300 switches only) After the switches reboot, convert the built-in 40-Gbps interfaces with QSFP+ transceivers from Virtual Chassis ports (VCPs) into network ports:

    The following sample output shows how to perform this procedure on each EX4300 switch acting as a satellite device.

    This step has to be performed on EX4300 switches only, since built-in 40-Gbps interfaces on EX4300 switches are set as Virtual Chassis ports (VCPs) by default. A Virtual Chassis port (VCP) cannot be converted into an uplink port on a satellite device in a Junos Fusion.

    This step is skipped for QFX5100 switches because the built-in 40-Gbps interfaces on QFX5100 switches are not configured into VCPs by default.

  3. Cable each switch into the Junos Fusion, if you haven’t already done so.

    Because automatic satellite conversion is enabled and the satellite software upgrade groups have been configured, the satellite software installation process starts for each satellite device when it is cabled to the aggregation device.

    Note

    If the satellite software installation does not begin, log onto the aggregation devices and ensure the configurations added in previous steps have been committed.

    The installation can take several minutes.

  4. Verify that the satellite software installation was successful:

    Note

    The show chassis satellite software command generates output only after the satellite software upgrades are complete. If you enter the show chassis satellite software command and no output is generated, consider re-entering the command in a few minutes.

    Aggregation device 1:

    Aggregation device 2:

  5. Confirm the satellite software upgrade group configurations:
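
For reference, the commands below sketch this preparation procedure. The zeroize and vc-port commands run on the satellite switches themselves; the vc-port deletions assume the four built-in QSFP+ ports on an EX4300 are on PIC 1, ports 0 through 3.

  Each switch (from the console):

  request system zeroize

  Each EX4300 switch only (after the reboot):

  request virtual-chassis vc-port delete pic-slot 1 port 0
  request virtual-chassis vc-port delete pic-slot 1 port 1
  request virtual-chassis vc-port delete pic-slot 1 port 2
  request virtual-chassis vc-port delete pic-slot 1 port 3

  Aggregation device 1 or 2 (after the satellite software installations complete):

  show chassis satellite software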

Verifying that the Junos Fusion Data Center is Operational

Purpose

Verify that the aggregation device recognizes all satellite devices, and that all satellite devices and cascade ports are online.

Action

Enter the show chassis satellite command:

The Alias and Slot fields list all satellite devices, and the Device State field confirms that each satellite device is online. These fields confirm that the satellite devices are recognized and operational.

The Cascade Ports field confirms the cascade port configuration on the aggregation device, and the Port State field confirms that the cascade ports are online. The output also lists the ICL interface as a backup port, since cascade port traffic may flow over the ICL if a cascade port link to a single aggregation device fails.

Configuring Uplink Port Pinning

Step-by-Step Procedure

Uplink port pinning is used to ensure all upstream traffic from a specified extended port on a satellite device is transported to the aggregation device over a specified uplink port.

When uplink port pinning is not configured on an extended port in a Junos Fusion, all traffic from the extended port is load balanced across all uplink interfaces when it is transported to the aggregation devices.

Uplink port pinning is useful in cases where you want to better manage upstream traffic to the aggregation devices. For instance, uplink port pinning can help in scenarios where the default load balancing of upstream traffic under-utilizes one of the upstream links by letting you direct all traffic from an extended port or ports to the under-utilized link. Uplink port pinning is also useful if you want to isolate traffic from an extended port or ports so that those traffic flows always take the same path to the aggregation device.

In the Enterprise Data Center solution, uplink port pinning is enabled for extended port interfaces ge-162/0/47 and ge-163/0/47—port 47 on FPC ID 162 and FPC ID 163—to ensure all traffic received on these extended ports is transported to the aggregation device over uplink port 1/0 on their satellite devices.

Figure 8 illustrates traffic flow in the Enterprise Data Center solution before and after uplink port pinning is enabled.

See Configuring Uplink Port Pinning for Satellite Devices on a Junos Fusion Data Center for additional information on uplink port pinning.

To configure uplink port pinning:

  1. Create the configuration group and ensure the configuration group is applied on both aggregation devices:

    Aggregation Device 1:

    Aggregation Device 2:

    This procedure assumes commit synchronization is configured. See Configuring Commit Synchronization Between Aggregation Devices.

  2. Create a port group alias in a satellite policy to define the extended port on the satellite device whose traffic will be pinned to an uplink port:
  3. Create a port group alias in a satellite policy to define the uplink port on the satellite device that is pinned to the extended port:
  4. Create a forwarding policy that groups the port group alias definitions into a single policy.
  5. Associate the forwarding policy with the FPC ID numbers of the satellite devices.
  6. After committing the configuration, enter the show chassis satellite detail fpc-slot fpc-slot-id-number detail command to verify uplink port pinning operation.

    In the output below, uplink port pinning operation is confirmed for the satellite device using FPC slot 162.

    A configuration with uplink port pinning must be committed before this output is visible. No uplink port pinning information appears in the show chassis satellite detail fpc-slot fpc-slot-id-number detail command output when uplink port pinning is not enabled.

    You can repeat this procedure to enable uplink port pinning on other satellite devices in your network, per your networking requirements.
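
The statements below are a sketch of the satellite policy described in steps 2 through 5, under the assumption that the port-group-alias and forwarding-policy statements are configured at the [edit policy-options satellite-policies] hierarchy; the alias and policy names are hypothetical, and the PIC and port numbers reflect extended port 47 and uplink port 1/0 from the description above. Confirm the exact hierarchy for your Junos OS release.

  Aggregation device 1 or 2:

  set groups port-pinning policy-options satellite-policies port-group-alias pinned-access pic 0 port 47
  set groups port-pinning policy-options satellite-policies port-group-alias pinned-uplink pic 1 port 0
  set groups port-pinning policy-options satellite-policies forwarding-policy pin-to-uplink port-group-extended pinned-access port-group-uplink pinned-uplink
  set groups port-pinning chassis satellite-management fpc 162 forwarding-policy pin-to-uplink
  set groups port-pinning chassis satellite-management fpc 163 forwarding-policy pin-to-uplink
  set apply-groups port-pinning
  commit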

Enabling Uplink Failure Detection

Step-by-Step Procedure

The uplink failure detection feature (UFD) on a Junos Fusion enables satellite devices to detect link failures on the uplink interfaces used to connect to aggregation devices. When UFD detects that all uplink interfaces on a satellite device are down, all of the satellite device’s extended ports (which connect to host devices) are shut down. Shutting down the extended ports allows downstream host devices to more quickly identify and adapt to the outage. For example, when a host device is connected to two satellite devices and UFD shuts down the extended ports on one satellite device, the host device can more quickly recognize the uplink failure and redirect traffic through the other, active satellite device.

In the Enterprise Data Center solution, UFD is enabled for all satellite device uplink interfaces in the Junos Fusion Data Center topology.

For more information on UFD in a Junos Fusion, see Overview of Uplink Failure Detection on a Junos Fusion.

For information on other methods and options for configuring UFD in a Junos Fusion, see Configuring Uplink Failure Detection on a Junos Fusion.

To configure UFD for all uplink ports in the Junos Fusion Data Center topology:

  1. Create the configuration group and ensure the configuration group is applied on both aggregation devices:

    Aggregation Device 1:

    Aggregation Device 2:

    This procedure assumes commit synchronization is configured. See Configuring Commit Synchronization Between Aggregation Devices.

  2. Enable UFD with the default settings. By default, UFD is applied to all satellite device uplink ports.

    The default UFD settings—apply UFD for all uplink ports on all satellite devices—are maintained in this configuration. See Overview of Uplink Failure Detection on a Junos Fusion for additional information on uplink port failure detection default settings. See Configuring Uplink Failure Detection on a Junos Fusion for other UFD configuration options.

  3. After committing the configuration, enter the show chassis satellite detail fpc-slot fpc-slot-id-number command to verify UFD operation and settings.

    In the output below, UFD operation is confirmed for the satellite device using FPC slot 100.

    A configuration with UFD must be committed before this output is visible. No UFD information appears in the show chassis satellite detail fpc-slot fpc-slot-id-number command output when UFD is not enabled.

Enabling an Aggregated Ethernet Interface For Access Interfaces

Step-by-Step Procedure

This procedure shows how to create an aggregated Ethernet interface composed of access interfaces. Access interfaces are the network-facing interfaces on the EX4300 and QFX5100 switches acting as satellite devices. Access interfaces on satellite devices in a Junos Fusion Data Center are also called extended ports.

An aggregated Ethernet interface is a collection of multiple links between physical interfaces that are bundled into one logical point-to-point link. An aggregated Ethernet interface is also commonly called a link aggregation group (LAG).

An aggregated Ethernet interface balances traffic across its member links within the aggregated Ethernet bundle and effectively increases the uplink bandwidth. Aggregated Ethernet interfaces also increase high availability, because an aggregated Ethernet interface is composed of multiple member links that can continue to carry traffic when one member link fails.

Link Aggregation Control Protocol (LACP) provides additional functionality for LAGs, including the ability to help prevent communication failures by detecting misconfigurations within a LAG.

In the Enterprise Data Center solution, aggregated Ethernet interfaces are configured using extended port member links to increase uplink bandwidth and high availability. These member links can be on extended port interfaces located on different satellite devices, and often should be to ensure high availability and load balancing for traffic to and from the endpoint device. These aggregated Ethernet interfaces also are configured to use LACP for link control.

Six total aggregated Ethernet interfaces composed of extended ports—each with two member links to interfaces on different satellite devices—are used in this reference topology. These step-by-step instructions show how to configure one aggregated Ethernet interface—ae1—first before providing the instructions for configuring the remaining aggregated Ethernet interfaces.

Figure 9 illustrates the aggregated Ethernet 1 interface configuration in this topology.

Figure 9: Aggregated Ethernet Interface Example (ae1)

For additional information on aggregated Ethernet interfaces and LACP, see Understanding Aggregated Ethernet Interfaces and LACP and Configuring Link Aggregation.

To configure an aggregated Ethernet interface with extended port member links that uses LACP in the Enterprise Data Center solution topology:

  1. Create the configuration group and ensure the configuration group is applied on both aggregation devices:

    Aggregation Device 1:

    Aggregation Device 2:

    This procedure assumes commit synchronization is configured. See Configuring Commit Synchronization Between Aggregation Devices.

  2. Set the maximum number of aggregated Ethernet interfaces permitted on the aggregation device switch.
    Note

    A device count must be set whenever an aggregated Ethernet interface is configured. Aggregated Ethernet interfaces are configured in other procedures in this document, and the aggregated Ethernet device count is set as part of those procedures. You can skip this step if the aggregated Ethernet device count has already been set.

    Note

    This approach can create multiple empty, unused aggregated Ethernet interfaces with globally unique MAC addresses on the aggregation device. You can simplify network administration by setting the device count to the number of aggregated Ethernet devices that you are using on your aggregation device.

    Note

    The defaults for minimum links and link speed are maintained for the aggregated Ethernet interfaces configured in this solution. There is no need to change the default link speed setting or the default minimum links setting. The default minimum links setting, which can be changed by entering the set interfaces aeX aggregated-ether-options minimum-links number-of-minimum-links command, is 1.

  3. Create and name the aggregated Ethernet interface, and optionally assign a description to it:
  4. Assign interfaces to the aggregated Ethernet interface:
  5. Enable LACP for the aggregated Ethernet interface and set LACP into active mode:
  6. Set the interval at which the interfaces send LACP packets.

    The Enterprise Data Center solution sets the LACP periodic interval as fast, which sends an LACP packet every second.

  7. After the aggregated Ethernet configuration is committed, confirm that the aggregated Ethernet interface is enabled and that the physical link is up:
  8. After committing the configuration, confirm the LACP status is Active and that the receive state is Current for each link.

    The output below provides the status for interface ge-101/0/22.

    Repeat this step for each link in the aggregated Ethernet bundle.

  9. Repeat this procedure to configure each aggregated Ethernet interface in your implementation of the solution.

    Figure 10 illustrates all of the aggregated Ethernet access interfaces in the solution.

    Figure 10: Aggregated Ethernet Interfaces

    To configure the remaining five aggregated Ethernet interfaces:
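
As a sketch, the statements below show the ae1 bundle from steps 2 through 6 followed by the member-link assignments for ae2 through ae6 from Table 5. The group name access-ae and the description string are hypothetical, the device count of 1000 matches the value used elsewhere in this guide, and the LACP statements must be repeated for each additional bundle.

  Aggregation device 1 or 2:

  set groups access-ae chassis aggregated-devices ethernet device-count 1000
  set groups access-ae interfaces ae1 description "LAG to multi-homed endpoint server"
  set groups access-ae interfaces ge-101/0/22 ether-options 802.3ad ae1
  set groups access-ae interfaces ge-102/0/22 ether-options 802.3ad ae1
  set groups access-ae interfaces ae1 aggregated-ether-options lacp active
  set groups access-ae interfaces ae1 aggregated-ether-options lacp periodic fast
  set apply-groups access-ae

  Remaining member-link assignments:

  set groups access-ae interfaces ge-101/0/23 ether-options 802.3ad ae2
  set groups access-ae interfaces ge-102/0/23 ether-options 802.3ad ae2
  set groups access-ae interfaces ge-101/0/24 ether-options 802.3ad ae3
  set groups access-ae interfaces ge-102/0/24 ether-options 802.3ad ae3
  set groups access-ae interfaces ge-103/0/22 ether-options 802.3ad ae4
  set groups access-ae interfaces ge-104/0/22 ether-options 802.3ad ae4
  set groups access-ae interfaces ge-103/0/23 ether-options 802.3ad ae5
  set groups access-ae interfaces ge-104/0/23 ether-options 802.3ad ae5
  set groups access-ae interfaces ge-103/0/24 ether-options 802.3ad ae6
  set groups access-ae interfaces ge-104/0/24 ether-options 802.3ad ae6

  Verification:

  show interfaces ae1 terse
  show lacp interfaces ae1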

Configuring IRB Interfaces and VLANs

Step-by-Step Procedure

Traffic is isolated and segmented at Layer 2 in the Enterprise Data Center solution using VLANs. Traffic is moved between VLANs using IRB interfaces on the aggregation devices.

A VLAN is a collection of LAN nodes grouped together to form an individual broadcast domain. VLANs segment traffic on a LAN into separate broadcast domains to limit the amount of traffic flowing across the entire LAN, reducing collisions and packet retransmissions. For instance, a VLAN can include all employees in a department and the resources that they use often, such as printers, servers, and so on. See Understanding Bridging and VLANs for additional information on VLANs.

IRB interfaces have multiple uses in this data center topology. Traffic that is forwarded from one endpoint device in the Junos Fusion Data Center to another endpoint device in a different VLAN in the same Junos Fusion Data Center uses the IRB interfaces to forward the traffic between the VLANs. IRB interfaces also move upstream traffic originating from an endpoint device to the MX480 core router.

The IRB interfaces are configured on the aggregation devices in this solution topology. An advantage of configuring IRB interfaces on the aggregation devices is that inter-VLAN traffic is processed more efficiently in the Enterprise Data Center because it doesn’t have to be passed to the MX router—a process that adds an upstream and a downstream hop—for processing.

This section shows how to configure two VLANs, each with three member aggregated Ethernet interfaces that have links connecting to two satellite devices. The aggregated Ethernet interfaces were configured in the previous section. The IRB interfaces in this configuration move inter-VLAN traffic—traffic moving between VLAN 100 and VLAN 200—between the two VLANs.

Figure 11 illustrates the VLANs and IRB interface configuration used in this architecture.

Figure 11: IRB Interfaces and VLANs

For additional information on IRB interfaces, see Understanding Integrated Routing and Bridging.

To configure VLANs and IRB interfaces:

  1. Create the configuration group and ensure the configuration group is applied on both aggregation devices:

    Aggregation Device 1:

    Aggregation Device 2:

    This procedure assumes commitment synchronization is configured. See Configuring Commit Synchronization Between Aggregation Devices.

  2. Configure the extended port aggregated Ethernet interfaces into VLANs:

    Note

    The aggregated Ethernet interfaces were configured in the previous section.

  3. Create the VLANs by naming and numbering them:
  4. Create and configure the IRB interfaces, setting an IPv4 and an IPv6 address for each IRB interface:

    Note

    The IP address for an IRB interface must match on both aggregation devices in this topology. Do not assign separate IP addresses for the same IRB interface on different aggregation devices.

    Note

    The unit number for an IRB interface is arbitrary and does not have to match the VLAN ID number, although matching the two is typically recommended. We have configured the unit number to match the VLAN ID number in this topology to avoid confusion.

  5. Bind the IRB interfaces to VLANs, and enable MAC synchronization for each VLAN:
  6. After committing the configuration, confirm the VLANs are created and are associated with the correct interfaces.

    The output below confirms the interfaces that belong to vlan100:

  7. After committing the configuration, confirm that the IRB interface is processing traffic by checking the Input packets and Output packets counters.

    The output below confirms irb.100:

Configuring OSPF

Step-by-Step Procedure

OSPF is a widely adopted interior gateway protocol (IGP) that is used to route packets within a single autonomous system (AS). OSPF is a mature, industry-standard routing protocol, and the full range of OSPF options is well beyond the scope of this document. For additional information on OSPF, see OSPF Feature Guide or OSPF Feature Guide for the QFX Series.

OSPF can be adopted as the routing protocol in the Enterprise Data Center solution. It can be used to exchange traffic with devices outside the Layer 2 topology presented in this solution architecture, such as non-data center devices in the Enterprise network, devices in a different data center, or devices that need to be reached over the Internet. Because the Enterprise Data Center solution is designed for private deployments where the Enterprise installing the data center also owns the upstream devices, an IGP using one autonomous system (AS) is often appropriate for the implementation.

OSPF is one routing protocol option for the Enterprise Data Center solution; BGP is another option. In general, OSPF is more appropriate in smaller scale environments with fewer routes and less need for routing policy control. In larger scale environments with more routes and more need for routing policy control, BGP is often the more appropriate routing protocol option. An Enterprise Data Center solution can run OSPF and BGP simultaneously in large scale setups or in scenarios where an IGP and an EGP are required.

In the Enterprise Data Center solution, OSPF is configured in a virtual routing instance (vr-10). Layer 3 multicast is also configured in this virtual routing instance. The MX480 router and the two QFX10002 switches all place interfaces into the OSPF backbone area (area 0).

Note

Multiple routing instances are configured in this topology over the same interfaces. Only one virtual routing instance is supported per interface. In your deployment, create one virtual routing instance that includes the combination of OSPF, EBGP, DHCP Relay, and PIM-SM that is appropriate for your networking requirements.

Figure 12 illustrates the OSPF topology in this solution.

Figure 12: OSPF Topology

This configuration procedure shows how to enable OSPF on the devices in the Enterprise Data Center solution topology only. The purpose of OSPF is to enable connectivity to devices outside the data center, so the devices outside the data center topology must also enable OSPF support. The process for enabling OSPF on those devices is beyond the scope of this guide.
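As a point of reference before the procedure, the following sketch shows the general shape of this OSPF configuration on one aggregation device. The routing instance name (vr-10) and the backbone area come from this solution; the loopback unit, addresses, router ID, and OSPF interface are assumptions for illustration:

    set routing-instances vr-10 instance-type virtual-router
    set interfaces lo0 unit 10 family inet address 192.168.10.2/32
    set routing-instances vr-10 interface lo0.10
    set routing-instances vr-10 interface irb.100
    set routing-instances vr-10 routing-options router-id 192.168.10.2
    set routing-instances vr-10 protocols ospf area 0.0.0.0 interface lo0.10 passive
    set routing-instances vr-10 protocols ospf area 0.0.0.0 interface irb.100

After the commit, the show ospf neighbor instance vr-10 command should list each neighbor in the Full state, as verified in step 6.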

To configure OSPF for the Enterprise Data Center solution:

  1. Configure the virtual routing instance on the MX480 router and both QFX10002 switches:

    MX480 Router:

    QFX10002 Switch (Aggregation Device 1):

    QFX10002 Switch (Aggregation Device 2):

  2. Configure the IP address of the loopback interface:

    MX480 Router:

    QFX10002 Switch (Aggregation Device 1):

    QFX10002 Switch (Aggregation Device 2):

  3. Configure a loopback interface into the routing instance on each device:

    MX480 Router:

    QFX10002 Switch (Aggregation Device 1):

    QFX10002 Switch (Aggregation Device 2):

  4. Assign a router ID to each device participating in the OSPF network:

    MX480 Router:

    QFX10002 Switch (Aggregation Device 1):

    QFX10002 Switch (Aggregation Device 2):

    Note

    We recommend configuring the router ID as the IP address of the loopback interface to simplify network management. The router ID can be any value and does not have to match the loopback interface address.

  5. Configure interfaces into OSPF.

    The loopback interface is configured into OSPF as part of the procedure, and is enabled as a passive interface on each device.

    MX480 Router:

    QFX10002 Switch (Aggregation Device 1):

    QFX10002 Switch (Aggregation Device 2):

  6. After committing the configuration, verify that the OSPF state is full for all neighbor routers:

    MX480 Router:

    Many other verification commands are available for OSPF. See OSPF Feature Guide.


Configuring BGP

Step-by-Step Procedure

BGP is a widely adopted exterior gateway protocol (EGP) that is used to route packets between autonomous systems (ASs). The range of EBGP options and behaviors is well beyond the scope of this document. For additional information on BGP, see the BGP Feature Guide.

The Enterprise Data Center solution can use external BGP (EBGP) to exchange traffic with devices outside the Layer 2 topology presented in this solution architecture, such as non-data center devices in the Enterprise network, devices in a different data center, or devices that need to be reached over the Internet.

EBGP is one routing protocol option for the Enterprise Data Center solution; OSPF is another option. In general, EBGP is often the more appropriate routing protocol option in larger scale environments with more routes and more need for routing policy control. OSPF is often the more appropriate routing protocol option in smaller scale environments with fewer routes and less need for routing policy control. One routing protocol is needed in most topologies, although this Enterprise Data Center solution can run OSPF and BGP simultaneously in large scale setups or in scenarios where both an IGP and an EGP are required.

In the Enterprise Data Center solution, EBGP is configured in a virtual routing instance. The QFX10002 switches in the topology are in AS 64500 and the MX480 router is in AS 64501. The MX480 router is an EBGP peer to each QFX10002 switch.

BGP is configured in virtual routing instance 20 (vr-20) on both QFX10002 switches and the MX480 core router. In addition to BGP, DHCP Relay is also running in the virtual routing instance. DHCP Relay configuration is covered in Configuring DHCP Relay.

Note

Multiple routing instances are configured in this topology over the same interfaces. Only one virtual routing instance is supported per interface. In your deployment, create one virtual routing instance that includes the combination of OSPF, EBGP, DHCP Relay, and PIM-SM that is appropriate for your networking requirements.

Figure 13 illustrates the BGP topology in this solution.

Figure 13: EBGP Topology

This configuration procedure shows how to enable EBGP on the devices in the Enterprise Data Center solution topology only. The purpose of EBGP is to enable connectivity to devices outside the data center, so the devices outside the data center topology must also enable EBGP support. The procedure to enable EBGP on those devices is beyond the scope of this guide.
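The following sketch outlines the EBGP configuration on one aggregation device. It is an illustration rather than the validated configuration; the routing instance (vr-20), group name (ebgp-20), and AS numbers (64500 and 64501) come from this solution, while the instance interface and neighbor address are assumptions:

    set routing-instances vr-20 instance-type virtual-router
    set routing-instances vr-20 interface irb.200
    set routing-instances vr-20 protocols bgp group ebgp-20 type external
    set routing-instances vr-20 protocols bgp group ebgp-20 local-as 64500
    set routing-instances vr-20 protocols bgp group ebgp-20 peer-as 64501
    set routing-instances vr-20 protocols bgp group ebgp-20 neighbor 10.0.20.1

On the MX480 router, the local AS and peer AS values are reversed, and one neighbor statement is configured for each aggregation device.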

To configure this EBGP implementation:

  1. Obtain and download a BGP license for each QFX10002-72Q switch in this topology.

    For information about how to purchase software licenses, contact your Juniper Networks sales representative.

    To download a new license, see Adding New Licenses (CLI Procedure).

    In this topology, a license is required to run BGP on the QFX10002-72Q switches only; a license is not required to run BGP on the MX480 router. See Software Feature Licenses.

  2. Configure the virtual routing instance on the MX480 router and both QFX10002 switches:

    MX480 Router:

    QFX10002 Switch (Aggregation Device 1):

    QFX10002 Switch (Aggregation Device 2):

  3. Add the interfaces on each device that are participating in the virtual routing instance:

    MX480 Router

    QFX10002 Switch (Aggregation Device 1):

    QFX10002 Switch (Aggregation Device 2):

  4. Create the EBGP group and specify the type, peer AS, local AS, and neighbor device parameter for all devices:

    MX480 Router:

    QFX10002 Switch (Aggregation Device 1):

    QFX10002 Switch (Aggregation Device 2):

  5. After the configurations are committed on the MX480 router and both QFX10002 switches, confirm the BGP neighbor relationships have formed by entering the show bgp neighbor instance command on any device.

    The sample below provides this output for the QFX10002 switch acting as aggregation device 1:

    The output confirms the correct BGP group and virtual routing instance, and that BGP traffic is being sent and received on the switch.

  6. After the configurations are committed on the MX480 router and both QFX10002 switches, confirm that the BGP state is established and that BGP traffic is being sent and received on the device by entering the show bgp summary group ebgp-20 command.

    The output confirms that the BGP state is established and that input and output packets are being sent and received.

Configuring Class of Service

Step-by-Step Procedure

Class of service (CoS) enables you to divide traffic into classes and set various levels of throughput and packet loss when congestion occurs. You have greater control over packet loss because you can configure rules tailored to your needs.

For additional information on CoS in a Junos Fusion Data Center, see Understanding CoS in Junos Fusion Data Center.

In the Enterprise Data Center solution, one classifier with four output queues is created to manage incoming traffic congestion from the servers connected to access interfaces. Each output queue has its own low, medium-high, and high loss priority flows to manage traffic in the event of congestion. The classifier is attached to the aggregated Ethernet interfaces that connect the servers to the extended ports (the access interfaces) on the satellite devices.

This configuration procedure shows how to configure the classifier only. The configuration of service levels is not covered.

The CoS classifier used in this Solutions Guide is simple and illustrates how a CoS classifier might be configured in an Enterprise Data Center. Juniper Networks offers many CoS configuration options for its data center products, and covering all of them is beyond the scope of this Solutions Guide. For information on other CoS configuration options, see Configuring CoS in Junos Fusion Data Center.
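The statements below are a hedged sketch of one possible classifier configuration and are not the validated configuration from this example. The forwarding class names, DSCP code points, and classifier name are assumptions; depending on the platform and release, the classifier may need to be applied at the logical unit level of the interface instead:

    set class-of-service forwarding-classes class fc-best-effort queue-num 0
    set class-of-service forwarding-classes class fc-bronze queue-num 1
    set class-of-service forwarding-classes class fc-silver queue-num 2
    set class-of-service forwarding-classes class fc-gold queue-num 3
    set class-of-service classifiers dscp dc-classifier forwarding-class fc-gold loss-priority low code-points ef
    set class-of-service classifiers dscp dc-classifier forwarding-class fc-silver loss-priority medium-high code-points af31
    set class-of-service classifiers dscp dc-classifier forwarding-class fc-best-effort loss-priority high code-points be
    set class-of-service interfaces ae1 classifiers dscp dc-classifier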

To configure the CoS classifier for the Enterprise Data Center solution:

  1. Create the configuration group and ensure the configuration group is applied on both aggregation devices:

    Aggregation Device 1:

    Aggregation Device 2:

    This procedure assumes commitment synchronization is configured. See Configuring Commit Synchronization Between Aggregation Devices.

  2. Configure all four forwarding classes, setting the loss priorities for each class:

    Aggregation Device 1 or 2:

  3. Assign each forwarding class to a queue number:

    Aggregation Device 1 or 2:

  4. Assign the classifiers to the aggregated Ethernet interfaces:

Configuring DHCP Relay

Step-by-Step Procedure

You can configure a Junos Fusion Data Center to act as a Dynamic Host Configuration Protocol (DHCP) relay agent. This means that if a Junos Fusion Data Center receives a broadcast DHCP request from a locally attached host (client), it relays the message to the specified DHCP server.

For additional information on DHCP Relay, see DHCP and BOOTP Relay Overview.

In the Enterprise Data Center solution, DHCP Relay is enabled in a virtual routing instance to relay DHCP requests that originate from hosts in the routing instance to the DHCP server or servers in the server group. Both the server and the host in this configuration are attached to extended port interfaces—the access interfaces on the QFX5100 and EX4300 switches acting as satellite devices—so the DHCP request is relayed across the Junos Fusion Data Center topology.

Note

Multiple routing instances are configured in this topology over the same interfaces. Only one virtual routing instance is supported per interface. In your deployment, create one virtual routing instance that includes the combination of OSPF, EBGP, DHCP Relay, and PIM-SM that is appropriate for your networking requirements.
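The following minimal sketch shows the shape of a DHCP relay configuration in a virtual routing instance. The routing instance name (vr-20) comes from the BGP section of this guide; the server-group name, client-group name, DHCP server address, and IRB interface are assumptions for illustration:

    set routing-instances vr-20 forwarding-options dhcp-relay server-group dhcp-servers 10.0.30.10
    set routing-instances vr-20 forwarding-options dhcp-relay group dhcp-clients active-server-group dhcp-servers
    set routing-instances vr-20 forwarding-options dhcp-relay group dhcp-clients interface irb.100

After the commit, the show dhcp relay statistics command displays the relayed packet counters referenced in step 6.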

To enable the DHCP Relay configuration for the Enterprise Data Center solution:

  1. Create the configuration group and ensure the configuration group is applied on both aggregation devices:

    Aggregation Device 1:

    Aggregation Device 2:

    This procedure assumes commitment synchronization is configured. See Configuring Commit Synchronization Between Aggregation Devices.

  2. Configure the routing instance, name the server group, and specify the IP address of the DHCP server by configuring the DHCP Relay server group.
  3. Create and name a client group within the active server group. Configure the DHCP Relay server group as the active server group for the client group:
  4. Associate the client group with the virtual routing instance.
  5. Associate the client group with an IRB interface.
  6. After committing the configuration, confirm that DHCP Relay packets are being sent and received:

Configuring Layer 3 Multicast

Step-by-Step Procedure

Multicast traffic is traffic that is sent from one source to many receivers. See Multicast Overview for additional information on multicast.

Layer 3 multicast is enabled in the Enterprise Data Center solution within a virtual routing instance. The virtual routing instance, vr-10, was used earlier in this guide to enable OSPF. This procedure assumes the virtual routing instance and loopback address were created as part of the OSPF configuration procedure. See Configuring OSPF for the steps required to configure the virtual routing instance and the loopback address, if needed.

The topology in the solution implements multicast using Protocol Independent Multicast sparse-mode (PIM-SM). The MX480 router acts as the rendezvous point (RP) in the PIM-SM configuration. All interfaces on the MX480 router and both QFX10002 switches are enabled to support PIM-SM.

Note

Multiple routing instances are configured in this topology over the same interfaces. Only one virtual routing instance is supported per interface. In your deployment, create one virtual routing instance that includes the combination of OSPF, EBGP, DHCP Relay, and PIM-SM that is appropriate for your networking requirements.

See Understanding PIM Sparse Mode for additional information on PIM-SM.
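The following hedged sketch shows the shape of the PIM-SM configuration. The routing instance name (vr-10) and the use of the MX480 router as the RP come from this solution; the RP address, shown here as an assumed MX480 loopback address in vr-10, is for illustration only.

MX480 Router (RP):

    set routing-instances vr-10 protocols pim rp local address 192.168.10.1
    set routing-instances vr-10 protocols pim interface all mode sparse

QFX10002 Switches (non-RP):

    set routing-instances vr-10 protocols pim rp static address 192.168.10.1
    set routing-instances vr-10 protocols pim interface all mode sparse

After the commit, the show pim neighbors instance vr-10 and show pim rps instance vr-10 commands confirm PIM adjacencies and the learned RP.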

To enable Layer 3 Multicast in a virtual routing instance for the Enterprise Data Center solution:

  1. Configure the virtual routing instance on the MX480 router and both QFX10002 switches:

    MX480 Router:

    QFX10002 Switch (Aggregation Device 1):

    QFX10002 Switch (Aggregation Device 2):

  2. Configure the MX480 router as the rendezvous point (RP), and enable PIM-SM on all interfaces on the MX480 router in the routing instance:

    MX480 Router:

  3. Configure the non-RP devices, which are both QFX10002 switches in this topology:

    QFX10002 Switch (Aggregation Device 1):

    QFX10002 Switch (Aggregation Device 2):

    Note

    This configuration assumes the interfaces for the virtual routing instance and the loopback addresses are already created. See Configuring OSPF for the steps required to configure the virtual routing instance and the loopback address.

    All interfaces in the virtual routing instance participate in the PIM-SM topology once this configuration is committed.

  4. After committing the configuration, confirm that PIM is operational.

    QFX10002 Switch (Aggregation Device 1):

Configuring IGMP Snooping to Manage Multicast Flooding on VLANs

Step-by-Step Procedure

Internet Group Management Protocol (IGMP) snooping constrains the flooding of IPv4 multicast traffic on a VLAN by monitoring IGMP messages and only forwarding multicast traffic to interested receivers. For more information on IGMP snooping, see Configuring IGMP Snooping (CLI Procedure).

In the Enterprise Data Center solution, IGMP snooping is enabled to constrain IPv4 multicast traffic flooding in the Layer 2 VLANs when PIM is enabled. The VLANs include the aggregated Ethernet interface on each QFX10002 switch connecting to the MX480 router (the multicast router interface) as well as multiple access interfaces that connect to the topology using the extended ports on the satellite devices.
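A minimal sketch of this IGMP snooping configuration follows. The VLAN name (vlan100) is consistent with earlier sections of this guide, while the multicast router interface name (ae100.0, the uplink toward the MX480 router) is an assumption:

    set protocols igmp-snooping vlan vlan100
    set protocols igmp-snooping vlan vlan100 interface ae100.0 multicast-router-interface

To configure IGMP snooping for the Enterprise Data Center solution: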

  1. Create the configuration group and ensure the configuration group is applied on both aggregation devices:

    Aggregation Device 1:

    Aggregation Device 2:

    This procedure assumes commitment synchronization is configured. See Configuring Commit Synchronization Between Aggregation Devices.

  2. On the aggregation devices, enable IGMP snooping and configure an interface in the VLAN as a static multicast router interface:

    QFX10002 Switch (Aggregation Device 1 or 2):

  3. Enable IGMP snooping on access interfaces in the VLAN:
  4. After committing the configuration, confirm that IGMP snooping is enabled:

Configuring VLAN Autosense

Step-by-Step Procedure

VLAN autosense gives extended ports in a Junos Fusion Data Center (the access interfaces on the satellite devices) the ability to add themselves to a VLAN when they receive traffic for a VLAN that is not currently assigned to the interface. For instance, if extended port ge-101/0/1 is not part of VLAN 102 but receives traffic destined for VLAN 102, port ge-101/0/1 automatically adds itself as a member of VLAN 102 when VLAN autosense is enabled.

VLAN autosense is enabled on interface ae1 in the Enterprise Data Center topology.

To enable VLAN autosense for the Enterprise Data Center topology:

  1. Create the configuration group and ensure the configuration group is applied on both aggregation devices:

    Aggregation Device 1:

    Aggregation Device 2:

    This procedure assumes commitment synchronization is configured. See Configuring Commit Synchronization Between Aggregation Devices.

  2. Enable VLAN autosense:

Configuring Layer 2 Loop Detection and Prevention for Extended Ports in a Junos Fusion

Step-by-Step Procedure

Loop detection is a lightweight Layer 2 protocol that can be enabled on all extended ports—in this topology, the extended ports are the access ports on the EX4300 and QFX5100 satellite devices—in a Junos Fusion.

When loop detection is enabled on an extended port, the port periodically transmits a Layer 2 multicast packet with a user-defined MAC address. If the packet is received on an extended port interface in the Junos Fusion topology, the ingress interface is logically shut down and a loop detect error is flagged. If a loop is created between two extended ports, both interfaces receive the packets transmitted from the other interface, and both ports are shut down. Manual intervention is required to bring the interfaces back online.

Loop detection is useful for detecting accidental loops caused by faulty wiring or by VLAN configuration errors. It detects these and other errors with low overhead, because it requires only the periodic transmission of a small packet rather than the full overhead of other loop detection protocols such as STP.

See Understanding Loop Detection and Prevention on a Junos Fusion and Configuring Loop Detection in a Junos Fusion for additional overview and configuration information on loop detection and prevention in a Junos Fusion topology.

In the Enterprise Data Center topology, loop detection is enabled on all extended ports and a loop detection packet is transmitted at the default interval of every 30 seconds.

To enable loop detection for the Enterprise Data Center topology:

  1. Create the configuration group and ensure the configuration group is applied on both aggregation devices:

    Aggregation Device 1:

    Aggregation Device 2:

    This procedure assumes commitment synchronization is configured. See Configuring Commit Synchronization Between Aggregation Devices.

  2. Enable loop detection on all extended ports:

    Aggregation Device 1 or 2:

  3. Specify the MAC address to use in the loop detection packet:

    Aggregation Device 1 or 2:

  4. After committing the configuration, confirm that loop detection is enabled:

Configuring LLDP

Step-by-Step Procedure

Juniper Networks devices use Link Layer Discovery Protocol (LLDP) to learn and distribute device information on network links. The information allows a Juniper Networks device to quickly identify a variety of devices, resulting in a LAN that interoperates smoothly and efficiently.

In the Enterprise Data Center solution architecture, LLDP is enabled on all satellite device and aggregation device interfaces.
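The following minimal sketch assumes LLDP is enabled on every interface by using the interface all statement:

    set protocols lldp interface all

After the commit, the show lldp command confirms that LLDP is enabled and the show lldp neighbors command lists the devices discovered on each link.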

To configure LLDP for the Enterprise Data Center solution:

  1. Create the configuration group and ensure the configuration group is applied on both aggregation devices:

    Aggregation Device 1:

    Aggregation Device 2:

    This procedure assumes commitment synchronization is configured. See Configuring Commit Synchronization Between Aggregation Devices.

  2. Enable LLDP on all extended port interfaces:

    Aggregation Device 1 or 2:

  3. After committing the configuration, enter the show lldp command to confirm that LLDP is enabled:

Configuring a Firewall Filter

Step-by-Step Procedure

Firewall filters provide rules that define whether to accept or discard packets that are transiting an interface or VLAN, as well as actions to perform on packets that are accepted on the interface or VLAN.

Comprehensive coverage of firewall filter implementation options and behaviors is beyond the scope of this document. For additional information on firewall filters, see Overview of Firewall Filters.

In the Enterprise Data Center solution topology, a simple firewall filter counts packets received in VLAN 100 from a specific MAC address.

For information on other firewall filter configuration options, see Configuring Firewall Filters.
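The statements below are a hedged sketch of such a filter. The matched MAC address (00:00:5E:00:53:00) and VLAN (vlan100) come from this section; the filter, term, and counter names are assumptions, and a final accept term is included so that other traffic in the VLAN is not dropped by the implicit discard:

    set firewall family ethernet-switching filter count-mac term match-mac from source-mac-address 00:00:5E:00:53:00
    set firewall family ethernet-switching filter count-mac term match-mac then count mac-count
    set firewall family ethernet-switching filter count-mac term match-mac then accept
    set firewall family ethernet-switching filter count-mac term accept-rest then accept
    set vlans vlan100 forwarding-options filter input count-mac

The show firewall filter count-mac command then displays the counter on the aggregation device that forwards the matching traffic, as described in step 5.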

To configure this basic firewall filter:

  1. Create the configuration group and ensure the configuration group is applied on both aggregation devices:

    Aggregation Device 1:

    Aggregation Device 2:

    This procedure assumes commitment synchronization is configured. See Configuring Commit Synchronization Between Aggregation Devices.

  2. Create a firewall filter match condition to identify traffic.

    In this topology, all traffic from source MAC address 00:00:5E:00:53:00 is accepted and counted.

    Aggregation Device 1 or 2:

  3. Specify the action to take on matching traffic.

    In this topology, matching traffic is accepted and counted:

    Aggregation Device 1 or 2:

  4. Apply the filter to a VLAN:

    Aggregation Device 1 or 2:

  5. After committing the configuration, verify that the firewall filter is accepting and counting traffic.

    The firewall filter counters in the show firewall output only display firewall filter statistics from one of the aggregation devices due to how traffic is load balanced in a Junos Fusion topology. The other aggregation device always displays 0 bytes and 0 packets filtered by the firewall.

    In the output below, the firewall filter statistics in the show firewall output are visible from aggregation device 2 only.

    Aggregation Device 1:

    Aggregation Device 2:

Configuring SNMP

Step-by-Step Procedure

SNMP enables the monitoring of network devices from a central location using a network management system (NMS). For additional information on SNMP, see Understanding the Implementation of SNMP.

This document shows how to enable SNMP on the aggregation devices in the Enterprise Data Center solution only. The solution supports SNMP version 2 (SNMPv2). A complete SNMP implementation that includes selection and configuration of the NMS is beyond the scope of this document. See Configuring SNMP.
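The following sketch shows a representative SNMPv2 configuration; the location, contact, interface, community strings, trap-group name, and NMS target address are all assumptions for illustration:

    set snmp location "Data center, row 1, rack 1"
    set snmp contact "noc@example.net"
    set snmp interface irb.100
    set snmp community public authorization read-only
    set snmp community private authorization read-write
    set snmp trap-group dc-traps targets 10.0.40.5

After the commit, the show snmp statistics command reports the SNMP input and output packet counters referenced in step 9.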

To configure SNMP from the aggregation devices:

  1. Create the configuration group and ensure the configuration group is applied on both aggregation devices:

    Aggregation Device 1:

    Aggregation Device 2:

    This procedure assumes commitment synchronization is configured. See Configuring Commit Synchronization Between Aggregation Devices.

  2. Enable SNMP:

    Aggregation Device 1 or 2:

  3. Specify the physical location of the system:

    Aggregation Device 1 or 2:

  4. Specify an administrative contact for the SNMP system:

    Aggregation Device 1 or 2:

  5. Specify an SNMP interface:

    Aggregation Device 1 or 2:

  6. Specify an SNMP community name for the read-only authorization level.

    Aggregation Device 1 or 2:

  7. Specify an SNMP community name for the read-write authorization level.

    Aggregation Device 1 or 2:

  8. Configure a trap group and a target to receive the SNMP traps.

    Aggregation Device 1 or 2:

  9. After committing the configuration, confirm SNMP messages are being transmitted and received: