
MC-LAG Examples

Example: Configuring Multichassis Link Aggregation Between QFX Series Switches and MX Series Routers

This example shows how to configure multichassis link aggregation groups (MC-LAGs) between a QFX Series switch and an MX Series router using active-active mode to support Layer 2 bridging. In active-active mode, all member links carry traffic, allowing traffic to be load-balanced across both MC-LAG peers.

Requirements

This example uses the following hardware and software components:

  • One Juniper Networks MX Series router (MX240, MX480, MX960)

  • One Juniper Networks QFX Series switch (QFX10000, QFX5110, QFX5120)

  • Two servers with LAG support; MX Series routers fill the server role in this example

  • Junos OS Release 19.4R1 or later on the MC-LAG peers

Overview

In the example topology two servers are connected to two provider edge (PE) devices, S0 and R1. S0 is a QFX Series switch while R1 is an MX Series router. Both PE devices have link aggregation groups (LAGs) connected to both servers. This example configures active-active mode for the MC-LAGs, meaning that both PE devices' LAG ports are active and carrying traffic at the same time.

The servers are not aware that their aggregated Ethernet links are connected to multiple PE devices. MC-LAG operation is transparent to the servers, and both have a conventional Ethernet LAG interface configured.

On one end of an MC-LAG is an MC-LAG client device, for example, a server or switching/routing device, that has one or more physical links in a LAG. The client devices do not need to support MC-LAG as these devices only need to support a standard LAG interface. On the other side of the MC-LAG are two MC-LAG devices (PEs). Each of the PEs has one or more physical links connected to the client device. The PE devices coordinate with each other to ensure that data traffic is forwarded properly even when all client links are actively forwarding traffic.

In Figure 3, the servers operate as if both LAG members were connected to a single provider device. Because the configured mode is active-active, all LAG members are in a forwarding state and the CE device load-balances the traffic to the peering PE devices.

The Interchassis Control Protocol (ICCP) sends messages between the PE devices to control the forwarding state of the MC-LAG. In addition, an interchassis link-protection link (ICL-PL) is used to forward traffic between the PE devices as needed when operating in active-active mode.

In this example you configure two MC-LAGs on the PEs to support Layer 2 connectivity between the aggregated Ethernet interfaces on the servers. As part of the MC-LAG configuration you provision an aggregated Ethernet interface between the MC-LAG peers to support the ICL-PL and ICCP functionality.

Topology Diagram

Figure 3: QFX to MX MC-LAG Interoperability

Figure 3 shows the topology used in this example.

Key points about the topology include:

  1. The S0 node is a QFX10000 Series switch while the R1 node is an MX960 router.
  2. MX Series routers are used to fill the role of the two servers. Any switch, router, or server device that supports a conventional LACP-based LAG interface can be used in this example.
  3. The servers are assigned VLAN 10 and have a shared subnet. You expect Layer 2 connectivity between the servers.
  4. The ICCP session between the PEs is anchored to an IRB interface. This is akin to BGP peering between loopback interfaces to survive link failures. However, here the IRBs are placed in a shared VLAN (VLAN 100) that provides Layer 2 connectivity between the PEs. This means that an IGP or static route is not needed for connectivity between the IRBs. As a result the IRBs share an IP subnet.
  5. This example deploys a single LAG interface between the PEs (ae0) to support both the ICCP and ICL functionality. If desired you can run ICCP over a separate AE bundle. The use of multiple members in the AE bundle used for the ICCP/ICL links is highly recommended to ensure that they remain operational in the event of individual interface or link failures.
  6. While largely similar, the MC-LAG configuration differs slightly between the PE devices given they are different platforms. Demonstrating these configuration differences, and MC-LAG interoperability between the platforms, is the reason for this example. Be sure to keep track of which PE you are interacting with as you proceed through the example.

Configure the Devices

CLI Quick Configuration

To quickly configure this example, copy the following commands, paste them into a text file, remove any line breaks, change any details necessary to match your network configuration, and then copy and paste the commands into the CLI at the [edit] hierarchy level. When done, enter commit from configuration mode to activate the changes.

Switch S0

Note:

In this example the S0 device is a QFX10000 Series switch.

Router R1

Note:

In this example the R1 device is an MX Series router.

Server 1

Note:

The servers in this example are MX routers. While this example focuses on configuring MC-LAG on the PE devices, the server configuration is provided for completeness. In this example server 2 has the same configuration, with the exception that it is assigned IPv4 address 172.16.1.2/24 and IPv6 address 2001:db8:172:16:1::2.

Configure the S0 Switch

Step-by-Step Procedure

The following example requires you to navigate various levels in the configuration hierarchy. For information about navigating the CLI, see Using the CLI Editor in Configuration Mode.

To configure Switch S0:

  1. Specify the number of aggregated Ethernet devices supported on the chassis. Only three LAGs are needed for the example, but having unused AE bundle capacity causes no issues.
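A minimal sketch of this step follows; the device count of 5 is an arbitrary value chosen to leave spare capacity and is not taken from the original configuration:

```
set chassis aggregated-devices ethernet device-count 5
```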

  2. Configure the loopback (if desired, it's not used in this example), and IRB interfaces, along with the IRB interface's VLAN. In this example the IRB interface is used to anchor the ICCP session and is assigned to VLAN 100.
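As a hedged sketch of this step, the IRB anchor on S0 might look like the following. The 10.100.100.0/24 subnet and the VLAN name VLAN100 are assumptions for illustration, not values from the original configuration:

```
set interfaces irb unit 100 family inet address 10.100.100.1/24
set vlans VLAN100 vlan-id 100
set vlans VLAN100 l3-interface irb.100
```

The peer (R1) would use a second address in the same subnet, for example 10.100.100.2, since the IRBs share a Layer 2 VLAN and therefore an IP subnet.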

  3. Configure the ae0 interface to support ICCP and ICL. Be sure to include all MC-LAG VLANs, as well as the IRB VLAN used to support ICCP. You can specify a list of VLANs, but in this example the all keyword is used to quickly ensure all VLANs are supported over the ae0 interface. In this example only two VLANs are required on the ICL: the MC-LAG VLAN (10) and VLAN 100, which supports ICCP.

    For proper operation, unit 0 must be used for the ICL link on the QFX Series switch because, unlike an MX Series router, it does not support unit-level specification of the ICL link.

    Note:

    The QFX Series switch supports only interface-level specification of the ICL link and assumes the use of unit 0. It is therefore important that you list all MC-LAG VLANs under unit 0 as shown. The MX Series router supports either global or unit-level specification of the ICL. The latter method is shown later in this example.
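A hedged sketch of the ae0 trunk configuration on S0, using the all keyword as described (the member-interface assignments to ae0 are omitted here):

```
set interfaces ae0 aggregated-ether-options lacp active
set interfaces ae0 unit 0 family ethernet-switching interface-mode trunk
set interfaces ae0 unit 0 family ethernet-switching vlan members all
```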

  4. Specify the member interfaces used for the server facing aggregated Ethernet bundles.
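The member interface names below (xe-0/0/1 and xe-0/0/2) are hypothetical placeholders; substitute the ports that actually face your servers:

```
set interfaces xe-0/0/1 ether-options 802.3ad ae10
set interfaces xe-0/0/2 ether-options 802.3ad ae20
```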

  5. Configure the LACP and MC-LAG parameters for the MC-LAG that connects to server 1 (ae10). The MC-LAG is set for active-active mode and, in this example, S0 is set to be the active MC-LAG node using the status-control active statement. If S0 fails R1 will take over as the active node. The chassis-id statement is used by LACP for calculating the port number of the MC-LAG's physical member links. By convention the active node is assigned a chassis ID of 0 while the standby node is assigned 1. In a later step you configure R1 to be the active node for the MC-LAG connected to server 2.

    The multichassis aggregated Ethernet identification number (mc-ae-id) specifies which link aggregation group the aggregated Ethernet interface belongs to. The ae10 interfaces on S0 and R1 are configured with mc-ae-id 10. In like fashion the ae20 interface is configured with mc-ae-id 20.

    The redundancy-group 1 statement is used by ICCP to associate multiple chassis that perform similar redundancy functions and to establish a communication channel so that applications on peering chassis can send messages to each other. The ae10 and ae20 interfaces on S0 and R1 are configured with the same redundancy group, redundancy-group 1.

    The mode statement indicates whether an MC-LAG is in active-standby mode or active-active mode. Chassis that are in the same group must be in the same mode.
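The ae10 MC-LAG settings described above can be sketched as follows on S0. The LACP system-id and admin-key values are assumptions for illustration; whatever values you choose must match on both MC-LAG peers for a given bundle:

```
set interfaces ae10 aggregated-ether-options lacp active
set interfaces ae10 aggregated-ether-options lacp system-id 00:00:00:00:00:10
set interfaces ae10 aggregated-ether-options lacp admin-key 10
set interfaces ae10 aggregated-ether-options mc-ae mc-ae-id 10
set interfaces ae10 aggregated-ether-options mc-ae chassis-id 0
set interfaces ae10 aggregated-ether-options mc-ae mode active-active
set interfaces ae10 aggregated-ether-options mc-ae status-control active
set interfaces ae10 aggregated-ether-options mc-ae redundancy-group 1
set interfaces ae10 unit 0 family ethernet-switching vlan members VLAN10
```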

  6. Configure the LACP and MC-LAG parameters for the MC-LAG that connects to server 2 (ae20). The MC-LAG is set for active-active mode and, in this example, S0 is set to be the standby MC-LAG node. In the event of an R1 failure, S0 takes over as the active node.
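A sketch of the corresponding ae20 settings on S0, mirroring ae10 but with the standby role and chassis-id 1 (LACP system-id and admin-key values are again illustrative assumptions):

```
set interfaces ae20 aggregated-ether-options lacp active
set interfaces ae20 aggregated-ether-options lacp system-id 00:00:00:00:00:20
set interfaces ae20 aggregated-ether-options lacp admin-key 20
set interfaces ae20 aggregated-ether-options mc-ae mc-ae-id 20
set interfaces ae20 aggregated-ether-options mc-ae chassis-id 1
set interfaces ae20 aggregated-ether-options mc-ae mode active-active
set interfaces ae20 aggregated-ether-options mc-ae status-control standby
set interfaces ae20 aggregated-ether-options mc-ae redundancy-group 1
set interfaces ae20 unit 0 family ethernet-switching vlan members VLAN10
```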

  7. Configure the VLAN for the AE 10 and AE 20 bundles.
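Assuming the VLAN name VLAN10 used in this sketch, the MC-LAG VLAN definition is a single statement:

```
set vlans VLAN10 vlan-id 10
```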

  8. Configure the switch-options service ID.

    The ports within a bridge domain share the same flooding or broadcast characteristics in order to perform Layer 2 bridging.

    The global service-id statement is required to link related bridge domains across peers (in this case S0 and R1), and must be configured with the same value.

  9. Configure the ICCP parameters. The local and peer parameters are set to reflect the values configured previously for the local and remote IRB interfaces, respectively. Configuring ICCP peering to an IRB (or loopback) interface ensures that the ICCP session can remain up in the face of individual link failures.
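A hedged sketch of the ICCP configuration on S0, using the hypothetical IRB addresses assumed earlier (10.100.100.1 local, 10.100.100.2 peer); the liveness-detection interval shown is an illustrative value:

```
set protocols iccp local-ip-addr 10.100.100.1
set protocols iccp peer 10.100.100.2 redundancy-group-id-list 1
set protocols iccp peer 10.100.100.2 liveness-detection minimum-interval 1000
```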

  10. Configure the service ID at the global level. You must configure the same unique network-wide service ID in the set of PE routers providing the service. This service ID is required when the multichassis aggregated Ethernet interfaces are part of a bridge domain.

  11. Configure the ae0 interface to function as the ICL for the MC-LAG bundles supported by S0.

    Note:

    On the QFX platform you must specify a physical interface device as the ICL protection link. Logical unit level mapping of an ICL to a MC-LAG bundle is not supported. For proper operation you must ensure that unit 0 is used to support the bridging of the MC-LAG VLANs on the ICL.
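On the QFX Series switch, the global (physical device) ICL declaration described in this step can be sketched as follows, again assuming the hypothetical peer address 10.100.100.2:

```
set multi-chassis multi-chassis-protection 10.100.100.2 interface ae0
```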

S0 Results

From configuration mode, confirm your configuration by entering the show command. If the output does not display the intended configuration, repeat the instructions in this example to correct the configuration.

Configure the R1 Router

Step-by-Step Procedure

The following example requires you to navigate various levels in the configuration hierarchy. For information about navigating the CLI, see Using the CLI Editor in Configuration Mode.

To configure Router R1:

  1. Specify the number of aggregated Ethernet interfaces to be created on the chassis. Only three LAGs are needed, but having additional LAG capacity causes no issue.

  2. Configure the loopback (if desired, it's not needed in this example) and IRB interfaces, along with the IRB interface's VLAN. In this example the IRB interface is used to anchor the ICCP session.

  3. Configure the ae0 interface to support both ICL and ICCP functionality. A vlan-id-list is used to support a range of VLANs that includes VLAN 100 for ICCP and VLAN 10 for the MC-LAGs. Unlike the QFX Series switch, the MX Series router does not support the all keyword as a shortcut to include all VLANs.

    Note:

    The ICL link must support all MC-LAG VLANs as well as the VLAN used for ICCP. In this example this means at a minimum you must list VLAN 10 and VLAN 100, given that the ae0 link supports both ICL and ICCP functionality in this example.
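On the MX Series router, the ae0 trunk with a vlan-id-list might be sketched as shown below; the 10-100 range is an assumption that simply covers both required VLANs:

```
set interfaces ae0 flexible-vlan-tagging
set interfaces ae0 encapsulation flexible-ethernet-services
set interfaces ae0 unit 0 family bridge interface-mode trunk
set interfaces ae0 unit 0 family bridge vlan-id-list 10-100
```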

  4. Specify the members to be included within the server-facing aggregated Ethernet bundles at R1.

  5. Configure the LACP and MC-LAG parameters for the MC-LAG that connects to server 1 (ae10). The MC-LAG is set for active-active mode and, in this example, R1 is set to be the standby MC-LAG node using the status-control standby statement. This makes S0 the active MC-LAG node for ae10 when it's operational. If S0 fails R1 takes over as the active node. The chassis-id statement is used by LACP for calculating the port number of the MC-LAG's physical member links. By convention the active node is assigned chassis ID of 0 while the standby node is assigned 1.

    The multichassis aggregated Ethernet identification number (mc-ae-id) specifies which link aggregation group the aggregated Ethernet interface belongs to. The ae10 interfaces on S0 and R1 are configured with mc-ae-id 10. In like fashion the ae20 interface is configured with mc-ae-id 20.

    The redundancy-group 1 statement is used by ICCP to associate multiple chassis that perform similar redundancy functions and to establish a communication channel so that applications on peering chassis can send messages to each other. The ae10 and ae20 interfaces on S0 and R1 are configured with the same redundancy group, redundancy-group 1.

    The mode statement indicates whether an MC-LAG is in active-standby mode or active-active mode. Chassis that are in the same group must be in the same mode.

    This example demonstrates MX Series router support for the specification of the ICL interface at the unit level (under the MC-LAG unit as shown below). If desired the ICL protection link can be specified globally at the physical device level (with unit 0 assumed) at the [edit multi-chassis multi-chassis-protection] hierarchy, as was shown for the QFX Series switch S0.

    Note:

    On the MX platform you can specify the ICL interface using either a global-level physical device declaration at the [edit multi-chassis multi-chassis-protection] hierarchy, or as shown here, at the logical unit level within the MC-LAG bundle. QFX Series switches support only global-level specification of the physical device.
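A hedged sketch of the R1 ae10 configuration with unit-level ICL specification; the LACP values and the peer address 10.100.100.1 are illustrative assumptions that must match the values chosen on S0:

```
set interfaces ae10 aggregated-ether-options lacp active
set interfaces ae10 aggregated-ether-options lacp system-id 00:00:00:00:00:10
set interfaces ae10 aggregated-ether-options lacp admin-key 10
set interfaces ae10 aggregated-ether-options mc-ae mc-ae-id 10
set interfaces ae10 aggregated-ether-options mc-ae chassis-id 1
set interfaces ae10 aggregated-ether-options mc-ae mode active-active
set interfaces ae10 aggregated-ether-options mc-ae status-control standby
set interfaces ae10 aggregated-ether-options mc-ae redundancy-group 1
set interfaces ae10 unit 0 multi-chassis-protection 10.100.100.1 interface ae0
set interfaces ae10 unit 0 family bridge interface-mode trunk
set interfaces ae10 unit 0 family bridge vlan-id-list 10
```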

  6. Configure the LACP and MC-LAG parameters for the MC-LAG that connects to server 2 (ae20). The MC-LAG is set for active-active mode and, in this example, R1 is set to be the active MC-LAG node. In the event of an R1 failure, S0 takes over as the active node for the ae20 MC-LAG.

  7. Configure the VLAN for the ae10 and ae20 bundles.

    Note:

    On the MX Series router you define VLANs under the [edit bridge-domains] hierarchy. On the QFX Series switch this is done at the [edit vlans] hierarchy. This is one of the differences between the QFX Series switch and the MX Series router.
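On R1 the VLAN definitions therefore live under bridge-domains; the bridge-domain names bd10 and bd100 are hypothetical, as is the IRB unit numbering:

```
set bridge-domains bd10 vlan-id 10
set bridge-domains bd100 vlan-id 100
set bridge-domains bd100 routing-interface irb.100
```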

  8. Configure the switch-options service ID.

    The ports within a bridge domain share the same flooding or broadcast characteristics in order to perform Layer 2 bridging.

    The global service-id statement is required to link related bridge domains across peers (in this case S0 and R1), and must be configured with the same value.

  9. Configure the ICCP parameters. The local and peer parameters are set to reflect the values configured previously on the local and remote IRB interfaces, respectively. Configuring ICCP peering to an IRB (or loopback) interface ensures that the ICCP session can remain up in the face of individual link failures.

  10. Configure the service ID at the global level. You must configure the same unique network-wide service ID in the set of PE devices providing the service. This service ID is required if the multichassis aggregated Ethernet interfaces are part of a bridge domain.

R1 Results

From configuration mode, confirm your configuration by entering the show command. If the output does not display the intended configuration, repeat the instructions in this example to correct the configuration.

Verification

Confirm that the configuration is working properly by running the following operational mode commands:

  • show iccp

  • show interfaces mc-ae

  • show interfaces aeX (0, 10, and 20)

  • On the QFX Series switch use the show vlans and the show ethernet-switching table commands

  • On the MX Series router use the show bridge mac-table command

  • Verify Layer 2 connectivity between the servers

Select verification commands are run to show the expected output. We start with the show iccp command on S0. If the ICCP session is not established, issue the ping command between the IRB interfaces to verify the expected Layer 2 connectivity over the ae0 ICCP/ICL link:

Next, we run the show interfaces mc-ae extensive command on S0. The output confirms the expected active-active status and status control active/standby state for both MC-LAGs. Recall that S0 is the status control active node for ae10 and the standby node for ae20 in this example:

The show interfaces command is used to confirm that the ICCP/ICL and MC-LAG bundles are up. For brevity only the output for the ae10 bundle is shown. All AE interfaces (ae0, ae10, and ae20) should be up:

The show vlans detail and show ethernet-switching table commands are used to confirm VLAN definition and mapping for the ICCP/ICL and MC-LAG interfaces on the S0 device:

Lastly, you ping between server 1 and 2 to confirm Layer 2 connectivity:

Example: Configuring CoS for FCoE Transit Switch Traffic Across an MC-LAG

Multichassis link aggregation groups (MC-LAGs) provide redundancy and load balancing between two switches, multihoming support for client devices such as servers, and a loop-free Layer 2 network without running Spanning Tree Protocol (STP).

Note:

This example uses Junos OS without support for the Enhanced Layer 2 Software (ELS) configuration style. If your switch runs software that does support ELS, see Example: Configuring CoS Using ELS for FCoE Transit Switch Traffic Across an MC-LAG. For ELS details, see Using the Enhanced Layer 2 Software CLI.

You can use an MC-LAG to provide a redundant aggregation layer for Fibre Channel over Ethernet (FCoE) traffic in an inverted-U topology. To support lossless transport of FCoE traffic across an MC-LAG, you must configure the appropriate class of service (CoS) on both of the switches with MC-LAG port members. The CoS configuration must be the same on both of the MC-LAG switches because an MC-LAG does not carry forwarding class and IEEE 802.1p priority information.

Note:

This example describes how to configure CoS to provide lossless transport for FCoE traffic across an MC-LAG that connects two switches. It also describes how to configure CoS on the FCoE transit switches that connect FCoE hosts to the two switches that form the MC-LAG.

This example does not describe how to configure the MC-LAG itself. However, this example includes a subset of MC-LAG configuration that only shows how to configure interface membership in the MC-LAG.

Ports that are part of an FCoE-FC gateway configuration (a virtual FCoE-FC gateway fabric) do not support MC-LAGs. Ports that are members of an MC-LAG act as FCoE pass-through transit switch ports.

QFX Series switches and EX4600 switches support MC-LAGs. QFabric system Node devices do not support MC-LAGs.

Requirements

This example uses the following hardware and software components:

  • Two Juniper Networks QFX3500 switches that form an MC-LAG for FCoE traffic.

  • Two Juniper Networks QFX3500 switches that provide FCoE server access in transit switch mode and that connect to the MC-LAG switches. These switches can be standalone QFX3500 switches or they can be Node devices in a QFabric system.

  • FCoE servers (or other FCoE hosts) connected to the transit switches.

  • Junos OS Release 12.2 or later for the QFX Series.

Overview

FCoE traffic requires lossless transport. This example shows you how to:

  • Configure CoS for FCoE traffic on the two QFX3500 switches that form the MC-LAG, including priority-based flow control (PFC) and enhanced transmission selection (ETS; hierarchical scheduling of resources for the FCoE forwarding class priority and for the forwarding class set priority group).

    Note:

    Configuring or changing PFC on an interface blocks the entire port until the PFC change is completed. After a PFC change is completed, the port is unblocked and traffic resumes. Blocking the port stops ingress and egress traffic, and causes packet loss on all queues on the port until the port is unblocked.

  • Configure CoS for FCoE on the two FCoE transit switches that connect FCoE hosts to the MC-LAG switches and enable FIP snooping on the FCoE VLAN at the FCoE transit switch access ports.

  • Disable IGMP snooping on the FCoE VLAN.

    Note:

    This is only necessary if IGMP snooping is enabled on the VLAN. Before Junos OS Release 13.2, IGMP snooping was enabled by default on VLANs. Beginning with Junos OS Release 13.2, IGMP snooping is enabled by default only on the default VLAN.

  • Configure the appropriate port mode, MTU, and FCoE trusted or untrusted state for each interface to support lossless FCoE transport.

Topology

Switches that act as transit switches support MC-LAGs for FCoE traffic in an inverted-U network topology, as shown in Figure 6.

Figure 6: Supported Topology for an MC-LAG on an FCoE Transit Switch

Table 3 shows the configuration components for this example.

Table 3: Components of the CoS for FCoE Traffic Across an MC-LAG Configuration Topology

Component

Settings

Hardware

Four QFX3500 switches (two to form the MC-LAG as pass-through transit switches and two transit switches for FCoE access).

Forwarding class (all switches)

Default fcoe forwarding class.

Classifier (forwarding class mapping of incoming traffic to IEEE priority)

Default IEEE 802.1p trusted classifier on all FCoE interfaces.

LAGs and MC-LAG

S1—Ports xe-0/0/10 and xe-0/0/11 are members of LAG ae0, which connects Switch S1 to Switch S2. Ports xe-0/0/20 and xe-0/0/21 are members of MC-LAG ae1. All ports are configured in trunk port mode, as fcoe-trusted, and with an MTU of 2180.

S2—Ports xe-0/0/10 and xe-0/0/11 are members of LAG ae0, which connects Switch S2 to Switch S1. Ports xe-0/0/20 and xe-0/0/21 are members of MC-LAG ae1. All ports are configured in trunk port mode, as fcoe-trusted, and with an MTU of 2180.

Note:

Ports xe-0/0/20 and xe-0/0/21 on Switches S1 and S2 are the members of the MC-LAG.

TS1—Ports xe-0/0/25 and xe-0/0/26 are members of LAG ae1, configured in trunk port mode, as fcoe-trusted, and with an MTU of 2180. Ports xe-0/0/30, xe-0/0/31, xe-0/0/32, and xe-0/0/33 are configured in tagged-access port mode, with an MTU of 2180.

TS2—Ports xe-0/0/25 and xe-0/0/26 are members of LAG ae1, configured in trunk port mode, as fcoe-trusted, and with an MTU of 2180. Ports xe-0/0/30, xe-0/0/31, xe-0/0/32, and xe-0/0/33 are configured in tagged-access port mode, with an MTU of 2180.

FCoE queue scheduler (all switches)

fcoe-sched: Minimum bandwidth 3g; Maximum bandwidth 100%; Priority low

Forwarding class-to-scheduler mapping (all switches)

Scheduler map fcoe-map: Forwarding class fcoe; Scheduler fcoe-sched

Forwarding class set (FCoE priority group, all switches)

fcoe-pg: Forwarding class fcoe

Egress interfaces:

  • S1—LAG ae0 and MC-LAG ae1

  • S2—LAG ae0 and MC-LAG ae1

  • TS1—LAG ae1, interfaces xe-0/0/30, xe-0/0/31, xe-0/0/32, and xe-0/0/33

  • TS2—LAG ae1, interfaces xe-0/0/30, xe-0/0/31, xe-0/0/32, and xe-0/0/33

Traffic control profile (all switches)

fcoe-tcp: Scheduler map fcoe-map; Minimum bandwidth 3g; Maximum bandwidth 100%

PFC congestion notification profile (all switches)

fcoe-cnp: Code point 011

Ingress interfaces:

  • S1—LAG ae0 and MC-LAG ae1

  • S2—LAG ae0 and MC-LAG ae1

  • TS1—LAG ae1, interfaces xe-0/0/30, xe-0/0/31, xe-0/0/32, and xe-0/0/33

  • TS2—LAG ae1, interfaces xe-0/0/30, xe-0/0/31, xe-0/0/32, and xe-0/0/33

FCoE VLAN name and tag ID

Name—fcoe_vlan; ID—100

Include the FCoE VLAN on the interfaces that carry FCoE traffic on all four switches.

Disable IGMP snooping on the interfaces that belong to the FCoE VLAN on all four switches.

FIP snooping

Enable FIP snooping on Transit Switches TS1 and TS2 on the FCoE VLAN. Configure the LAG interfaces that connect to the MC-LAG switches as FCoE trusted interfaces so that they do not perform FIP snooping.

This example enables VN2VN_Port FIP snooping on the FCoE transit switch interfaces connected to the FCoE servers. The example is equally valid with VN2VF_Port FIP snooping enabled on the transit switch access ports. The method of FIP snooping you enable depends on your network configuration.

Note:

This example uses the default IEEE 802.1p trusted BA classifier, which is automatically applied to trunk mode and tagged access mode ports if you do not apply an explicitly configured classifier.

To configure CoS for FCoE traffic across an MC-LAG:

  • Use the default FCoE forwarding class and forwarding-class-to-queue mapping (do not explicitly configure the FCoE forwarding class or output queue). The default FCoE forwarding class is fcoe, and the default output queue is queue 3.

    Note:

    In Junos OS Release 12.2, traffic mapped to explicitly configured forwarding classes, even lossless forwarding classes such as fcoe, is treated as lossy (best-effort) traffic and does not receive lossless treatment. To receive lossless treatment in Release 12.2, traffic must use one of the default lossless forwarding classes (fcoe or no-loss).

    In Junos OS Release 12.3 and later, you can include the no-loss packet drop attribute in the explicit forwarding class configuration to configure a lossless forwarding class.

  • Use the default trusted BA classifier, which maps incoming packets to forwarding classes by the IEEE 802.1p code point (CoS priority) of the packet. The trusted classifier is the default classifier for interfaces in trunk and tagged-access port modes. The default trusted classifier maps incoming packets with the IEEE 802.1p code point 3 (011) to the FCoE forwarding class. If you choose to configure the BA classifier instead of using the default classifier, you must ensure that FCoE traffic is classified into forwarding classes in exactly the same way on both MC-LAG switches. Using the default classifier ensures consistent classifier configuration on the MC-LAG ports.

  • Configure a congestion notification profile that enables PFC on the FCoE code point (code point 011 in this example). The congestion notification profile configuration must be the same on both MC-LAG switches.

  • Apply the congestion notification profile to the interfaces.

  • Configure enhanced transmission selection (ETS, also known as hierarchical scheduling) on the interfaces to provide the bandwidth required for lossless FCoE transport. Configuring ETS includes configuring bandwidth scheduling for the FCoE forwarding class, a forwarding class set (priority group) that includes the FCoE forwarding class, and a traffic control profile to assign bandwidth to the forwarding class set that includes FCoE traffic.

  • Apply the ETS scheduling to the interfaces.

  • Configure the port mode, MTU, and FCoE trusted or untrusted state for each interface to support lossless FCoE transport.

In addition, this example describes how to enable FIP snooping on the Transit Switch TS1 and TS2 ports that are connected to the FCoE servers and how to disable IGMP snooping on the FCoE VLAN. To provide secure access, FIP snooping must be enabled on the FCoE access ports.

This example focuses on the CoS configuration to support lossless FCoE transport across an MC-LAG. This example does not describe how to configure the properties of MC-LAGs and LAGs, although it does show you how to configure the port characteristics required to support lossless transport and how to assign interfaces to the MC-LAG and to the LAGs.

Before you configure CoS, configure:

  • The MC-LAGs that connect Switches S1 and S2 to Switches TS1 and TS2.

  • The LAGs that connect the Transit Switches TS1 and TS2 to MC-LAG Switches S1 and S2.

  • The LAG that connects Switch S1 to Switch S2.

Configuration

To configure CoS for lossless FCoE transport across an MC-LAG, perform these tasks:

CLI Quick Configuration

To quickly configure CoS for lossless FCoE transport across an MC-LAG, copy the following commands, paste them in a text file, remove line breaks, change variables and details to match your network configuration, and then copy and paste the commands into the CLI for MC-LAG Switch S1 and MC-LAG Switch S2 at the [edit] hierarchy level. The configurations on Switches S1 and S2 are identical because the CoS configuration must be identical, and because this example uses the same ports on both switches.

Switch S1 and Switch S2

To quickly configure CoS for lossless FCoE transport across an MC-LAG, copy the following commands, paste them in a text file, remove line breaks, change variables and details to match your network configuration, and then copy and paste the commands into the CLI for Transit Switch TS1 and Transit Switch TS2 at the [edit] hierarchy level. The configurations on Switches TS1 and TS2 are identical because the CoS configuration must be identical, and because this example uses the same ports on both switches.

Switch TS1 and Switch TS2

Configuring MC-LAG Switches S1 and S2

Step-by-Step Procedure

To configure CoS resource scheduling (ETS), PFC, the FCoE VLAN, and the LAG and MC-LAG interface membership and characteristics to support lossless FCoE transport across an MC-LAG (this example uses the default fcoe forwarding class and the default classifier to map incoming FCoE traffic to the FCoE IEEE 802.1p code point 011, so you do not configure them):

  1. Configure output scheduling for the FCoE queue.
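This step maps to the fcoe-sched values listed in Table 3 (minimum bandwidth 3g, maximum bandwidth 100%, priority low). A hedged sketch:

```
set class-of-service schedulers fcoe-sched priority low
set class-of-service schedulers fcoe-sched transmit-rate 3g
set class-of-service schedulers fcoe-sched shaping-rate percent 100
```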

  2. Map the FCoE forwarding class to the FCoE scheduler (fcoe-sched).

  3. Configure the forwarding class set (fcoe-pg) for the FCoE traffic.

  4. Define the traffic control profile (fcoe-tcp) to use on the FCoE forwarding class set.

  5. Apply the FCoE forwarding class set and traffic control profile to the LAG and MC-LAG interfaces.
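Steps 2 through 5 can be sketched together as follows, using the component names from Table 3 (the interface names ae0 and ae1 are the LAG and MC-LAG from this example):

```
set class-of-service scheduler-maps fcoe-map forwarding-class fcoe scheduler fcoe-sched
set class-of-service forwarding-class-sets fcoe-pg class fcoe
set class-of-service traffic-control-profiles fcoe-tcp scheduler-map fcoe-map
set class-of-service traffic-control-profiles fcoe-tcp guaranteed-rate 3g
set class-of-service traffic-control-profiles fcoe-tcp shaping-rate percent 100
set class-of-service interfaces ae0 forwarding-class-set fcoe-pg output-traffic-control-profile fcoe-tcp
set class-of-service interfaces ae1 forwarding-class-set fcoe-pg output-traffic-control-profile fcoe-tcp
```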

  6. Enable PFC on the FCoE priority by creating a congestion notification profile (fcoe-cnp) that applies FCoE to the IEEE 802.1 code point 011.

  7. Apply the PFC configuration to the LAG and MC-LAG interfaces.
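Steps 6 and 7 together enable PFC on code point 011 and attach the profile to the bundles, sketched as:

```
set class-of-service congestion-notification-profile fcoe-cnp input ieee-802.1 code-point 011 pfc
set class-of-service interfaces ae0 congestion-notification-profile fcoe-cnp
set class-of-service interfaces ae1 congestion-notification-profile fcoe-cnp
```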

  8. Configure the VLAN for FCoE traffic (fcoe_vlan).

  9. Disable IGMP snooping on the FCoE VLAN.

  10. Add the member interfaces to the LAG between the two MC-LAG switches.

  11. Add the member interfaces to the MC-LAG.

  12. Configure the port mode as trunk and membership in the FCoE VLAN (fcoe_vlan) for the LAG (ae0) and for the MC-LAG (ae1).

  13. Set the MTU to 2180 for the LAG and MC-LAG interfaces.

    2180 bytes is the minimum size required to handle FCoE packets because of the payload and header sizes. You can configure the MTU to a higher number of bytes if desired, but not less than 2180 bytes.

  14. Set the LAG and MC-LAG interfaces as FCoE trusted ports.

    Ports that connect to other switches should be trusted and should not perform FIP snooping.

Configuring FCoE Transit Switches TS1 and TS2

Step-by-Step Procedure

The CoS configuration on FCoE Transit Switches TS1 and TS2 is similar to the CoS configuration on MC-LAG Switches S1 and S2. However, the port configurations differ, and you must enable FIP snooping on the Switch TS1 and Switch TS2 FCoE access ports.

To configure resource scheduling (ETS), PFC, the FCoE VLAN, and the LAG interface membership and characteristics to support lossless FCoE transport across the MC-LAG (this example uses the default fcoe forwarding class and the default classifier to map incoming FCoE traffic to the FCoE IEEE 802.1p code point 011, so you do not configure them):

  1. Configure output scheduling for the FCoE queue.

  2. Map the FCoE forwarding class to the FCoE scheduler (fcoe-sched).

  3. Configure the forwarding class set (fcoe-pg) for the FCoE traffic.

  4. Define the traffic control profile (fcoe-tcp) to use on the FCoE forwarding class set.

  5. Apply the FCoE forwarding class set and traffic control profile to the LAG interface and to the FCoE access interfaces.

  6. Enable PFC on the FCoE priority by creating a congestion notification profile (fcoe-cnp) that applies FCoE to the IEEE 802.1 code point 011.

  7. Apply the PFC configuration to the LAG interface and to the FCoE access interfaces.

  8. Configure the VLAN for FCoE traffic (fcoe_vlan).

  9. Disable IGMP snooping on the FCoE VLAN.

  10. Add the member interfaces to the LAG.

  11. On the LAG (ae1), configure the port mode as trunk and membership in the FCoE VLAN (fcoe_vlan).

  12. On the FCoE access interfaces (xe-0/0/30, xe-0/0/31, xe-0/0/32, xe-0/0/33), configure the port mode as tagged-access and membership in the FCoE VLAN (fcoe_vlan).

  13. Set the MTU to 2180 for the LAG and FCoE access interfaces.

    2180 bytes is the minimum size required to handle FCoE packets because of the payload and header sizes; you can configure the MTU to a higher number of bytes if desired, but not less than 2180 bytes.

  14. Set the LAG interface as an FCoE trusted port. Ports that connect to other switches should be trusted and should not perform FIP snooping:

    Note:

    Access ports xe-0/0/30, xe-0/0/31, xe-0/0/32, and xe-0/0/33 are not configured as FCoE trusted ports. The access ports remain in the default state as untrusted ports because they connect directly to FCoE devices and must perform FIP snooping to ensure network security.

  15. Enable FIP snooping on the FCoE VLAN to prevent unauthorized FCoE network access (this example uses VN2VN_Port FIP snooping; the example is equally valid if you use VN2VF_Port FIP snooping).
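The port and FIP snooping portions of this procedure (steps 10 through 15) can be sketched as follows. This is an illustrative fragment for Transit Switches TS1 and TS2, using the interface, VLAN, and beacon-period values named in this example; it is not the complete configuration:

```
## Illustrative sketch for TS1 and TS2
## LAG toward the MC-LAG switches: trunk mode, FCoE VLAN, minimum FCoE MTU
set interfaces ae1 unit 0 family ethernet-switching port-mode trunk
set interfaces ae1 unit 0 family ethernet-switching vlan members fcoe_vlan
set interfaces ae1 mtu 2180
## FCoE access port (repeat for xe-0/0/31, xe-0/0/32, and xe-0/0/33)
set interfaces xe-0/0/30 unit 0 family ethernet-switching port-mode tagged-access
set interfaces xe-0/0/30 unit 0 family ethernet-switching vlan members fcoe_vlan
set interfaces xe-0/0/30 mtu 2180
## The switch-to-switch LAG is trusted; access ports stay untrusted and snoop FIP
set ethernet-switching-options secure-access-port interface ae1.0 fcoe-trusted
## Enable VN2VN_Port FIP snooping on the FCoE VLAN
set ethernet-switching-options secure-access-port vlan fcoe_vlan examine-fip examine-vn2vn beacon-period 90000
```

The access ports xe-0/0/30 through xe-0/0/33 are deliberately left out of the fcoe-trusted configuration so that they perform FIP snooping.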

Results

Display the results of the CoS configuration on MC-LAG Switch S1 and on MC-LAG Switch S2 (the results on both switches are the same).

Note:

The forwarding class and classifier configurations are not shown because the show command does not display default portions of the configuration.

Display the results of the CoS configuration on FCoE Transit Switch TS1 and on FCoE Transit Switch TS2 (the results on both transit switches are the same).

Verification

To verify that the CoS components and FIP snooping have been configured and are operating properly, perform these tasks. Because this example uses the default fcoe forwarding class and the default IEEE 802.1p trusted classifier, the verification of those configurations is not shown.

Verifying That the Output Queue Schedulers Have Been Created

Purpose

Verify that the output queue scheduler for FCoE traffic has the correct bandwidth parameters and priorities, and is mapped to the correct forwarding class (output queue). Queue scheduler verification is the same on each of the four switches.

Action

List the scheduler map using the operational mode command show class-of-service scheduler-map fcoe-map:

Meaning

The show class-of-service scheduler-map fcoe-map command lists the properties of the scheduler map fcoe-map. The command output includes:

  • The name of the scheduler map (fcoe-map)

  • The name of the scheduler (fcoe-sched)

  • The forwarding classes mapped to the scheduler (fcoe)

  • The minimum guaranteed queue bandwidth (transmit rate 3000000000 bps)

  • The scheduling priority (low)

  • The maximum bandwidth in the priority group the queue can consume (shaping rate 100 percent)

  • The drop profile loss priority for each drop profile name. This example does not include drop profiles because you do not apply drop profiles to FCoE traffic.
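The properties listed above correspond to a scheduler and scheduler map configured along these lines. This is an illustrative sketch, where 3g expresses the 3000000000-bps transmit rate shown in the output:

```
## Illustrative sketch of the FCoE queue scheduler and scheduler map
set class-of-service schedulers fcoe-sched priority low
set class-of-service schedulers fcoe-sched transmit-rate 3g
set class-of-service schedulers fcoe-sched shaping-rate percent 100
set class-of-service scheduler-maps fcoe-map forwarding-class fcoe scheduler fcoe-sched
```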

Verifying That the Priority Group Output Scheduler (Traffic Control Profile) Has Been Created

Purpose

Verify that the traffic control profile fcoe-tcp has been created with the correct bandwidth parameters and scheduler mapping. Priority group scheduler verification is the same on each of the four switches.

Action

List the FCoE traffic control profile properties using the operational mode command show class-of-service traffic-control-profile fcoe-tcp:

Meaning

The show class-of-service traffic-control-profile fcoe-tcp command lists all of the configured traffic control profiles. For each traffic control profile, the command output includes:

  • The name of the traffic control profile (fcoe-tcp)

  • The maximum port bandwidth the priority group can consume (shaping rate 100 percent)

  • The scheduler map associated with the traffic control profile (fcoe-map)

  • The minimum guaranteed priority group port bandwidth (guaranteed rate 3000000000 in bps)
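A traffic control profile with these properties can be configured along the following lines; this is an illustrative sketch, with 3g expressing the 3000000000-bps guaranteed rate shown in the output:

```
## Illustrative sketch of the FCoE priority group output scheduler
set class-of-service traffic-control-profiles fcoe-tcp scheduler-map fcoe-map
set class-of-service traffic-control-profiles fcoe-tcp guaranteed-rate 3g
set class-of-service traffic-control-profiles fcoe-tcp shaping-rate percent 100
```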

Verifying That the Forwarding Class Set (Priority Group) Has Been Created

Purpose

Verify that the FCoE priority group has been created and that the fcoe priority (forwarding class) belongs to the FCoE priority group. Forwarding class set verification is the same on each of the four switches.

Action

List the forwarding class sets using the operational mode command show class-of-service forwarding-class-set fcoe-pg:

Meaning

The show class-of-service forwarding-class-set fcoe-pg command lists all of the forwarding classes (priorities) that belong to the fcoe-pg priority group, and the internal index number of the priority group. The command output shows that the forwarding class set fcoe-pg includes the forwarding class fcoe.
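A forwarding class set with this membership can be configured with a single statement; this sketch is illustrative:

```
## Illustrative sketch: the fcoe forwarding class belongs to priority group fcoe-pg
set class-of-service forwarding-class-sets fcoe-pg class fcoe
```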

Verifying That Priority-Based Flow Control Has Been Enabled

Purpose

Verify that PFC is enabled on the FCoE code point. PFC verification is the same on each of the four switches.

Action

List the FCoE congestion notification profile using the operational mode command show class-of-service congestion-notification fcoe-cnp:

Meaning

The show class-of-service congestion-notification fcoe-cnp command lists all of the IEEE 802.1p code points in the congestion notification profile that have PFC enabled. The command output shows that PFC is enabled on code point 011 (fcoe queue) for the fcoe-cnp congestion notification profile.

The command also shows the default cable length (100 meters), the default maximum receive unit (2500 bytes), and the default mapping of priorities to output queues because this example does not include configuring these options.
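A congestion notification profile that enables PFC on this code point can be configured along these lines; this sketch is illustrative, and the cable length, MRU, and priority-to-queue mapping are left at their defaults as in this example:

```
## Illustrative sketch: enable PFC on IEEE 802.1p code point 011 (FCoE)
set class-of-service congestion-notification-profile fcoe-cnp input ieee-802.1 code-point 011 pfc
```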

Verifying That the Interface Class of Service Configuration Has Been Created

Purpose

Verify that the CoS properties of the interfaces are correct. The verification output on MC-LAG Switches S1 and S2 differs from the output on FCoE Transit Switches TS1 and TS2.

Action

List the interface CoS configuration on MC-LAG Switches S1 and S2 using the operational mode command show configuration class-of-service interfaces:

List the interface CoS configuration on FCoE Transit Switches TS1 and TS2 using the operational mode command show configuration class-of-service interfaces:

Meaning

The show configuration class-of-service interfaces command lists the class of service configuration for all interfaces. For each interface, the command output includes:

  • The name of the interface (for example, ae0 or xe-0/0/30)

  • The name of the forwarding class set associated with the interface (fcoe-pg)

  • The name of the traffic control profile associated with the interface (output traffic control profile, fcoe-tcp)

  • The name of the congestion notification profile associated with the interface (fcoe-cnp)

Note:

Interfaces that are members of a LAG are not shown individually. The LAG or MC-LAG CoS configuration is applied to all interfaces that are members of the LAG or MC-LAG. For example, the interface CoS configuration output on MC-LAG Switches S1 and S2 shows the LAG CoS configuration but does not show the CoS configuration of the member interfaces separately. The interface CoS configuration output on FCoE Transit Switches TS1 and TS2 shows the LAG CoS configuration but also shows the configuration for interfaces xe-0/0/30, xe-0/0/31, xe-0/0/32, and xe-0/0/33, which are not members of a LAG.
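The per-interface CoS associations described above can be sketched as follows. This illustrative fragment shows one LAG interface and one access interface; the same three statements would be repeated for each interface that carries FCoE traffic:

```
## Illustrative sketch: attach the priority group, output scheduler, and PFC profile
set class-of-service interfaces ae1 forwarding-class-set fcoe-pg output-traffic-control-profile fcoe-tcp
set class-of-service interfaces ae1 congestion-notification-profile fcoe-cnp
set class-of-service interfaces xe-0/0/30 forwarding-class-set fcoe-pg output-traffic-control-profile fcoe-tcp
set class-of-service interfaces xe-0/0/30 congestion-notification-profile fcoe-cnp
```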

Verifying That the Interfaces Are Correctly Configured

Purpose

Verify that the LAG membership, MTU, VLAN membership, and port mode of the interfaces are correct. The verification output on MC-LAG Switches S1 and S2 differs from the output on FCoE Transit Switches TS1 and TS2.

Action

List the interface configuration on MC-LAG Switches S1 and S2 using the operational mode command show configuration interfaces:

List the interface configuration on FCoE Transit Switches TS1 and TS2 using the operational mode command show configuration interfaces:

Meaning

The show configuration interfaces command lists the configuration of each interface by interface name.

For each interface that is a member of a LAG, the command lists only the name of the LAG to which the interface belongs.

For each LAG interface and for each interface that is not a member of a LAG, the command output includes:

  • The MTU (2180)

  • The unit number of the interface (0)

  • The port mode (trunk mode for interfaces that connect two switches, tagged-access mode for interfaces that connect to FCoE hosts)

  • The name of the VLAN in which the interface is a member (fcoe_vlan)

Verifying That FIP Snooping Is Enabled on the FCoE VLAN on FCoE Transit Switches TS1 and TS2 Access Interfaces

Purpose

Verify that FIP snooping is enabled on the FCoE VLAN access interfaces. FIP snooping is enabled only on the FCoE access interfaces, so it is enabled only on FCoE Transit Switches TS1 and TS2. FIP snooping is not enabled on MC-LAG Switches S1 and S2 because FIP snooping is done at the Transit Switch TS1 and TS2 FCoE access ports.

Action

List the port security configuration on FCoE Transit Switches TS1 and TS2 using the operational mode command show configuration ethernet-switching-options secure-access-port:

Meaning

The show configuration ethernet-switching-options secure-access-port command lists port security information, including whether a port is trusted. The command output shows that:

  • LAG port ae1.0, which connects the FCoE transit switch to the MC-LAG switches, is configured as an FCoE trusted interface. FIP snooping is not performed on the member interfaces of the LAG (xe-0/0/25 and xe-0/0/26).

  • FIP snooping is enabled (examine-fip) on the FCoE VLAN (fcoe_vlan), the type of FIP snooping is VN2VN_Port FIP snooping (examine-vn2vn), and the beacon period is set to 90000 milliseconds. On Transit Switches TS1 and TS2, all interface members of the FCoE VLAN perform FIP snooping unless the interface is configured as FCoE trusted. On Transit Switches TS1 and TS2, interfaces xe-0/0/30, xe-0/0/31, xe-0/0/32, and xe-0/0/33 perform FIP snooping because they are not configured as FCoE trusted. The interface members of LAG ae1 (xe-0/0/25 and xe-0/0/26) do not perform FIP snooping because the LAG is configured as FCoE trusted.

Verifying That the FIP Snooping Mode Is Correct on FCoE Transit Switches TS1 and TS2

Purpose

Verify that the FIP snooping mode is correct on the FCoE VLAN. FIP snooping is enabled only on the FCoE access interfaces, so it is enabled only on FCoE Transit Switches TS1 and TS2. FIP snooping is not enabled on MC-LAG Switches S1 and S2 because FIP snooping is done at the Transit Switch TS1 and TS2 FCoE access ports.

Action

List the FIP snooping configuration on FCoE Transit Switches TS1 and TS2 using the operational mode command show fip snooping brief:

Note:

The output has been truncated to show only the relevant information.

Meaning

The show fip snooping brief command lists FIP snooping information, including the FIP snooping VLAN and the FIP snooping mode. The command output shows that:

  • The VLAN on which FIP snooping is enabled is fcoe_vlan

  • The FIP snooping mode is VN2VN_Port FIP snooping (VN2VN Snooping)

Verifying That IGMP Snooping Is Disabled on the FCoE VLAN

Purpose

Verify that IGMP snooping is disabled on the FCoE VLAN on all four switches.

Action

List the IGMP snooping protocol information on each of the four switches using the show configuration protocols igmp-snooping command:

Meaning

The show configuration protocols igmp-snooping command lists the IGMP snooping configuration for the VLANs configured on the switch. The command output shows that IGMP snooping is disabled on the FCoE VLAN (fcoe_vlan).

Example: EVPN-MPLS Interworking With an MC-LAG Topology

This example shows how to use Ethernet VPN (EVPN) to extend a multichassis link aggregation (MC-LAG) network over an MPLS network to a data center network or geographically distributed campus network.

EVPN-MPLS interworking is supported with an MC-LAG topology in which two MX Series routers, two EX9200 switches, or a mix of the two Juniper Networks devices function as MC-LAG peers, which use the Inter-Chassis Control Protocol (ICCP) and an interchassis link (ICL) to connect and maintain the topology. The MC-LAG peers are connected to a provider edge (PE) device in an MPLS network. The PE device can be either an MX Series router or an EX9200 switch.

This example shows how to configure the MC-LAG peers and PE device in the MPLS network to interwork with each other.

Requirements

This example uses the following hardware and software components:

  • Three EX9200 switches:

    • PE1 and PE2, which both function as MC-LAG peers in the MC-LAG topology and EVPN BGP peers in the EVPN-MPLS overlay network.

    • PE3, which functions as an EVPN BGP peer in the EVPN-MPLS overlay network.

  • The EX9200 switches are running Junos OS Release 17.4R1 or later software.

Note:

Although the MC-LAG topology includes two customer edge (CE) devices, this example focuses on the configuration of PE1, PE2, and PE3.

Overview and Topology

Figure 7 shows an MC-LAG topology with provider edge devices PE1 and PE2 that are configured as MC-LAG peers. The MC-LAG peers exchange control information over an ICCP link and data traffic over an ICL. In this example, the ICL is an aggregated Ethernet interface that consists of two member interfaces.

Figure 7: EVPN-MPLS Interworking With an MC-LAG Topology

The topology in Figure 7 also includes CE devices CE1 and CE2, which are both multihomed to each PE device. The links between CE1 and the two PE devices are bundled as an aggregated Ethernet interface on which MC-LAG in active-active mode is configured.

The topology in Figure 7 also includes PE3 at the edge of an MPLS network. PE3 functions as the gateway between the MC-LAG network and either a data center or a geographically distributed campus network. PE1, PE2, and PE3 run EVPN, which enables hosts in the MC-LAG network to communicate with hosts in the data center or other campus network by way of an intervening MPLS network.

From the perspective of the EVPN-MPLS interworking feature, PE3 functions solely as an EVPN BGP peer, and PE1 and PE2 in the MC-LAG topology have dual roles:

  • MC-LAG peers in the MC-LAG network.

  • EVPN BGP peers in the EVPN-MPLS network.

Because of the dual roles, PE1 and PE2 are configured with MC-LAG, EVPN, BGP, and MPLS attributes.

Table 4 outlines key MC-LAG and EVPN (BGP and MPLS) attributes configured on PE1, PE2, and PE3.

Table 4: Key MC-LAG and EVPN (BGP and MPLS) Attributes Configured on PE1, PE2, and PE3

MC-LAG Attributes

  • Interfaces:

    • PE1: ICL is aggregated Ethernet interface ae1, which consists of xe-2/1/1 and xe-2/1/2; ICCP interface is xe-2/1/0

    • PE2: ICL is aggregated Ethernet interface ae1, which consists of xe-2/1/1 and xe-2/1/2; ICCP interface is xe-2/1/0

    • PE3: Not applicable

EVPN-MPLS Attributes

  • Interfaces:

    • PE1: connection to PE3 is xe-2/0/0; connection to PE2 is xe-2/0/2

    • PE2: connection to PE3 is xe-2/0/2; connection to PE1 is xe-2/0/0

    • PE3: connection to PE1 is xe-2/0/2; connection to PE2 is xe-2/0/3

  • IP addresses:

    • PE1: BGP peer address 198.51.100.1

    • PE2: BGP peer address 198.51.100.2

    • PE3: BGP peer address 198.51.100.3

  • Autonomous system: 65000 on PE1, PE2, and PE3

  • Virtual switch routing instances: evpn1, evpn2, and evpn3 on PE1, PE2, and PE3

Note the following about the EVPN-MPLS interworking feature and its configuration:

  • You must configure Ethernet segment identifiers (ESIs) on the dual-homed interfaces in the MC-LAG topology. The ESIs enable EVPN to identify the dual-homed interfaces.

  • The only type of routing instance that is supported is the virtual switch instance (set routing-instances name instance-type virtual-switch).

  • On the MC-LAG peers, you must include the bgp-peer configuration statement in the [edit routing-instances name protocols evpn mclag] hierarchy level. This configuration statement enables the interworking of EVPN-MPLS with MC-LAG on the MC-LAG peers.

  • Address Resolution Protocol (ARP) suppression is not supported.
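The ESI, virtual switch, and bgp-peer requirements above can be sketched as follows from PE1's perspective. The ESI value is an illustrative placeholder, and the assumption that bgp-peer names the MC-LAG peer's BGP address (PE2's 198.51.100.2 when configuring PE1) follows the addressing in Table 4:

```
## Illustrative sketch; the ESI value is a placeholder
set interfaces ae0 unit 1 esi 00:11:22:33:44:55:66:77:88:99
set interfaces ae0 unit 1 esi all-active
## Virtual switch is the only supported instance type
set routing-instances evpn1 instance-type virtual-switch
## Enables EVPN-MPLS interworking with MC-LAG on the MC-LAG peers
set routing-instances evpn1 protocols evpn mclag bgp-peer 198.51.100.2
```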

PE1 and PE2 Configuration

To configure PE1 and PE2, perform these tasks:

CLI Quick Configuration

PE1: MC-LAG Configuration

PE1: EVPN-MPLS Configuration

PE2: MC-LAG Configuration

PE2: EVPN-MPLS Configuration

PE1: Configuring MC-LAG

Step-by-Step Procedure
  1. Set the number of aggregated Ethernet interfaces on PE1.

  2. Configure aggregated Ethernet interface ae0 on interface xe-2/0/1, and configure LACP and MC-LAG on ae0. Divide aggregated Ethernet interface ae0 into three logical interfaces (ae0.1, ae0.2, and ae0.3). For each logical interface, specify an ESI, place the logical interface in MC-LAG active-active mode, and map the logical interface to a VLAN.

  3. Configure physical interface xe-2/0/6, and divide it into three logical interfaces (xe-2/0/6.1, xe-2/0/6.2, and xe-2/0/6.3). Map each logical interface to a VLAN.

  4. Configure physical interface xe-2/1/0 as a Layer 3 interface, on which you configure ICCP. Specify the interface with the IP address of 203.0.113.2 on PE2 as the ICCP peer to PE1.

  5. Configure aggregated Ethernet interface ae1 on interfaces xe-2/1/1 and xe-2/1/2, and configure LACP on ae1. Divide aggregated Ethernet interface ae1 into three logical interfaces (ae1.1, ae1.2, and ae1.3), and map each logical interface to a VLAN. Specify ae1 as the multichassis protection link between PE1 and PE2.
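The MC-LAG portion of these steps can be sketched as follows from PE1's perspective. This is an illustrative fragment: the device count, mc-ae-id, chassis-id, and status-control values are assumptions not stated in this example, while the ICCP peer address (203.0.113.2 on PE2) and the ae1 protection link come from steps 4 and 5:

```
## Illustrative sketch of the PE1 MC-LAG scaffolding; several values are assumed
set chassis aggregated-devices ethernet device-count 2
## ICCP peering with PE2 (local address 203.0.113.1 is assumed from step 4 of PE2)
set protocols iccp local-ip-addr 203.0.113.1
set protocols iccp peer 203.0.113.2 liveness-detection minimum-interval 1000
## MC-LAG active-active on ae0; mc-ae-id, chassis-id, and status-control are assumed
set interfaces ae0 aggregated-ether-options lacp active
set interfaces ae0 aggregated-ether-options mc-ae mc-ae-id 1 chassis-id 0 mode active-active status-control active
## ae1 is the multichassis protection link (ICL) between PE1 and PE2
set multi-chassis multi-chassis-protection 203.0.113.2 interface ae1
```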

PE1: Configuring EVPN-MPLS

Step-by-Step Procedure
  1. Configure the loopback interface, and the interfaces connected to the other PE devices.

  2. Configure IRB interfaces irb.1, irb.2, and irb.3.

  3. Assign a router ID and the autonomous system in which PE1, PE2, and PE3 reside.

  4. Enable per-packet load-balancing for EVPN routes when EVPN multihoming active-active mode is used.

  5. Enable MPLS on interfaces xe-2/0/0.0 and xe-2/0/2.0.

  6. Configure an IBGP overlay that includes PE1, PE2, and PE3.

  7. Configure OSPF as the internal routing protocol for EVPN by specifying an area ID and interfaces on which EVPN-MPLS is enabled.

  8. Configure the Label Distribution Protocol (LDP) on the loopback interface and the interfaces on which EVPN-MPLS is enabled.

  9. Configure virtual switch routing instances for VLAN v1, which is assigned VLAN IDs of 1, 2, and 3, and include the interfaces and other entities associated with the VLAN.
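Steps 3 through 8 can be sketched as follows for PE1. This is an illustrative fragment; the BGP group name (evpn-overlay) and the load-balancing policy name (lb) are assumed names, while the addresses and autonomous system number come from Table 4:

```
## Illustrative sketch of the PE1 EVPN-MPLS underlay and overlay
set routing-options router-id 198.51.100.1
set routing-options autonomous-system 65000
## Per-packet load balancing for EVPN active-active multihoming
set policy-options policy-statement lb then load-balance per-packet
set routing-options forwarding-table export lb
## MPLS on the core-facing interfaces
set protocols mpls interface xe-2/0/0.0
set protocols mpls interface xe-2/0/2.0
## IBGP overlay with EVPN signaling to PE2 and PE3
set protocols bgp group evpn-overlay type internal
set protocols bgp group evpn-overlay local-address 198.51.100.1
set protocols bgp group evpn-overlay family evpn signaling
set protocols bgp group evpn-overlay neighbor 198.51.100.2
set protocols bgp group evpn-overlay neighbor 198.51.100.3
## OSPF as the IGP, and LDP for label distribution
set protocols ospf area 0.0.0.0 interface lo0.0
set protocols ospf area 0.0.0.0 interface xe-2/0/0.0
set protocols ospf area 0.0.0.0 interface xe-2/0/2.0
set protocols ldp interface lo0.0
set protocols ldp interface xe-2/0/0.0
set protocols ldp interface xe-2/0/2.0
```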

PE2: Configuring MC-LAG

Step-by-Step Procedure
  1. Set the number of aggregated Ethernet interfaces on PE2.

  2. Configure aggregated Ethernet interface ae0 on interface xe-2/0/1, and configure LACP and MC-LAG on ae0. Divide aggregated Ethernet interface ae0 into three logical interfaces (ae0.1, ae0.2, and ae0.3). For each logical interface, specify an ESI, place the logical interface in MC-LAG active-active mode, and map the logical interface to a VLAN.

  3. Configure physical interface xe-2/0/6, and divide it into three logical interfaces (xe-2/0/6.1, xe-2/0/6.2, and xe-2/0/6.3). Map each logical interface to a VLAN.

  4. Configure physical interface xe-2/1/0 as a Layer 3 interface, on which you configure ICCP. Specify the interface with the IP address of 203.0.113.1 on PE1 as the ICCP peer to PE2.

  5. Configure aggregated Ethernet interface ae1 on interfaces xe-2/1/1 and xe-2/1/2, and configure LACP on ae1. Divide aggregated Ethernet interface ae1 into three logical interfaces (ae1.1, ae1.2, and ae1.3), and map each logical interface to a VLAN. Specify ae1 as the multichassis protection link between PE1 and PE2.

PE2: Configuring EVPN-MPLS

Step-by-Step Procedure
  1. Configure the loopback interface, and the interfaces connected to the other PE devices.

  2. Configure IRB interfaces irb.1, irb.2, and irb.3.

  3. Assign a router ID and the autonomous system in which PE1, PE2, and PE3 reside.

  4. Enable per-packet load-balancing for EVPN routes when EVPN multihoming active-active mode is used.

  5. Enable MPLS on interfaces xe-2/0/0.0 and xe-2/0/2.0.

  6. Configure an IBGP overlay that includes PE1, PE2, and PE3.

  7. Configure OSPF as the internal routing protocol for EVPN by specifying an area ID and interfaces on which EVPN-MPLS is enabled.

  8. Configure the Label Distribution Protocol (LDP) on the loopback interface and the interfaces on which EVPN-MPLS is enabled.

  9. Configure virtual switch routing instances for VLAN v1, which is assigned VLAN IDs of 1, 2, and 3, and include the interfaces and other entities associated with the VLAN.

PE3 Configuration

CLI Quick Configuration

PE3: EVPN-MPLS Configuration

PE3: Configuring EVPN-MPLS

Step-by-Step Procedure
  1. Configure the loopback interface, and the interfaces connected to the other PE devices.

  2. Configure interface xe-2/0/6, which is connected to the host.

  3. Configure IRB interfaces irb.1, irb.2, and irb.3.

  4. Assign a router ID and the autonomous system in which PE1, PE2, and PE3 reside.

  5. Enable per-packet load-balancing for EVPN routes when EVPN multihoming active-active mode is used.

  6. Enable MPLS on interfaces xe-2/0/2.0 and xe-2/0/3.0.

  7. Configure an IBGP overlay that includes PE1, PE2, and PE3.

  8. Configure OSPF as the internal routing protocol for EVPN by specifying an area ID and interfaces on which EVPN-MPLS is enabled.

  9. Configure LDP on the loopback interface and the interfaces on which EVPN-MPLS is enabled.

  10. Configure virtual switch routing instances for VLAN v1, which is assigned VLAN IDs of 1, 2, and 3, and include the interfaces and other entities associated with the VLAN.