Example: Configuring High Availability for the Midsize Enterprise Campus

 

This example details the steps required on the devices in access, aggregation, core, and edge layers to configure them to meet the high availability goals described in Understanding the Design of the Midsize Enterprise Campus Solution.

Requirements

Table 1 shows the hardware and software requirements for this example. Table 2 shows the scaling and performance targets used for this example.

Table 1: Hardware and Software Requirements

  Hardware        Device Name                   Software
  MX240           cs-edge-r01, cs-edge-r02      13.2R2.4
  SRX650          cs-edge-fw01, cs-edge-fw02    12.1X44-D39.4
  EX9200/EX9250   cs-core-sw01, cs-core-sw02    13.2R3.7
  EX4600          cs-agg-01                     12.3R3.4
  EX2300          cs-2300-ab5                   12.3R3.4
  EX3400          cs-3400-ab4                   12.3R3.4
  EX4300          cs-4300-ab1                   12.3R3.4
  EX4300          cs-4300-ab2, cs-4300-ab3      13.2X51-D21.1

Table 2: Node Features and Performance/Scalability

  Edge (MX240, SRX650)
    Features: MC-LAG, OSPF, BGP, IRB
    Performance/scalability targets: 3k IPv4 routes

  Core (EX9200/EX9250)
    Features: VLANs, MC-LAG, LAG, IGMP snooping, OSPF, PIM-SM, IGMP, DHCP relay, IRB
    Performance/scalability targets: 3k IPv4 routes; 128k MAC table entries; 16k ARP entries

  Aggregation (EX4600)
    Features: VLANs, LAG, IGMP snooping, OSPF, PIM-SM, IGMP, DHCP relay, RVI
    Performance/scalability targets: 3k IPv4 routes; 5 IGMP groups

  Access (EX2300, EX3400, EX4300)
    Features: VLANs, LAG, 802.1X, IGMP snooping, DHCP snooping, ARP inspection, IP source guard
    Performance/scalability targets: 55k MAC table entries; 13k 802.1X users; 5 IGMP groups

The configuration procedures that follow assume that all physical cabling has been completed and that the devices have been initially configured.

Overview and Topology

Figure 1 shows the topology used for this example.

Figure 1: High Availability Topology

In this topology, all access switches in location A are in a Virtual Chassis configuration. Link aggregation is configured on the uplink ports to the EX9200/EX9250 switches, giving each Virtual Chassis a physical connection to each EX9200/EX9250 switch. Similarly, the access switches in location B are in a Virtual Chassis configuration and have link aggregation configured on the uplink ports to the EX4600 Virtual Chassis, giving each access Virtual Chassis a physical link to each member of the EX4600 Virtual Chassis.

For node redundancy in the core and edge layers:

  • The EX9200/EX9250 switches are in an active/active MC-LAG configuration. MC-LAG interfaces ae1, ae2, and ae3 connect to the access switches in location A, and MC-LAG interfaces ae11 and ae12 connect to the SRX650 gateways.

  • The SRX650 gateways are in an active/standby chassis cluster configuration, with redundant Ethernet interfaces reth0 and reth1 connecting to the EX9200/EX9250 core switches and the MX240 edge routers.

  • The MX240 routers are in an active/active MC-LAG configuration, with MC-LAG interfaces ae1 and ae3 connecting to the SRX650 gateways.

Configuring the Access Switches for High Availability

This section provides step-by-step procedures for configuring the access switches in the access layer for high availability. It uses configuring the cs-4300-ab1 Virtual Chassis as an example—you can use the same basic procedures to configure the other Virtual Chassis in the access layer.

To configure the access switches for high availability:

Configure the Virtual Chassis

Step-by-Step Procedure

To configure the Virtual Chassis:

  • Define the members of the Virtual Chassis and their roles.
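    A representative preprovisioned configuration for cs-4300-ab1 (the serial numbers are placeholders for your own members):

    set virtual-chassis preprovisioned
    set virtual-chassis member 0 serial-number PE0000000001 role routing-engine
    set virtual-chassis member 1 serial-number PE0000000002 role routing-engine
    set virtual-chassis member 2 serial-number PE0000000003 role line-card
    set virtual-chassis member 3 serial-number PE0000000004 role line-card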

Configure the LAG Interface Towards the Core or Aggregation Layer

Step-by-Step Procedure

The following procedure shows how to configure ae1 on cs-4300-ab1. You can use the same procedure for the LAGs on the other switches, substituting the information shown in Table 3.

Table 3: LAG Interfaces in the Access Layer

  Virtual Chassis   LAG Name   Description String                      Member Interfaces
  cs-4300-ab1       ae1        "MCLAG towards core-sw1 and core-sw2"   xe-1/1/0, xe-2/1/0, xe-3/1/0, xe-4/1/0
  cs-4300-ab2       ae2        "MCLAG towards core-sw1 and core-sw2"   xe-0/2/0, xe-1/2/0, xe-2/2/0, xe-4/2/0
  cs-4300-ab3       ae3        "MCLAG towards core-sw1 and core-sw2"   xe-0/2/0, xe-3/2/0
  cs-3400-ab4       ae4        "MCLAG towards cs-agg"                  xe-0/1/0, xe-1/1/0
  cs-2300-ab5       ae5        "MCLAG towards cs-agg"                  ge-0/0/23, ge-0/1/0, ge-1/0/23, ge-1/1/0

To configure ae1 on cs-4300-ab1:

  1. Specify the number of LAG interfaces on the device.
  2. Configure the LAG settings for ae1.
  3. Specify the members of the LAG.
  4. Configure the LAG interface as a trunk interface with membership in all VLANs.

    The configuration statements used on an EX4300 switch differ from the statements used on the other EX Series switches. Examples of both configurations are shown.

    On EX2300 and EX3400 switches, enter:
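    (A sketch of steps 1 through 4; it assumes the non-ELS CLI implied by the note above, and takes the ae1 member interfaces from Table 3.)

    set chassis aggregated-devices ethernet device-count 2
    set interfaces ae1 description "MCLAG towards core-sw1 and core-sw2"
    set interfaces ae1 aggregated-ether-options lacp active
    set interfaces xe-1/1/0 ether-options 802.3ad ae1
    set interfaces xe-2/1/0 ether-options 802.3ad ae1
    set interfaces xe-3/1/0 ether-options 802.3ad ae1
    set interfaces xe-4/1/0 ether-options 802.3ad ae1
    set interfaces ae1 unit 0 family ethernet-switching port-mode trunk
    set interfaces ae1 unit 0 family ethernet-switching vlan members all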

    On EX4300 switches, enter:
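    (On the ELS CLI, only the trunk statements differ from the preceding sketch.)

    set interfaces ae1 unit 0 family ethernet-switching interface-mode trunk
    set interfaces ae1 unit 0 family ethernet-switching vlan members all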

Configure the High Availability Software

Step-by-Step Procedure

To enable graceful Routing Engine switchover (GRES), nonstop active routing (NSR), and nonstop bridging (NSB):

  • Enter the following configuration statements:

    On EX2300 and EX3400 switches, enter:
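    (A sketch; commit synchronize is required for NSR.)

    set chassis redundancy graceful-switchover
    set routing-options nonstop-routing
    set ethernet-switching-options nonstop-bridging
    set system commit synchronize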

    On EX4300 switches, enter:
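    (On the ELS CLI, only the nonstop bridging statement differs.)

    set chassis redundancy graceful-switchover
    set routing-options nonstop-routing
    set protocols layer2-control nonstop-bridging
    set system commit synchronize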

Configuring the Aggregation Switches for High Availability

In location B, two EX4600 switches in a Virtual Chassis configuration function as the aggregation switch. For link redundancy, LAG interfaces ae4 and ae5 connect the aggregation switch to the access switches cs-3400-ab4 and cs-2300-ab5, respectively.

To configure the aggregation switches for high availability:

Configure the EX4600 Virtual Chassis

Step-by-Step Procedure

To configure the Virtual Chassis:

  • Enter the following commands:
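    A representative sketch for a two-member EX4600 Virtual Chassis; the VC port number and serial numbers are placeholders:

    user@cs-agg-01> request virtual-chassis vc-port set pic-slot 0 port 23

    set virtual-chassis preprovisioned
    set virtual-chassis member 0 serial-number TA0000000001 role routing-engine
    set virtual-chassis member 1 serial-number TA0000000002 role routing-engine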

Configure the LAG Interfaces Towards the Access Layer

Step-by-Step Procedure

To configure the LAG interfaces:

  1. Specify the number of LAG interfaces on the device.
  2. Configure ae4.
  3. Configure ae5.
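    A representative sketch of these three steps; the EX4600-side member interfaces, the device count, and the description strings are illustrative assumptions:

    set chassis aggregated-devices ethernet device-count 6
    set interfaces ae4 description "LAG towards cs-3400-ab4"
    set interfaces ae4 aggregated-ether-options lacp active
    set interfaces xe-0/0/10 ether-options 802.3ad ae4
    set interfaces xe-1/0/10 ether-options 802.3ad ae4
    set interfaces ae4 unit 0 family ethernet-switching interface-mode trunk
    set interfaces ae4 unit 0 family ethernet-switching vlan members all
    set interfaces ae5 description "LAG towards cs-2300-ab5"
    set interfaces ae5 aggregated-ether-options lacp active
    set interfaces xe-0/0/11 ether-options 802.3ad ae5
    set interfaces xe-1/0/11 ether-options 802.3ad ae5
    set interfaces ae5 unit 0 family ethernet-switching interface-mode trunk
    set interfaces ae5 unit 0 family ethernet-switching vlan members all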

Configure the High Availability Software

Step-by-Step Procedure

To enable graceful Routing Engine switchover (GRES), nonstop active routing (NSR), and nonstop bridging (NSB):

  • Enter the following configuration statements:
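    (A sketch; the EX4600 uses the ELS CLI.)

    set chassis redundancy graceful-switchover
    set routing-options nonstop-routing
    set protocols layer2-control nonstop-bridging
    set system commit synchronize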

Configuring the Core Switches for High Availability

This section provides the procedures for configuring the core switches in an active/active MC-LAG configuration.

To configure the core switches for high availability:

Configure the Number of Aggregated Ethernet Interfaces and Switch Service ID

Step-by-Step Procedure

This procedure configures two global settings for the switch:

  • Number of aggregated Ethernet interfaces—You must specify the number of aggregated Ethernet interfaces that will be configured on the device.

  • Service ID—You must configure a service ID when the MC-LAG logical interfaces are part of a bridge domain, as they are in this example. The service ID is used to synchronize applications such as IGMP, ARP, and MAC learning across MC-LAG members.

  1. Specify the number of aggregated Ethernet interfaces to be created.
  2. Specify the switch service ID.
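    A sketch of both steps; the values are assumptions (the service ID must match on both core switches):

    set chassis aggregated-devices ethernet device-count 30
    set switch-options service-id 1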

Configure the Inter-Chassis Control Protocol (ICCP) and ICCP Link

Step-by-Step Procedure

ICCP is a control plane protocol for MC-LAG. It uses TCP as a transport protocol and Bidirectional Forwarding Detection (BFD) for fast convergence. ICCP:

  • Synchronizes configurations and operational states between the two MC-LAG peers

  • Synchronizes MAC address and ARP entries learned from one MC-LAG node and shares them with the other peer

In the testing for this network configuration example, we achieved quicker convergence after a Routing Engine switchover by configuring a 3-second BFD timer for ICCP.

To configure ICCP and the ICCP link:

  1. Specify the members that belong to interface ae0, which is used for the ICCP link.

    On both cs-core-sw1 and cs-core-sw2, enter:
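    (The ae0 member links are placeholders; substitute the interfaces you cabled for the ICCP link.)

    set interfaces xe-0/0/30 ether-options 802.3ad ae0
    set interfaces xe-0/0/31 ether-options 802.3ad ae0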

  2. Configure ae0 as a Layer 3 link.

    On cs-core-sw1, enter:
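    (The ICCP link subnet is a placeholder.)

    set interfaces ae0 unit 0 family inet address 172.16.33.17/30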

    On cs-core-sw2, enter:
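    (The peer side of the same placeholder subnet.)

    set interfaces ae0 unit 0 family inet address 172.16.33.18/30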

  3. Configure ICCP, using the loopback addresses of cs-core-sw1 (172.16.32.5) and cs-core-sw2 (172.16.32.6) as the local IP addresses.

    On cs-core-sw1, enter:
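    (The addresses are from this example; a 1500 ms minimum interval with a multiplier of 2 yields the 3-second timer described below.)

    set protocols iccp local-ip-addr 172.16.32.5
    set protocols iccp peer 172.16.32.6 redundancy-group-id-list 1
    set protocols iccp peer 172.16.32.6 liveness-detection minimum-interval 1500
    set protocols iccp peer 172.16.32.6 liveness-detection multiplier 2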

    On cs-core-sw2, enter:
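    (Mirrored on the second switch.)

    set protocols iccp local-ip-addr 172.16.32.6
    set protocols iccp peer 172.16.32.5 redundancy-group-id-list 1
    set protocols iccp peer 172.16.32.5 liveness-detection minimum-interval 1500
    set protocols iccp peer 172.16.32.5 liveness-detection multiplier 2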

    Together, the liveness-detection statements result in a BFD timer of 3 seconds (1.5 seconds * 2 multiplier).

Configure the Interchassis Link (ICL)

Step-by-Step Procedure

The ICL is a special Layer 2 link between peers in an active/active MC-LAG configuration. It provides redundancy when an active link to an MC-LAG node fails by permitting the nodes to forward traffic between them.

We recommend that you configure the ICL members with a hold-time down value that is higher than the configured BFD timer to prevent the ICL from being advertised as being down before the ICCP link is down. If the ICL goes down before the ICCP link, this causes a flap of the MC-LAG interface on the status-control standby node, which leads to a delay in convergence. This example uses a hold-time down value of 4 seconds (4000 ms), based on the ICCP BFD timer of 3 seconds. These values result in zero loss convergence during recovery of failed devices.

To configure the ICL:

  1. Configure ICL members with a hold-time value higher than the configured BFD timer.

    On both cs-core-sw1 and cs-core-sw2, enter:
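    (The ICL member interfaces are placeholders; 4000 ms is the hold-time down value from the text, and 100 ms is an assumed minimal hold-time up value, per the note that follows.)

    set interfaces xe-0/0/32 hold-time up 100 down 4000
    set interfaces xe-0/0/33 hold-time up 100 down 4000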

    Note

    If you configure a hold-time down value, you must also configure a hold-time up value. We have chosen a minimal value for hold-time up in this configuration.

  2. Configure ae29, which is the LAG for the ICL.

    On both cs-core-sw1 and cs-core-sw2, enter:
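    (The member links match the placeholders above; the ICL carries all VLANs.)

    set interfaces xe-0/0/32 ether-options 802.3ad ae29
    set interfaces xe-0/0/33 ether-options 802.3ad ae29
    set interfaces ae29 aggregated-ether-options lacp active
    set interfaces ae29 unit 0 family ethernet-switching interface-mode trunk
    set interfaces ae29 unit 0 family ethernet-switching vlan members all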

Configure the MC-LAG Links to the Access Layer

Step-by-Step Procedure

The core switches establish an MC-LAG link to each of the Virtual Chassis in the access layer. To create the MC-LAG link, you create an aggregated Ethernet interface, enable LACP on the interface, and configure the MC-LAG options under the mc-ae statement.

Table 4 describes the mc-ae options.

Table 4: mc-ae Statement Options

  mc-ae-id
    Specifies which link aggregation group the aggregated Ethernet interface belongs to. In this solution, the mc-ae-id matches the number of the aggregated Ethernet interface; that is, ae1 has an mc-ae-id of 1, ae2 has an mc-ae-id of 2, and ae3 has an mc-ae-id of 3.

  redundancy-group
    Used by ICCP to associate multiple chassis that perform similar redundancy functions and to establish a communication channel so that applications on peering chassis can send messages to each other. The MC-LAG interfaces on cs-core-sw1 and cs-core-sw2 are configured with the same redundancy group number, redundancy-group 1.

  init-delay-time
    Specifies the number of seconds by which to delay bringing the MC-LAG interface back to the up state when the MC-LAG peer is rebooted. By delaying the bringup of the interface until after protocol convergence, you can prevent packet loss during the recovery of failed links and devices. In this solution, we found that a delay of 520 seconds provided the quickest convergence after core switch failover. Configure this value for all MC-LAG interfaces on the core switches.

  chassis-id
    Used by LACP to calculate the port numbers of the MC-LAG physical member links. cs-core-sw1 uses chassis-id 0 to identify its MC-LAG interfaces; cs-core-sw2 uses chassis-id 1.

  mode
    Indicates whether an MC-LAG is in active/standby mode or active/active mode. Chassis that are in the same group must be in the same mode. In this solution, the mode is active/active.

  status-control
    Specifies whether this node becomes active or goes into standby mode when an ICL failure occurs. Must be active on one node and standby on the other node.

  events iccp-peer-down force-icl-down
    Forces the ICL down if the ICCP peer of this node goes down.

  events iccp-peer-down prefer-status-control-active
    Allows the LACP system ID to be retained during a reboot, which provides better convergence after a failover. Note that if you configure both nodes as prefer-status-control-active, as this configuration example shows, you must also configure ICCP peering using the peer's loopback address to make sure the ICCP session does not go down due to a physical link failure.

The following procedure shows how to configure the ae1 MC-LAG link to cs-4300-ab1. You can use the same procedure to configure the links to the other access switches, substituting the values shown in Table 5.

Table 5: Parameters for MC-LAGs to Access Switches

  LAG   LAG Client    Member Interfaces    lacp system-id      lacp admin-key   mc-ae mc-ae-id
  ae1   cs-4300-ab1   xe-0/0/0, xe-1/0/0   00:ae:01:00:00:01   1                1
  ae2   cs-4300-ab2   xe-0/0/1, xe-1/0/1   00:ae:02:00:00:01   2                2
  ae3   cs-4300-ab3   xe-0/0/2             00:ae:03:00:00:01   3                3

To configure the ae1 MC-LAG link to cs-4300-ab1:

  1. Specify the members to be included within the aggregated Ethernet interface ae1.

    On both cs-core-sw1 and cs-core-sw2, enter:
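    (Member interfaces from Table 5.)

    set interfaces xe-0/0/0 ether-options 802.3ad ae1
    set interfaces xe-1/0/0 ether-options 802.3ad ae1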

  2. Configure the LACP parameters on the aggregated Ethernet interface.

    On both cs-core-sw1 and cs-core-sw2, enter:
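    (LACP values from Table 5.)

    set interfaces ae1 aggregated-ether-options lacp active
    set interfaces ae1 aggregated-ether-options lacp system-id 00:ae:01:00:00:01
    set interfaces ae1 aggregated-ether-options lacp admin-key 1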

  3. Configure the mc-ae interface parameters.

    On cs-core-sw1, enter:
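    (Values follow Table 4 and Table 5.)

    set interfaces ae1 aggregated-ether-options mc-ae mc-ae-id 1
    set interfaces ae1 aggregated-ether-options mc-ae redundancy-group 1
    set interfaces ae1 aggregated-ether-options mc-ae chassis-id 0
    set interfaces ae1 aggregated-ether-options mc-ae mode active-active
    set interfaces ae1 aggregated-ether-options mc-ae status-control active
    set interfaces ae1 aggregated-ether-options mc-ae init-delay-time 520
    set interfaces ae1 aggregated-ether-options mc-ae events iccp-peer-down force-icl-down
    set interfaces ae1 aggregated-ether-options mc-ae events iccp-peer-down prefer-status-control-active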

    On cs-core-sw2, enter:
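    (Only chassis-id and status-control differ on the peer.)

    set interfaces ae1 aggregated-ether-options mc-ae mc-ae-id 1
    set interfaces ae1 aggregated-ether-options mc-ae redundancy-group 1
    set interfaces ae1 aggregated-ether-options mc-ae chassis-id 1
    set interfaces ae1 aggregated-ether-options mc-ae mode active-active
    set interfaces ae1 aggregated-ether-options mc-ae status-control standby
    set interfaces ae1 aggregated-ether-options mc-ae init-delay-time 520
    set interfaces ae1 aggregated-ether-options mc-ae events iccp-peer-down force-icl-down
    set interfaces ae1 aggregated-ether-options mc-ae events iccp-peer-down prefer-status-control-active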

  4. Configure ae1 as a trunk port, with membership in all VLANS.

    On both cs-core-sw1 and cs-core-sw2, enter:
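    (ELS trunk statements, as on the access uplinks.)

    set interfaces ae1 unit 0 family ethernet-switching interface-mode trunk
    set interfaces ae1 unit 0 family ethernet-switching vlan members all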

Configure the MC-LAG Links to the Edge Firewalls

Step-by-Step Procedure

The following procedure shows how to configure the ae11 MC-LAG link to cs-edge-fw01 on cs-core-sw01 and cs-core-sw02. You can use the same procedure to configure the ae12 MC-LAG link to cs-edge-fw02 on both switches, substituting the values shown in Table 6.

Table 6: Parameters for Core Switch MC-LAG Interfaces Connecting to Edge Firewalls

  LAG    LAG Client              Member Interface   lacp system-id      lacp admin-key   mc-ae mc-ae-id
  ae11   reth0 on cs-edge-fw01   ge-2/0/0           00:ae:11:00:00:01   11               11
  ae12   reth0 on cs-edge-fw02   ge-2/0/1           00:ae:12:00:00:01   12               12

To configure the ae11 MC-LAG link to cs-edge-fw01:

  1. Specify the interface to be included within the aggregated Ethernet interface ae11.

    On both cs-core-sw1 and cs-core-sw2, enter:
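    (Member interface from Table 6.)

    set interfaces ge-2/0/0 ether-options 802.3ad ae11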

  2. Configure the LACP parameters on the aggregated Ethernet interface.

    On both cs-core-sw1 and cs-core-sw2, enter:
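    (LACP values from Table 6.)

    set interfaces ae11 aggregated-ether-options lacp active
    set interfaces ae11 aggregated-ether-options lacp system-id 00:ae:11:00:00:01
    set interfaces ae11 aggregated-ether-options lacp admin-key 11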

  3. Configure the mc-ae interface parameters.

    On cs-core-sw1, enter:
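    (Values follow Table 4 and Table 6.)

    set interfaces ae11 aggregated-ether-options mc-ae mc-ae-id 11
    set interfaces ae11 aggregated-ether-options mc-ae redundancy-group 1
    set interfaces ae11 aggregated-ether-options mc-ae chassis-id 0
    set interfaces ae11 aggregated-ether-options mc-ae mode active-active
    set interfaces ae11 aggregated-ether-options mc-ae status-control active
    set interfaces ae11 aggregated-ether-options mc-ae init-delay-time 520
    set interfaces ae11 aggregated-ether-options mc-ae events iccp-peer-down force-icl-down
    set interfaces ae11 aggregated-ether-options mc-ae events iccp-peer-down prefer-status-control-active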

    On cs-core-sw2, enter:
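    (Only chassis-id and status-control differ.)

    set interfaces ae11 aggregated-ether-options mc-ae mc-ae-id 11
    set interfaces ae11 aggregated-ether-options mc-ae redundancy-group 1
    set interfaces ae11 aggregated-ether-options mc-ae chassis-id 1
    set interfaces ae11 aggregated-ether-options mc-ae mode active-active
    set interfaces ae11 aggregated-ether-options mc-ae status-control standby
    set interfaces ae11 aggregated-ether-options mc-ae init-delay-time 520
    set interfaces ae11 aggregated-ether-options mc-ae events iccp-peer-down force-icl-down
    set interfaces ae11 aggregated-ether-options mc-ae events iccp-peer-down prefer-status-control-active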

  4. Configure ae11.0 as a trunk interface and as a member of the Firewall-Trust VLAN.

    On cs-core-sw1, enter:
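    (The VLAN name follows Table 7.)

    set interfaces ae11 unit 0 family ethernet-switching interface-mode trunk
    set interfaces ae11 unit 0 family ethernet-switching vlan members Firewall-Trust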

    On cs-core-sw2, enter:
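    (The statements are identical on the second switch.)

    set interfaces ae11 unit 0 family ethernet-switching interface-mode trunk
    set interfaces ae11 unit 0 family ethernet-switching vlan members Firewall-Trust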

Configure the Bridge Domain on the MC-LAG Interfaces to the Edge Firewalls

Step-by-Step Procedure

The active node in the SRX chassis cluster uses gratuitous ARP to advertise to connecting devices that it is the next-hop gateway. This requires that the interfaces between the connecting devices and the SRX chassis cluster be in the same bridge domain.

Table 7 summarizes the configuration of this bridge domain.

Table 7: VLAN 600 Configuration

  VLAN Name        VLAN ID   IRB Name   Mask   cs-core-sw01 Address   cs-core-sw02 Address   Virtual IP Address
  Firewall-Trust   600       irb.600    /29    172.16.33.3            172.16.33.2            172.16.33.1

To configure the required bridge domain:

  1. Create the bridge domain.

    On cs-core-sw1, enter:
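    (A sketch; the static-mac hierarchy is shown as an assumption, and the MAC address is a placeholder for the irb.600 MAC address of cs-core-sw2, as the note below explains.)

    set vlans Firewall-Trust vlan-id 600
    set vlans Firewall-Trust l3-interface irb.600
    set vlans Firewall-Trust switch-options interface ae11.0 static-mac 00:11:22:33:44:02
    set vlans Firewall-Trust switch-options interface ae12.0 static-mac 00:11:22:33:44:02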

    On cs-core-sw2, enter:
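    (The placeholder here stands for the irb.600 MAC address of cs-core-sw1.)

    set vlans Firewall-Trust vlan-id 600
    set vlans Firewall-Trust l3-interface irb.600
    set vlans Firewall-Trust switch-options interface ae11.0 static-mac 00:11:22:33:44:01
    set vlans Firewall-Trust switch-options interface ae12.0 static-mac 00:11:22:33:44:01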

    Note

    The static-mac option on VLAN 600 (Firewall-Trust) prevents traffic arriving from the SRX chassis cluster from flooding the VLAN.

    The SRX chassis cluster sends traffic to both core switches using the IRB 600 MAC address for routing the packet. The IRB 600 MAC addresses on cs-core-sw1 and cs-core-sw2 are different. Because the reth0 interface on the chassis cluster is a single LAG, the reth0 LAG address hashing results in a packet destined to the cs-core-sw1 MAC address being sent to cs-core-sw2. In an MC-LAG configuration, MAC address learning does not occur on the ICL link, and, as a result, cs-core-sw2 floods the packet on VLAN 600. To avoid flooding on VLAN 600, specify the MAC address for cs-core-sw1 in the static-mac option on cs-core-sw2 and vice versa. When a packet destined to cs-core-sw1 arrives at cs-core-sw2, cs-core-sw2 sends the packet to cs-core-sw1 using the static MAC address.

  2. Configure an IRB interface on the VLAN and enable VRRP on the IRB interface.

    On cs-core-sw1, enter:
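    (Addresses from Table 7; the VRRP group number and priorities are assumptions.)

    set interfaces irb unit 600 family inet address 172.16.33.3/29 vrrp-group 60 virtual-address 172.16.33.1
    set interfaces irb unit 600 family inet address 172.16.33.3/29 vrrp-group 60 priority 200
    set interfaces irb unit 600 family inet address 172.16.33.3/29 vrrp-group 60 accept-data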

    On cs-core-sw2, enter:
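    (The peer uses its own address from Table 7 and a lower priority.)

    set interfaces irb unit 600 family inet address 172.16.33.2/29 vrrp-group 60 virtual-address 172.16.33.1
    set interfaces irb unit 600 family inet address 172.16.33.2/29 vrrp-group 60 priority 100
    set interfaces irb unit 600 family inet address 172.16.33.2/29 vrrp-group 60 accept-data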

Configure Hold-Up Timers on Other Interfaces

Step-by-Step Procedure

In addition to the MC-LAG interfaces, the core switches have other Layer 2 and Layer 3 interfaces, such as the Layer 3 interface connecting to the aggregation switch in location B. To avoid having these interfaces come up before the MC-LAG synchronization completes after a failover, you can configure a hold-up timer on the interfaces. The interfaces will not come up until the timer expires.

In our testing, we found that a hold-up timer of 467 seconds gave the best convergence results.

To configure the hold-up timer on an interface (in this case, the interface connecting to aggregation switch):

  • On both cs-core-sw1 and cs-core-sw2, enter the following configuration statements:
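    (The interface name is a placeholder; 467 seconds is 467,000 milliseconds.)

    set interfaces xe-0/0/34 hold-time up 467000 down 0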

Configure VRRP on IRB Interfaces

Step-by-Step Procedure

VRRP is used in conjunction with MC-LAG on the core switches. VRRP permits redundant routers to appear as a single virtual router to the other devices. In a VRRP implementation, each VRRP peer shares a common virtual IP address and virtual MAC address in addition to its unique physical IP address and MAC address. Thus, each IRB configured on the core switches must have a virtual IP address.

To configure VRRP on an IRB—in this case, the IRB that is the Layer 3 interface for the eng1_data_wired VLAN:

  1. Configure the eng1_data_wired VLAN and the IRB as the routing interface for the VLAN.

    On both cs-core-sw1 and cs-core-sw2, enter:
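    (The VLAN ID is an assumed value.)

    set vlans eng1_data_wired vlan-id 100
    set vlans eng1_data_wired l3-interface irb.100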

  2. Configure the IRB and enable VRRP on it.

    On cs-core-sw1, enter:
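    (The addresses, unit number, VRRP group, and priorities are assumed values.)

    set interfaces irb unit 100 family inet address 10.1.1.2/24 vrrp-group 10 virtual-address 10.1.1.1
    set interfaces irb unit 100 family inet address 10.1.1.2/24 vrrp-group 10 priority 200
    set interfaces irb unit 100 family inet address 10.1.1.2/24 vrrp-group 10 accept-data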

    On cs-core-sw2, enter:
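    (The peer with a lower priority.)

    set interfaces irb unit 100 family inet address 10.1.1.3/24 vrrp-group 10 virtual-address 10.1.1.1
    set interfaces irb unit 100 family inet address 10.1.1.3/24 vrrp-group 10 priority 100
    set interfaces irb unit 100 family inet address 10.1.1.3/24 vrrp-group 10 accept-data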

Configure the High Availability Software

Step-by-Step Procedure

To enable graceful Routing Engine switchover (GRES), nonstop active routing (NSR), and nonstop bridging (NSB):

  • On both cs-core-sw1 and cs-core-sw2, enter the following configuration statements:
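    (A sketch; the EX9200/EX9250 uses the ELS CLI.)

    set chassis redundancy graceful-switchover
    set routing-options nonstop-routing
    set protocols layer2-control nonstop-bridging
    set system commit synchronize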

Configuring the Edge Firewalls for High Availability

This section provides the procedures for configuring the edge firewalls in a chassis cluster configuration and for configuring the redundant Ethernet interfaces.

To configure the edge firewalls for high availability:

Enable Chassis Cluster Mode

Step-by-Step Procedure

The command for enabling chassis cluster mode is an operational command, not a configuration statement, and must be executed on each member. The command causes the cluster member to reboot.

When you enable chassis cluster mode, you specify a cluster ID for the cluster. Because this network configuration example has only a single cluster, it uses cluster ID 1 for the cluster, with cs-edge-fw01 configured as node 0 and cs-edge-fw02 configured as node 1.

After you enable chassis clustering, the cluster members share a single, common configuration. All subsequent configuration steps can be done from the primary cluster member (node 0).

To enable chassis clustering on each member:

  1. On cs-edge-fw01, enter the following operational command:
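    (The cluster ID and node number follow the text above.)

    user@cs-edge-fw01> set chassis cluster cluster-id 1 node 0 reboot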
  2. On cs-edge-fw02, enter the following operational command:
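    (Node 1 uses the same cluster ID.)

    user@cs-edge-fw02> set chassis cluster cluster-id 1 node 1 reboot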

    After the chassis members finish rebooting, the slot numbering on node 1 is changed so that numbering begins with slot 9 instead of slot 0. In addition, the interfaces shown in Table 8 are automatically mapped to the fxp0 and fxp1 interfaces.

    Table 8: Mapping of Interfaces After Chassis Clustering Is Enabled

      Interface on Node 0   Interface on Node 1   Mapped to   Purpose
      ge-0/0/0              ge-9/0/0              fxp0        Out-of-band management
      ge-0/0/1              ge-9/0/1              fxp1        Chassis cluster control link

Configure the Chassis Cluster Data Fabric

Step-by-Step Procedure

After the chassis cluster has formed, you must configure the fabric ports for the cluster. These ports are used to pass real-time objects (RTOs) in active/passive mode. RTOs are messages that the cluster members use to synchronize information with each other.

To configure the data fabric, you must configure two fabric interfaces (one on each chassis) as shown:

  1. Configure the fabric link for cs-edge-fw01.
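    (The fabric member interface is a placeholder.)

    set interfaces fab0 fabric-options member-interfaces ge-0/0/2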
  2. Configure the fabric link for cs-edge-fw02.
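    (On node 1, slot numbering starts at 9, so the corresponding port is ge-9/0/2.)

    set interfaces fab1 fabric-options member-interfaces ge-9/0/2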

Configure Chassis Clustering Groups

Step-by-Step Procedure

Although the chassis cluster configuration is held within a single common configuration, some elements of the configuration need to be applied to a specific member. Examples include the host name and the out-of-band management interface.

To apply the configuration to a specific member, you use the node-specific configuration method called groups.

To configure chassis clustering groups:

  1. Configure node-specific information for cs-edge-fw01 (node 0):
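    (The management address is a placeholder.)

    set groups node0 system host-name cs-edge-fw01
    set groups node0 interfaces fxp0 unit 0 family inet address 192.168.100.11/24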
  2. Configure node-specific information for cs-edge-fw02 (node 1):
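    (A placeholder address on the same management subnet.)

    set groups node1 system host-name cs-edge-fw02
    set groups node1 interfaces fxp0 unit 0 family inet address 192.168.100.12/24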
  3. Configure apply groups.
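    (The quoted variable is literal; each node expands it to its own group name.)

    set apply-groups "${node}"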

    This command uses the node variable to define how the groups are applied to the nodes (each node will recognize its number and accept the configuration accordingly).

Configure Chassis Cluster Redundancy Groups

Step-by-Step Procedure

The next step in configuring chassis clustering is to configure redundancy groups. Redundancy group 0 is always for the control plane, while redundancy groups 1 and higher are for the data plane ports. Because active/passive mode allows only one chassis member to be active at a time, you define only redundancy groups 0 and 1.

You also need to define which device has priority for the control plane, as well as which device has priority for the data plane. Although the control plane can be active on a different chassis than the data plane in active/passive clustering, many administrators prefer having both the control plane and data plane active on the same chassis member. This example gives node 0 priority for both the control plane and data plane.

To configure chassis cluster redundancy groups:

  • Enter the following commands:
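    (The priority values are assumptions; the higher priority wins, giving node 0 both roles as described above.)

    set chassis cluster redundancy-group 0 node 0 priority 200
    set chassis cluster redundancy-group 0 node 1 priority 100
    set chassis cluster redundancy-group 1 node 0 priority 200
    set chassis cluster redundancy-group 1 node 1 priority 100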

Configure the Redundant Ethernet Interfaces

Step-by-Step Procedure

The redundant Ethernet interfaces connect the SRX chassis cluster to the core switches and edge routers. They allow the backup chassis member to take over the connections seamlessly in the event of a data plane failover. To configure the redundant Ethernet interfaces, you define which interfaces belong to the redundant Ethernet interface, define which redundancy group the redundant Ethernet interface belongs to (in an active/passive cluster, the interface always belongs to redundancy group 1), and define the redundant Ethernet interface information, such as the IP address of the interface.

To configure redundant Ethernet interfaces on the chassis cluster:

  1. Specify the number of redundant Ethernet interfaces to be configured.

    This is similar to how you configure the number of aggregated Ethernet interfaces on a switch.

  2. Configure redundant Ethernet interface reth0 toward the core switches.
  3. Configure the member links for reth0.
  4. Configure redundant Ethernet interface reth1 toward the edge routers.
  5. Configure the member links for reth1.
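    A representative sketch of steps 1 through 5; the member interfaces are placeholders (slot numbering on node 1 starts at 9, so ge-2/0/5 on node 0 pairs with ge-11/0/5 on node 1), and the reth unit addressing is shown in the bridge domain procedure that follows:

    set chassis cluster reth-count 2
    set interfaces reth0 redundant-ether-options redundancy-group 1
    set interfaces ge-2/0/5 gigether-options redundant-parent reth0
    set interfaces ge-11/0/5 gigether-options redundant-parent reth0
    set interfaces reth1 redundant-ether-options redundancy-group 1
    set interfaces ge-2/0/6 gigether-options redundant-parent reth1
    set interfaces ge-11/0/6 gigether-options redundant-parent reth1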

Configure the Bridge Domains

Step-by-Step Procedure

As previously described, the active node uses gratuitous Address Resolution Protocol (ARP) to advertise to the connecting devices that it is the next-hop gateway. This requires that the redundant Ethernet interface members and their connecting interfaces on the other devices belong to the same bridge domain.

To configure the bridge domains for reth0 and reth1:

  • Enter the following commands:
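    (A sketch; the reth unit addresses are placeholders inside the /29 subnets used in Table 7 and Table 10, and the VLAN IDs follow those tables.)

    set interfaces reth0 vlan-tagging
    set interfaces reth0 unit 600 vlan-id 600
    set interfaces reth0 unit 600 family inet address 172.16.33.4/29
    set interfaces reth1 vlan-tagging
    set interfaces reth1 unit 601 vlan-id 601
    set interfaces reth1 unit 601 family inet address 172.16.33.12/29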

Configuring the Edge Routers for High Availability

This section provides the procedures for configuring the edge routers in an MC-LAG configuration and for configuring the high availability software.

To configure the edge routers for high availability:

Configure the Number of Aggregated Ethernet Interfaces and the Service ID

Step-by-Step Procedure

This procedure configures two global settings for the router:

  • Number of aggregated Ethernet interfaces—You must specify the number of aggregated Ethernet interfaces that will be configured on the device.

  • Service ID—You must configure a service ID when the MC-LAG logical interfaces are part of a bridge domain, as they are in this example. The service ID is used to synchronize applications such as IGMP, ARP, and MAC learning across MC-LAG members.

On both cs-edge-r01 and cs-edge-r02:

  1. Specify the number of aggregated Ethernet interfaces to be created.
  2. Specify the switch service ID.
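    A sketch of both steps; the values are assumptions (the service ID must match on both routers):

    set chassis aggregated-devices ethernet device-count 5
    set switch-options service-id 1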

Configure the Inter-Chassis Control Protocol (ICCP) and ICCP Link

Step-by-Step Procedure

To configure ICCP and the ICCP link:

  1. Specify the member interface that belongs to interface ae0, which will be used for the ICCP link.

    On both cs-edge-r01 and cs-edge-r02, enter:
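    (The ae0 member interface is a placeholder.)

    set interfaces ge-1/0/2 gigether-options 802.3ad ae0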

  2. Configure ae0 as a Layer 3 link for ICCP.

    On cs-edge-r01, enter:
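    (The ICCP link subnet is a placeholder.)

    set interfaces ae0 unit 0 family inet address 172.16.33.21/30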

    On cs-edge-r02, enter:
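    (The peer side of the same placeholder subnet.)

    set interfaces ae0 unit 0 family inet address 172.16.33.22/30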

  3. Configure ICCP, using the loopback addresses of cs-edge-r01 (172.16.32.33) and cs-edge-r02 (172.16.32.34) as the local IP addresses.

    On cs-edge-r01, enter:
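    (The addresses are from this example; 750 ms with a multiplier of 2 is one way to arrive at the 1.5-second timer in the note below, and the multiplier is an assumption.)

    set protocols iccp local-ip-addr 172.16.32.33
    set protocols iccp peer 172.16.32.34 redundancy-group-id-list 1
    set protocols iccp peer 172.16.32.34 liveness-detection minimum-interval 750
    set protocols iccp peer 172.16.32.34 liveness-detection multiplier 2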

    On cs-edge-r02, enter:
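    (Mirrored on the second router.)

    set protocols iccp local-ip-addr 172.16.32.34
    set protocols iccp peer 172.16.32.33 redundancy-group-id-list 1
    set protocols iccp peer 172.16.32.33 liveness-detection minimum-interval 750
    set protocols iccp peer 172.16.32.33 liveness-detection multiplier 2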

    Note

    The BFD timer is configured to be 1.5 sec, which provides faster convergence in this network configuration.

Configure the Interchassis Link (ICL) on the Edge Routers

Step-by-Step Procedure

To configure the ICL link on the edge routers:

  1. Configure the ICL member link.

    On both cs-edge-r01 and cs-edge-r02, enter:
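    (The member interface is a placeholder; the 2000 ms hold-time down value is an assumption chosen to exceed the 1.5-second BFD timer, per the note that follows.)

    set interfaces ge-1/0/4 gigether-options 802.3ad ae4
    set interfaces ge-1/0/4 hold-time up 100 down 2000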

    For faster convergence, the hold-down timer is configured to be greater than the ICCP BFD timer, which is set to 1.5 seconds.

  2. Configure ae4, which will be used for the ICL link.

    On both cs-edge-r01 and cs-edge-r02, enter:
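    (A sketch; the ICL carries bridge domain 601.)

    set interfaces ae4 flexible-vlan-tagging
    set interfaces ae4 encapsulation flexible-ethernet-services
    set interfaces ae4 aggregated-ether-options lacp active
    set interfaces ae4 unit 601 encapsulation vlan-bridge vlan-id 601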

Configure the MC-LAG Links from the Routers to the Firewalls

Step-by-Step Procedure

The edge routers establish MC-LAG links to each of the SRX Series gateways in the chassis cluster. To create the MC-LAG link, you create an aggregated Ethernet interface, enable LACP on the interface, and configure the MC-LAG options under the mc-ae statement. Table 4 describes the mc-ae options.

The following procedure shows how to configure the ae1 MC-LAG link to cs-edge-fw01. You can use the same procedure to configure the ae3 link to cs-edge-fw02, substituting the values shown in Table 9.

Table 9: Parameters for MC-LAG Interfaces from Routers to Firewalls

  LAG   LAG Client              Description String      Member Interface   lacp system-id      lacp admin-key   mc-ae mc-ae-id
  ae1   reth1 on cs-edge-fw01   "To-Firewall-reth1"     ge-1/0/0           00:ae:01:00:00:01   1                1
  ae3   reth1 on cs-edge-fw02   "To-Firewall-Standby"   ge-1/0/1           00:ae:03:00:00:01   3                3

To configure the MC-LAG interfaces to the firewalls:

  1. Specify the members to be included within the aggregated Ethernet interface ae1.

    On both cs-edge-r01 and cs-edge-r02, enter:
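    (Member interface from Table 9.)

    set interfaces ge-1/0/0 gigether-options 802.3ad ae1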

  2. Configure flexible VLAN tagging and the LACP parameters on the aggregated Ethernet interface.

    On cs-edge-r01 and cs-edge-r02, enter:
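    (LACP values and the description string are from Table 9.)

    set interfaces ae1 flexible-vlan-tagging
    set interfaces ae1 encapsulation flexible-ethernet-services
    set interfaces ae1 description "To-Firewall-reth1"
    set interfaces ae1 aggregated-ether-options lacp active
    set interfaces ae1 aggregated-ether-options lacp system-id 00:ae:01:00:00:01
    set interfaces ae1 aggregated-ether-options lacp admin-key 1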

  3. Configure the mc-ae interface parameters.

    On cs-edge-r01, enter:
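    (Values follow Table 4 and Table 9.)

    set interfaces ae1 aggregated-ether-options mc-ae mc-ae-id 1
    set interfaces ae1 aggregated-ether-options mc-ae redundancy-group 1
    set interfaces ae1 aggregated-ether-options mc-ae chassis-id 0
    set interfaces ae1 aggregated-ether-options mc-ae mode active-active
    set interfaces ae1 aggregated-ether-options mc-ae status-control active
    set interfaces ae1 aggregated-ether-options mc-ae events iccp-peer-down force-icl-down
    set interfaces ae1 aggregated-ether-options mc-ae events iccp-peer-down prefer-status-control-active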

    On cs-edge-r02, enter:
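    (Only chassis-id and status-control differ.)

    set interfaces ae1 aggregated-ether-options mc-ae mc-ae-id 1
    set interfaces ae1 aggregated-ether-options mc-ae redundancy-group 1
    set interfaces ae1 aggregated-ether-options mc-ae chassis-id 1
    set interfaces ae1 aggregated-ether-options mc-ae mode active-active
    set interfaces ae1 aggregated-ether-options mc-ae status-control standby
    set interfaces ae1 aggregated-ether-options mc-ae events iccp-peer-down force-icl-down
    set interfaces ae1 aggregated-ether-options mc-ae events iccp-peer-down prefer-status-control-active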

  4. Configure a logical interface on ae1, with membership in VLAN 601.

    On cs-edge-r01, enter:
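    (Matching the logical unit number to the VLAN ID is a convention, not a requirement.)

    set interfaces ae1 unit 601 encapsulation vlan-bridge vlan-id 601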

    On cs-edge-r02, enter:
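    (Identical on the second router.)

    set interfaces ae1 unit 601 encapsulation vlan-bridge vlan-id 601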

Configure the Bridge Domain on the MC-LAG Interfaces to the Firewalls

Step-by-Step Procedure

The active node in the SRX chassis cluster uses gratuitous ARP to advertise to connecting devices that it is the next-hop gateway. This requires that the interfaces between the connecting devices and the SRX chassis cluster be in the same bridge domain.

Table 10 summarizes the configuration of this bridge domain.

Table 10: Bridge Domain 601 Configuration

  Name   ID    IRB Name   Mask   cs-edge-r01 Address   cs-edge-r02 Address   Virtual IP Address
  bd1    601   irb.601    /29    172.16.33.10          172.16.33.11          172.16.33.9

To configure the required bridge domain:

  1. Create the bridge domain.

    On cs-edge-r01, enter:
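    (A sketch; the static MAC address is a placeholder for the irb.601 MAC address of cs-edge-r02, as the note below explains.)

    set bridge-domains bd1 vlan-id 601
    set bridge-domains bd1 interface ae1.601
    set bridge-domains bd1 interface ae3.601
    set bridge-domains bd1 interface ae4.601
    set bridge-domains bd1 routing-interface irb.601
    set bridge-domains bd1 bridge-options interface ae1.601 static-mac 00:11:22:33:44:12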

    On cs-edge-r02, enter:
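    (The placeholder here stands for the irb.601 MAC address of cs-edge-r01.)

    set bridge-domains bd1 vlan-id 601
    set bridge-domains bd1 interface ae1.601
    set bridge-domains bd1 interface ae3.601
    set bridge-domains bd1 interface ae4.601
    set bridge-domains bd1 routing-interface irb.601
    set bridge-domains bd1 bridge-options interface ae1.601 static-mac 00:11:22:33:44:11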

    Note

    The static-mac option on bridge domain 601 (bd1) prevents traffic arriving from the SRX chassis cluster from flooding the VLAN.

    The SRX chassis cluster sends traffic to both edge routers using the IRB 601 MAC address for routing the packet. The IRB 601 MAC addresses on cs-edge-r01 and cs-edge-r02 are different. Because the reth1 interface on the chassis cluster is a single LAG, the reth1 LAG address hashing results in a packet destined to the cs-edge-r01 MAC address being sent to cs-edge-r02. In an MC-LAG configuration, MAC address learning does not occur on the ICL link, and, as a result, cs-edge-r02 floods the packet on bridge domain 601. To avoid flooding on bridge domain 601, specify the MAC address for cs-edge-r01 in the static-mac option on cs-edge-r02 and vice versa. When a packet destined to cs-edge-r01 arrives at cs-edge-r02, cs-edge-r02 sends the packet to cs-edge-r01 using the static MAC address.

  2. Configure an IRB interface on the bridge domain and enable VRRP on the IRB interface.

    On cs-edge-r01, enter:
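    (Addresses from Table 10; the VRRP group number and priorities are assumptions.)

    set interfaces irb unit 601 family inet address 172.16.33.10/29 vrrp-group 61 virtual-address 172.16.33.9
    set interfaces irb unit 601 family inet address 172.16.33.10/29 vrrp-group 61 priority 200
    set interfaces irb unit 601 family inet address 172.16.33.10/29 vrrp-group 61 accept-data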

    On cs-edge-r02, enter:
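    (The peer with a lower priority.)

    set interfaces irb unit 601 family inet address 172.16.33.11/29 vrrp-group 61 virtual-address 172.16.33.9
    set interfaces irb unit 601 family inet address 172.16.33.11/29 vrrp-group 61 priority 100
    set interfaces irb unit 601 family inet address 172.16.33.11/29 vrrp-group 61 accept-data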

Configure the High Availability Software

Step-by-Step Procedure

To enable graceful Routing Engine switchover (GRES), nonstop active routing (NSR), and nonstop bridging (NSB):

  • On both cs-edge-r01 and cs-edge-r02, enter the following configuration statements:
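    (A sketch; the MX Series uses the same statements as the ELS switches.)

    set chassis redundancy graceful-switchover
    set routing-options nonstop-routing
    set protocols layer2-control nonstop-bridging
    set system commit synchronize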

Verification

Confirm that the configuration is working properly.

Verifying the High Availability Configuration of the Access Switches

Purpose

Verify the Virtual Chassis, LAG, and high availability software configuration on the access switches.

Action

Perform the following steps for each Virtual Chassis in the access layer:

  1. Verify the Virtual Chassis status.
    user@cs-4300-ab1> show virtual-chassis status
  2. Verify the LACP status of the uplink aggregated Ethernet interface.

    user@cs-4300-ab1> show lacp interfaces
  3. Verify that GRES is enabled by entering the following command on the backup Virtual Chassis member:
    user@cs-4300-ab1> show system switchover

Verifying the High Availability Configuration of the Aggregation Switches

Purpose

Verify the Virtual Chassis, LAG, and high availability software configuration on the EX4600 switches in location B.

Action

  1. Verify the Virtual Chassis status.
    user@cs-agg-01> show virtual-chassis status
  2. Verify the LACP status of the aggregated Ethernet interfaces to the access switches.
    user@cs-agg-01> show lacp interfaces
  3. Verify that GRES is enabled by entering the following command on the backup Virtual Chassis member:
    {backup:1}

    user@cs-agg-01> show system switchover
  4. Verify that nonstop active routing is enabled.
    user@cs-agg-01> show task replication
    Note

    If you have not configured routing yet, you might not see the protocols and their synchronization status listed.

Verifying the High Availability Configuration of the Core Switches

Purpose

Verify the MC-LAG configuration and high availability software configuration on the core switches.

Action

Perform the following steps on both cs-core-sw01 and cs-core-sw02:

  1. Verify that ICCP is configured.
    user@cs-core-sw01> show iccp
  2. Verify that the ICL link has been configured with membership in all the VLANs.
    user@cs-core-sw01> show configuration interfaces ae29
  3. Verify the status of the ICL link.
    user@cs-core-sw01> show interfaces ae29 extensive
  4. Verify that all the MC-LAG interfaces are up.
    user@cs-core-sw01> show interfaces mc-ae
  5. Verify that ICL (ae29) and the MC-LAG interfaces are in the same broadcast domains.

    In the following example, the broadcast domain eng1_data_wired is used.

    user@cs-core-sw01> show vlans eng1_data_wired
  6. Verify the status of VRRP.

    1. On cs-core-sw01, enter:
      user@cs-core-sw01> show vrrp summary
    2. On cs-core-sw02, enter:
      user@cs-core-sw02> show vrrp summary
  7. Verify that GRES is enabled.

    1. On the backup Routing Engine of cs-core-sw01, enter:
      user@cs-core-sw01-1> show system switchover
    2. On the backup Routing Engine of cs-core-sw02, enter:
      user@cs-core-sw02-1> show system switchover
  8. Verify that nonstop active routing is enabled.
    user@cs-core-sw01> show task replication
    Note

    If you have not configured routing yet, you might not see the protocols and their synchronization status listed.

Verifying the High Availability Configuration of the Edge Firewalls

Purpose

Verify the chassis cluster configuration and the status of the control, fabric, and redundant Ethernet interfaces.

Action

  1. Verify the chassis cluster configuration and status.
    user@cs-edge-fw01-node0> show chassis cluster status
  2. Verify the status of the control, fabric, and redundant Ethernet interfaces.
    user@cs-edge-fw01-node0> show chassis cluster interfaces

Verifying the High Availability Configuration of the Edge Routers

Purpose

Verify the status of the MC-LAG interfaces and that the router is forwarding traffic to the SRX chassis cluster correctly.

Action

Perform the following steps on both cs-edge-r01 and cs-edge-r02.

  1. Verify the status of the MC-LAG interfaces.
    user@cs-edge-r01> show interfaces mc-ae
  2. Verify that the router is forwarding traffic to the active firewall node, based on the gratuitous ARP message sent by the active node.
    1. Display route information for 172.16.4.0/24.
      user@cs-edge-r01> show route 172.16.4.0/24
    2. Check the forwarding table to see if the next hop and interface are chosen correctly.
      user@cs-edge-r01> show route forwarding-table destination 172.16.4.0/24
  3. Verify the LACP state of the LAG interfaces.

    Both LAGs should be up, even though only the LAG connecting to the active firewall node forwards traffic.

    1. Show the LACP state for interface ae1.
      user@cs-edge-r01> show lacp interfaces ae1
    2. Show the LACP state for interface ae3.
      user@cs-edge-r01> show lacp interfaces ae3
  4. Verify that nonstop active routing is enabled.
    user@cs-edge-r01> show task replication
    Note

    If you have not configured routing yet, you might not see the protocols and their synchronization status listed.