
Example: Migrating a QFabric System to an EVPN-VXLAN IP Fabric Architecture

 

This configuration example illustrates how to migrate a QFabric System to an EVPN-VXLAN IP Fabric architecture.

It includes the following sections:

Requirements

This document assumes that the QFX3000-M QFabric System, the Ubuntu server, and the MX Series routers acting as external gateways are operational. It also assumes that all hardware and cabling for all devices in the EVPN-VXLAN IP Fabric are procured.

Before you begin:

  • Ensure the links connecting your QFabric System to the EVPN-VXLAN IP Fabric have the bandwidth to support all network traffic during the migration.

  • Ensure the MX routers acting as external gateways each have at least two available spare network traffic ports.

    The MX routers are simultaneously connected to the QFabric System and the EVPN-VXLAN IP Fabric for most of this procedure. The additional ports are needed to maintain both connections.

  • Ensure the leaf devices in the EVPN-VXLAN IP Fabric have access ports available to connect to the server.

Overview and Topology

This procedure starts with an operational QFX3000-M QFabric System. The goal of the procedure is to migrate the QFabric System to an EVPN-VXLAN IP Fabric architecture with minimum disruption to network traffic.

Figure 1 illustrates the pre-migration and post-migration topologies.

Figure 1: Migration Topologies

The initial pre-migration topology:

  • Deploys the QFabric System as a pure Layer 2 fabric.

  • Uses MX routers to perform routing functions. The MX routers can function as the Data Center Gateway in the topology, but can also function in other roles.

  • Assumes each MX router has two available spare ports. The MX routers are simultaneously connected to the QFabric System and the EVPN-VXLAN IP Fabric during this procedure; the extra ports are needed to maintain connections to both fabrics.

The QFX3000-M QFabric System includes the following components:

  • Director Group: Two QFX3100 devices running Junos OS Release 14.1X53-D130.1.

  • Redundant server Node group (RSNG): Two QFX3500 switches running Junos OS Release 14.1X53-D130.1.

  • Interconnect: Two QFX3600 switches running Junos OS Release 14.1X53-D130.1.

  • Control Plane: Two EX4200 switches running Junos OS Release 12.3R12-S10.

Figure 2 illustrates the QFX3000-M QFabric System topology.

Figure 2: QFX3000-M QFabric System Topology

Table 1 summarizes the QFX3000-M QFabric System components.

Table 1: Pre-Migration QFX3000-M QFabric System Components

Hostname     Device Model     Junos OS Version     Role

QFX3100-1    QFX3100          14.1X53-D130.1       Director device
QFX3100-2    QFX3100          14.1X53-D130.1       Director device
QFX3500-1    QFX3500          14.1X53-D130.1       RSNG member
QFX3500-2    QFX3500          14.1X53-D130.1       RSNG member
QFX3600-1    QFX3600          14.1X53-D130.1       Interconnect device
QFX3600-2    QFX3600          14.1X53-D130.1       Interconnect device
EX4200-1     EX4200           12.3R12-S10          Control plane device
EX4200-2     EX4200           12.3R12-S10          Control plane device

The initial steps of this procedure build a traditional EVPN-VXLAN IP Fabric using QFX5100 series switches as leaf devices and QFX10002 switches as spine devices. The IP Fabric initially runs in parallel with the QFabric System. It uses EBGP in the underlay network and EVPN-VXLAN with IBGP in the overlay network.

The EVPN-VXLAN IP Fabric includes the following components:

  • Spine Devices: Two QFX10002 switches running Junos OS Release 17.3R3-S1.

  • Leaf Devices: Two QFX5100 switches running Junos OS Release 17.3R3-S1.

Figure 3 illustrates the EVPN-VXLAN IP Fabric topology at the end of this migration.

Figure 3: EVPN-VXLAN IP Fabric Topology

Table 2 summarizes the EVPN-VXLAN IP Fabric components.

Table 2: Post-Migration EVPN-VXLAN IP Fabric Components

Hostname      Device Model     Junos OS Version     Role

QFX5100-1     QFX5100          17.3R3-S1            Leaf Device
QFX5100-2     QFX5100          17.3R3-S1            Leaf Device
QFX10002-1    QFX10002         17.3R3-S1            Spine Device
QFX10002-2    QFX10002         17.3R3-S1            Spine Device

The next step of the migration is to connect the spine devices in the IP Fabric to the RSNG nodes in the QFabric System. This path is used to divert server traffic from the QFabric System through the EVPN-VXLAN IP Fabric during the migration.

The server is then physically connected to the leaf devices in the IP Fabric and an EVPN LAG is established to connect the server to the leaf devices. Server connections are migrated from the QFabric System to the IP Fabric at this stage of the procedure.

After the server connections are migrated, the external gateways are then migrated to the IP Fabric.

Figure 4 shows an in-progress migration topology at the point where all devices in both fabrics are fully interconnected.

Figure 4: In-progress Migration Topology

The external gateways are two MX480 routers. The external gateway hardware setup and software systems are not altered during the migration.

Table 3 summarizes the external gateways in the topologies.

Table 3: External Gateways

Hostname     Device Model     Junos OS Version     Role

MX480-1      MX480            17.3R3-S3            External Gateway
MX480-2      MX480            17.3R3-S3            External Gateway

The server in this example runs Ubuntu Linux 14.04. The server is connected to the QFabric System at the beginning of the migration and to the EVPN-VXLAN IP Fabric at the end of the migration. The server's system software and hardware are unchanged as a result of the migration.

A summary of the migration tasks:

  • Task 1: Establish a connection between the fabrics by connecting the QFX10002 switches—the spine device switches in the EVPN-VXLAN IP Fabric—to the QFX3500 switches—the redundant server Node group (RSNG) in the QFX3000-M QFabric System.

  • Task 2: Provision the BGP underlay and overlay networks for the EVPN-VXLAN IP Fabric from the QFX10002 switches functioning as spine devices.

  • Task 3: Connect the leaf devices to the spine devices in the EVPN-VXLAN IP Fabric, and provision the BGP underlay and overlay networks from the QFX5100 switches functioning as leaf devices.

  • Task 4: Provision an EVPN LAG between the fabrics. The EVPN LAG is configured between the spine devices in the EVPN-VXLAN IP Fabric and the RSNG in the QFX3000-M QFabric System. Provision the EVPN LAG to carry all VLAN traffic from the QFabric System to the EVPN-VXLAN IP Fabric. Verify that the leaf nodes in the EVPN-VXLAN IP Fabric are receiving MAC addresses from the QFabric System.

  • Task 5: Provision an EVPN LAG from the leaf nodes in the EVPN-VXLAN IP Fabric to the server.

  • Task 6: Physically connect the server to the leaf nodes in the EVPN-VXLAN IP Fabric.

  • Task 7: Provision an EVPN LAG from the spine devices in the EVPN-VXLAN IP Fabric to the MX480 Routers functioning as the external gateways.

  • Task 8: Migrate the external gateways to the EVPN-VXLAN IP Fabric.

  • Task 9: Decommission the EVPN LAG connecting the fabrics.

Task 1: Establish a Physical Connection Between the Fabrics

Step-by-Step Procedure

Physically connect the QFX10002 switches—the spine devices in the EVPN-VXLAN IP Fabric—to the QFX3500 switches—the redundant server Node group (RSNG) in the QFX3000-M QFabric System—to provide a path for traffic to reach the MX480 routers—the gateway devices—during the migration.

Note

Traffic is diverted from the QFabric System to the EVPN-VXLAN IP Fabric during this procedure. Be sure the links connecting the QFabric System to the EVPN-VXLAN IP Fabric have the bandwidth to accommodate the network traffic load during the migration.

Task 2: Provision the BGP Underlay and Overlay Networks on the EVPN-VXLAN IP Fabric, and Map All VLANs in the QFabric System to VNIs in the EVPN-VXLAN IP Fabric

In this task, you configure the following for the EVPN-VXLAN IP Fabric:

  • Provision the EBGP underlay network.

  • Provision the IBGP overlay network, and establish EVPN-VXLAN in the overlay network.

  • Map the VLANs in the QFabric System to VNIs in the EVPN-VXLAN network.

Figure 5 illustrates the EVPN-VXLAN IP Fabric topology.

Figure 5: EVPN-VXLAN IP Fabric Topology
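The complete per-device listings for this task appear in the CLI Quick Configuration and step-by-step sections below but are not reproduced in this extract. The following is a minimal sketch, for QFX10002-1 only, of the kind of configuration that steps 1 through 9 produce. All interface names, IP addresses, autonomous system numbers, the VLAN ID, the VNI, and the route distinguisher and route target are assumptions for illustration, not the values used in the original example.

    # Underlay interfaces and loopback (steps 1 and 2); names and addresses are assumed
    set interfaces xe-0/0/0 unit 0 family inet address 172.16.1.1/30
    set interfaces xe-0/0/1 unit 0 family inet address 172.16.1.5/30
    set interfaces lo0 unit 0 family inet address 192.168.0.1/32
    set routing-options router-id 192.168.0.1
    set routing-options autonomous-system 65000

    # EBGP underlay (step 3); a unique underlay AS per device is assumed
    set protocols bgp group UNDERLAY type external
    set protocols bgp group UNDERLAY local-as 65001
    set protocols bgp group UNDERLAY export EXPORT-LO0
    set protocols bgp group UNDERLAY neighbor 172.16.1.2 peer-as 65003
    set protocols bgp group UNDERLAY neighbor 172.16.1.6 peer-as 65004
    set policy-options policy-statement EXPORT-LO0 term 1 from interface lo0.0
    set policy-options policy-statement EXPORT-LO0 term 1 then accept

    # IBGP EVPN overlay (step 4); a shared overlay AS of 65000 and spine route reflection are assumed
    set protocols bgp group OVERLAY type internal
    set protocols bgp group OVERLAY local-address 192.168.0.1
    set protocols bgp group OVERLAY family evpn signaling
    set protocols bgp group OVERLAY cluster 192.168.0.1
    set protocols bgp group OVERLAY neighbor 192.168.0.3
    set protocols bgp group OVERLAY neighbor 192.168.0.4

    # VLAN-to-VNI mapping (step 5); VLAN 55 and VNI 5555 are assumed
    set vlans VLAN55 vlan-id 55
    set vlans VLAN55 vxlan vni 5555

    # EVPN, switch options, and routing/policy options (steps 6 through 9)
    set protocols evpn encapsulation vxlan
    set protocols evpn extended-vni-list all
    set switch-options vtep-source-interface lo0.0
    set switch-options route-distinguisher 192.168.0.1:1
    set switch-options vrf-target target:65000:1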

CLI Quick Configuration

To quickly configure each device in this procedure:

QFX10002-1:

QFX10002-2:

QFX5100-1:

QFX5100-2:

Step-by-Step Procedure

This section provides the step-by-step procedure to provision the EVPN-VXLAN IP Fabric. It describes each step and illustrates how to perform the step on one of the switches in the EVPN-VXLAN IP Fabric.

See CLI Quick Configuration or the Appendix: Migrating a QFabric System to an EVPN-VXLAN Fabric Quick Configuration Procedure for complete configurations that illustrate how each step is applied on each device.

To perform this procedure:

  1. Configure the interface IP address for each spine and leaf device interface in the EVPN-VXLAN topology:

    QFX10002-1 Example:

  2. Configure the loopback address for each spine and leaf device in the EVPN-VXLAN topology.

    QFX10002-1 Example:

  3. Configure the external BGP (EBGP) underlay network parameters on all spine and leaf devices.

    QFX10002-1 Example:

  4. Configure the internal BGP (IBGP) overlay network parameters on all spine and leaf devices.

    QFX10002-1 Example:

  5. Configure the VLANs and the VLAN-to-VNI mappings on all spine and leaf devices.

    QFX10002-1 Example:

  6. Configure EVPN on all spine and leaf devices.

    QFX10002-1 Example:

  7. Configure the switch options on each spine and leaf device in the topology.

    QFX10002-1 Example:

  8. Configure the routing options on each spine and leaf device in the topology.

    QFX10002-1 Example:

  9. Configure the policy options on each spine and leaf device.

    QFX10002-1 Example:

  10. Repeat this procedure on the other devices—QFX10002-2, QFX5100-1, and QFX5100-2—in the EVPN-VXLAN IP Fabric architecture.

    See CLI Quick Configuration or the Appendix: Migrating a QFabric System to an EVPN-VXLAN Fabric Quick Configuration Procedure for complete configurations illustrating how each step is applied on each switch.

Task 3: Configure EVPN LAGs Between the QFX10002 and QFX3500 Switches

An EVPN LAG between the spine devices in the EVPN-VXLAN IP Fabric—the QFX10002 switches—and the RSNG devices in the QFabric System—the QFX3500 switches—is configured in this task. This EVPN LAG provides connectivity between the fabrics.

Figure 6 illustrates this EVPN LAG.

Figure 6: EVPN LAG—Spine Devices in EVPN-VXLAN IP Fabric to RSNG Devices in QFabric System
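The per-device listings for this task are not reproduced in this extract. As a hedged illustration only, an ESI-LAG of this kind might look like the following; the aggregated Ethernet name (ae0), the member interfaces, the ESI value, and the LACP system ID are assumptions.

    # QFX10002-1 (repeat on QFX10002-2 with its own member interface, using the same ESI and LACP system ID)
    set chassis aggregated-devices ethernet device-count 10
    set interfaces xe-0/0/10 ether-options 802.3ad ae0
    set interfaces ae0 esi 00:11:11:11:11:11:11:11:11:11
    set interfaces ae0 esi all-active
    set interfaces ae0 aggregated-ether-options lacp active
    set interfaces ae0 aggregated-ether-options lacp system-id 00:00:00:11:11:11
    set interfaces ae0 unit 0 family ethernet-switching interface-mode trunk
    set interfaces ae0 unit 0 family ethernet-switching vlan members all

    # QFabric RSNG side, entered on the Director group CLI; member interface names are assumed
    set chassis aggregated-devices ethernet device-count 10
    set interfaces QFX3500-1:xe-0/0/10 ether-options 802.3ad ae0
    set interfaces QFX3500-2:xe-0/0/10 ether-options 802.3ad ae0
    set interfaces ae0 aggregated-ether-options lacp active
    set interfaces ae0 unit 0 family ethernet-switching port-mode trunk
    set interfaces ae0 unit 0 family ethernet-switching vlan members all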

CLI Quick Configuration

To quickly configure each device in this procedure:

QFX10002-1:

QFX10002-2:

QFX3500-1 or QFX3500-2 (Configure on QFabric Director Group):

Step-by-Step Procedure

To provision the EVPN LAG connecting the EVPN-VXLAN IP Fabric to the QFabric System:

  1. Configure the interfaces on the RSNG nodes in the QFabric System into an aggregated Ethernet interface:

    QFX3500-1 or QFX3500-2 (Master QFabric Director Device):

  2. Configure the spine device interfaces in the EVPN-VXLAN IP Fabric into an aggregated Ethernet interface:

    QFX10002-1:

    QFX10002-2:

  3. Configure the spine device interfaces in the EVPN-VXLAN IP Fabric into the same Ethernet segment with all active forwarding.

    QFX10002-1:

    QFX10002-2:

  4. Enable LACP on the RSNG nodes in the QFabric System and on the spine devices in the EVPN-VXLAN IP Fabric.

    QFX10002-1:

    QFX10002-2:

    QFX3500-1 or QFX3500-2 (Master QFabric Director Device):

  5. Configure the aggregated Ethernet interfaces as trunk ports, and enable the VLANs for the aggregated Ethernet interfaces.

    QFX10002-1:

    QFX10002-2:

    QFX3500-1 or QFX3500-2 (Master QFabric Director Device):

Results

Enter the show ethernet-switching table command to confirm that the leaf devices in the EVPN-VXLAN IP Fabric are receiving MAC addresses from the QFabric System after committing this configuration.

The Ubuntu server MAC address is 00:25:90:ea:2f:2d. The show ethernet-switching table output shows that this MAC address has been learned on both leaf devices, which confirms that MAC addresses are being passed from the QFabric System to the EVPN-VXLAN IP Fabric.

Enter the ifconfig bond0 command to confirm the MAC address of the server, if needed:

Task 4: Provision Server-facing EVPN LAGs on the Leaf Nodes

This task prepares the EVPN LAGs between the leaf devices in the EVPN-VXLAN IP Fabric and the Ubuntu server.

Caution

Do not physically connect the server to the EVPN-VXLAN IP Fabric at this point of the procedure.

The steps in this procedure were carefully selected to minimize unexpected behavior and network downtime, and performing the steps out of order creates complications with both objectives.

The server is connected to the EVPN-VXLAN IP Fabric in task 6. Figure 7 illustrates the final topology.

Figure 7: EVPN LAG Connecting Server to Leaf Switches
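The per-device listings for this task are not reproduced in this extract. The verification section later in this document uses ae7 as the server-facing aggregated Ethernet interface, so the following hedged sketch uses that name; the ESI value, the LACP system ID, and the VLAN name are assumptions, and ge-0/0/25 is the QFX5100-1 member interface referenced in task 6.

    # QFX5100-1 (repeat on QFX5100-2 with its own member interface, using the same ESI and LACP system ID)
    set interfaces ge-0/0/25 ether-options 802.3ad ae7
    set interfaces ae7 esi 00:22:22:22:22:22:22:22:22:22
    set interfaces ae7 esi all-active
    set interfaces ae7 aggregated-ether-options lacp active
    set interfaces ae7 aggregated-ether-options lacp system-id 00:00:00:22:22:22
    set interfaces ae7 unit 0 family ethernet-switching interface-mode access
    set interfaces ae7 unit 0 family ethernet-switching vlan members VLAN55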

CLI Quick Configuration

To quickly configure each device in this procedure:

QFX5100-1:

QFX5100-2:

Step-by-Step Procedure

To provision the server-facing EVPN LAGs from the leaf devices in the EVPN-VXLAN IP Fabric:

  1. Configure the interfaces on the leaf devices in the EVPN-VXLAN IP Fabric into aggregated Ethernet interfaces:

    QFX5100-1:

    QFX5100-2:

  2. Configure the aggregated Ethernet interfaces into an Ethernet Segment. Enable all active forwarding for the links in the Ethernet segment.

    All links on both devices must be in the same ESI.

    QFX5100-1:

    QFX5100-2:

  3. Enable LACP.

    QFX5100-1:

    QFX5100-2:

  4. Configure the interface as an access interface, and associate the member VLAN with the interface.

    QFX5100-1:

    QFX5100-2:

Task 5: (Required When Server Does Not Have Bond Interface Configuration) Configure the Server Bond Interface

In most cases, the server already has a bond interface configuration and this task can be skipped.

Follow this procedure to configure the server bond interface only when the server does not have a configured bond interface.
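The bond configuration example referenced in the following procedure is not reproduced in this extract. The following is a minimal sketch of what an LACP bond might look like in /etc/network/interfaces on Ubuntu 14.04, which uses the ifenslave package. The NIC names p6p1 and p6p2, the server address 10.25.55.100, and the gateway VIP 10.25.55.1 come from this example; everything else is an assumption.

    # /etc/network/interfaces (sketch only)
    auto p6p1
    iface p6p1 inet manual
        bond-master bond0

    auto p6p2
    iface p6p2 inet manual
        bond-master bond0

    auto bond0
    iface bond0 inet static
        address 10.25.55.100
        netmask 255.255.255.0
        gateway 10.25.55.1
        bond-slaves p6p1 p6p2
        bond-mode 802.3ad
        bond-miimon 100
        bond-lacp-rate fast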

Step-by-Step Procedure

To configure a server bond interface:

  1. Log into the Ubuntu server.
  2. Enter the sudo nano /etc/network/interfaces command:

    Ubuntu Server

  3. Apply the bond server configuration:

    Ubuntu Server Bond Configuration Example

    Note

    This example configuration illustrates one method of enabling a bond interface using an Ubuntu server. The configuration doesn’t apply to other servers. For information about bond configurations on other servers, see the support resources for your server.

Task 6: Migrate the Server to the EVPN-VXLAN IP Fabric

Both fabrics are running in parallel and interconnected at this point of the procedure.

Figure 8 illustrates this mid-migration architecture.

Figure 8: Mid-Migration Architecture

The server is migrated from the QFabric System to the EVPN-VXLAN IP Fabric by completing the following task 6 procedures.

Task 6-1: Provision the EVPN LAG Connecting the Leaf Switches in the EVPN-VXLAN IP Fabric to the Server

Step-by-Step Procedure

This step was already performed in task 4. See task 4 if this EVPN LAG has not been provisioned.

Figure 9 illustrates this EVPN LAG.

Figure 9: EVPN LAG Connecting Server to Leaf Switches

Task 6-2: Disable the Server-Facing Interface on the Backup RSNG Node

Step-by-Step Procedure

The server-facing interface on the backup RSNG node—QFX3500-2—is disabled during this task.

To disable this server-facing interface, log in to the master director device and enter the set interfaces QFX3500-2:ge-0/0/24 disable statement:

QFX3100-1 (master director):
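The configuration listing is not reproduced here. Based on the statement named above, the change entered on the master director device would look like this:

    set interfaces QFX3500-2:ge-0/0/24 disable
    commit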

Figure 10 illustrates the server connections after this task is completed.

Figure 10: Disabling the Server-Facing Interface on the Backup RSNG Node

Task 6-3: Disconnect Cable From Server to Backup QFX3500 Switch and Connect Cable to Primary QFX5100 Switch

One leaf device in the EVPN-VXLAN IP Fabric is physically cabled to the server during this task.

Step-by-Step Procedure

To perform this procedure:

Note

The operating system of the migrating server affects LAG behavior. Pay particular attention during this task to avoid creating Layer 2 traffic loops during the migration.

  1. Disconnect the cable from the interface on the backup QFX3500 switch that connected to the server. This is the interface that was disabled in task 6-2.

    This procedure assumes you are using the same cable to connect the server to the EVPN-VXLAN IP Fabric. If you are using the same cable, there is no need to disconnect the cable on the server side of the link—p6p2 in Figure 11.

    If you are using a new cable to connect the server to the EVPN-VXLAN IP Fabric, also disconnect the cable from the interface on the server side of the link.

  2. Select the first leaf device—QFX5100-1 is used in this example—in the EVPN-VXLAN IP Fabric to connect to the server.

    To prevent traffic loops, disable the leaf device interface that is connecting to the server before physically interconnecting the devices.

    QFX5100-1

  3. Physically cable the interface that was disabled in step 2—interface ge-0/0/25 on QFX5100-1—to the server interface—interface p6p2 in Figure 11—that connected the server to the backup RSNG node.

    Figure 11 illustrates the server connections after this task is completed.

    Figure 11: Server Connections—First Leaf Device Connected

Task 6-4: Disable Server-Facing Interface on Primary RSNG Node

Step-by-Step Procedure

The server-facing interface on the primary RSNG node—QFX3500-1—is disabled during this task.

To disable this server-facing interface, log in to the master director device and enter the set interfaces QFX3500-1:ge-0/0/24 disable statement:

Note

The server is not connected to an active switch interface at this point of the procedure, and is therefore not passing traffic.

QFX3100-1 (master director):

Figure 12 illustrates the server connections after this task is completed.

Figure 12: Server Connections—Primary RSNG Node to Server Connection Disabled

Task 6-5: Disconnect Server to Primary QFX3500 Switch Cable and Reconnect It to Backup QFX5100 Switch

Step-by-Step Procedure

To perform this procedure:

  1. Place the leaf device interface that is already cabled to the server—ge-0/0/25 on QFX5100-1—into the administratively up state. This interface was administratively disabled during task 6-3.

    QFX5100-1

    The server is now connected to an active switch interface, and restarts passing traffic after this step is committed.

  2. Physically disconnect the cable connecting the primary RSNG node—QFX3500-1—to the server. Server interface p6p1 is disconnected in this step.
  3. Physically cable the interface on the other leaf device—QFX5100-2—to the server using the server interface—p6p1—that was disconnected in the previous step.

    Figure 13 illustrates the server connections after this task is completed.

    Figure 13: Server Connections—Second Leaf Device Connected

    The server migration configuration process is complete.

Task 6-6: Verify that the EVPN LAG Connecting Leaf Devices in the EVPN-VXLAN IP Fabric to the Server is Established

Step-by-Step Procedure

After completing the server migration steps, wait at least 3 seconds to allow the server to re-establish LACP and the EVPN LAG between the server and the leaf devices in the EVPN-VXLAN IP Fabric to become active.

Once a server has been migrated, use the following commands to check whether the EVPN LAG is fully established. The EVPN LAG interfaces on both leaf devices—QFX5100-1 and QFX5100-2—should be in the up up state in the show interfaces terse command output.

The LACP state for the leaf device interfaces connecting to the server should display as Collecting and Distributing in the show lacp interfaces command output.

Task 6-7: (Optional) Generate Gratuitous ARP Messages from the Server for MAC Address Learning

Step-by-Step Procedure

You can accelerate the MAC address learning process across the network by generating gratuitous ARP messages from the server, if desired.

This task is optional but often helpful. The devices in the network will relearn the MAC address of the server if this process is skipped, but at a slower rate.

To generate the gratuitous ARP message from the server:

Server:
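The original command listing is not reproduced here. One common way to send gratuitous ARP messages from a Linux server is the arping utility from the iputils package; this sketch assumes that tool and the bond0 interface used in this example:

    # Send 3 unsolicited (gratuitous) ARP messages for the server address out of bond0
    sudo arping -U -c 3 -I bond0 10.25.55.100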

where:

  • 10.25.55.100 is your server’s IP address.

  • -c count is the number of gratuitous ARP packets to send. This value can be customized, and is set to 3 in the sample output.

Task 6-8: Repeat Tasks 6-1 through 6-7 for All Other Servers

Step-by-Step Procedure

Repeat tasks 6-1 through 6-7 for all other servers that are migrating from the QFabric System to the EVPN-VXLAN IP Fabric.

Task 7: Provision the EVPN LAG Connecting the Spine Devices in the EVPN-VXLAN IP Fabric to the Gateway Devices

This task provisions an EVPN LAG that connects the Spine Devices in the EVPN-VXLAN IP Fabric—QFX10002-1 and QFX10002-2—to the external gateways—MX480-1 and MX480-2.

This task establishes a direct path from the EVPN-VXLAN IP Fabric to the MX480 routers. Network traffic can now be sent through the EVPN-VXLAN IP Fabric to the external gateways, completely bypassing the QFabric System.

Note

The spine devices—the QFX10002 series switches—and the external gateways—the MX480 routers—are connected using two additional interfaces on each MX480 router. The MX480 routers are also connected to the QFabric System at this point of the procedure.

This procedure therefore disables the connections between the spine devices and the external gateways to avoid traffic loops. The connections are activated later, during the external gateway migration in task 8.

Task 7-1: Disable the LAG Interfaces Connecting the External Gateways to the Spine Devices

Step-by-Step Procedure

This task disables the traditional LAG interfaces connecting the external gateways to the spine devices. The LAG interfaces are administratively down after this procedure.

Figure 14 illustrates the spine device to external gateway device connections at this point of the procedure.

Figure 14: Spine Device to External Gateway Connections

Disable the interfaces on the external gateways that connect to the spine devices in the EVPN-VXLAN IP Fabric.

If the interfaces are configured into LAG interfaces, disable the LAG interfaces:

MX480-1

MX480-2

Note

The use case in this document assumes that ae3 and ae5 are configured as LAG interfaces on the external gateways. The instructions for disabling interfaces that are not in LAG bundles are provided for illustration only.

If the interfaces were not configured into LAG bundles, disable the interfaces:

MX480-1 Illustration

MX480-2 Illustration
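The device-specific listings above are not reproduced in this extract. As a hedged illustration, and assuming the LAG names ae3 on MX480-1 and ae5 on MX480-2 from the note above, disabling the LAG interfaces would look like this:

    # MX480-1
    set interfaces ae3 disable
    commit

    # MX480-2
    set interfaces ae5 disable
    commit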

Task 7-2: Provision the EVPN LAG Between the Spine Devices and the External Gateways

CLI Quick Configuration

To quickly configure each device in this procedure:

QFX10002-1:

QFX10002-2:

MX480-1:

MX480-2:

Step-by-Step Procedure

To provision the EVPN LAG connecting the spine devices in the EVPN-VXLAN IP Fabric to the gateway devices (a consolidated configuration sketch follows these steps):

  1. On the spine devices in the EVPN-VXLAN IP Fabric, configure the interfaces that connect to the primary external gateway into aggregated Ethernet bundles.

    The same aggregated Ethernet interface name—ae3—is used on both spine devices in this topology to simplify network management.

    QFX10002-1:

    QFX10002-2:

  2. On the primary external gateway, configure the interfaces that connect to the spine devices in the EVPN-VXLAN IP Fabric into the same aggregated Ethernet bundle.

    The same aggregated Ethernet interface name—ae3—is also used on the connection from the master external gateway to the spine devices in the EVPN-VXLAN IP Fabric to simplify network management.

    MX480-1:

  3. Configure the links that connect to the primary external gateway on the spine devices in the EVPN-VXLAN IP Fabric into the same Ethernet segment, and configure the links as all active.

    QFX10002-1:

    QFX10002-2:

  4. Enable LACP on both ends of the spine device to primary external gateway connections.

    QFX10002-1:

    QFX10002-2:

    MX480-1:

  5. Configure both sides of the spine device to primary external gateway connections as trunk ports, and configure the VLAN memberships for each aggregated Ethernet interface on these links.

    QFX10002-1:

    QFX10002-2:

    MX480-1:

  6. On the spine devices in the EVPN-VXLAN IP Fabric, configure the interfaces that connect to the backup external gateway into aggregated Ethernet bundles.

    The same aggregated Ethernet interface name—ae5—is used on all devices for these links to simplify network management.

    QFX10002-1:

    QFX10002-2:

  7. On the backup external gateway, configure the interfaces that connect to the spine devices in the EVPN-VXLAN IP Fabric into the same aggregated Ethernet bundle.

    The same aggregated Ethernet interface name—ae5—is also used on this end of the link to simplify network management.

    MX480-2:

  8. Configure the links that connect to the backup external gateway on the spine devices in the EVPN-VXLAN IP Fabric into the same Ethernet segment, and configure the links as all active.

    QFX10002-1:

    QFX10002-2:

  9. Enable LACP on both ends of the spine device to backup external gateway connections.

    QFX10002-1:

    QFX10002-2:

    MX480-2:

  10. Configure both sides of the spine device to backup external gateway connections as trunk ports, and configure the VLAN memberships for each aggregated Ethernet interface on these links.

    QFX10002-1:

    QFX10002-2:

    MX480-2:

  11. Enable VRRP on the external gateways:

    MX480-1:

    MX480-2:
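The per-device listings for this task are not reproduced in this extract. The following hedged sketch shows how steps 1 through 5 and step 11 might look for the primary external gateway path (ae3) and for VRRP on MX480-1. The ae3 name, the VIP 10.25.55.1, irb.55 and VRRP group 55, and the priorities 200 and 100 come from this example; the member interface names, the ESI value, the LACP system ID, the VLAN ID, and the gateway's physical IRB address are assumptions.

    # QFX10002-1 (repeat on QFX10002-2 with its own member interface, using the same ESI and LACP system ID)
    set interfaces xe-0/0/20 ether-options 802.3ad ae3
    set interfaces ae3 esi 00:33:33:33:33:33:33:33:33:33
    set interfaces ae3 esi all-active
    set interfaces ae3 aggregated-ether-options lacp active
    set interfaces ae3 aggregated-ether-options lacp system-id 00:00:00:33:33:33
    set interfaces ae3 unit 0 family ethernet-switching interface-mode trunk
    set interfaces ae3 unit 0 family ethernet-switching vlan members VLAN55

    # MX480-1: LAG toward the spine devices, bridging for VLAN 55, and VRRP on irb.55
    set chassis aggregated-devices ethernet device-count 10
    set interfaces xe-2/0/0 gigether-options 802.3ad ae3
    set interfaces xe-2/0/1 gigether-options 802.3ad ae3
    set interfaces ae3 flexible-vlan-tagging
    set interfaces ae3 encapsulation flexible-ethernet-services
    set interfaces ae3 aggregated-ether-options lacp active
    set interfaces ae3 unit 0 family bridge interface-mode trunk
    set interfaces ae3 unit 0 family bridge vlan-id-list 55
    set bridge-domains BD55 vlan-id 55
    set bridge-domains BD55 routing-interface irb.55
    set interfaces irb unit 55 family inet address 10.25.55.2/24 vrrp-group 55 virtual-address 10.25.55.1
    set interfaces irb unit 55 family inet address 10.25.55.2/24 vrrp-group 55 priority 200
    set interfaces irb unit 55 family inet address 10.25.55.2/24 vrrp-group 55 accept-data

MX480-2 mirrors this configuration on ae5, with its own member interfaces and IRB address and with VRRP priority 100.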

Task 8: Migrate External Gateway

The final step of the migration is to move the path to the external gateway from the QFabric System to the EVPN-VXLAN IP Fabric.

Figure 15 illustrates the migration topology at this point of the procedure.

Figure 15: Migration Topology Before External Gateway Migration

Complete the following tasks to migrate the external gateway:

Task 8-1: Provision an EVPN LAG Between the Spine Devices in the EVPN-VXLAN IP Fabric and the External Gateways

Step-by-Step Procedure

This task was performed in task 7-2.

Task 8-2: Disable the Link Connecting the Backup Gateway Device to the RSNG Nodes

Step-by-Step Procedure

The member link connecting the backup VRRP member—MX480-2—to the RSNG nodes is disabled in this step. The member link moves to the administratively down state after this configuration is committed.

To disable the member link:

MX480-2:
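The listing is not reproduced here. Because the name of the QFabric-facing interface on MX480-2 is not given in this extract, the following sketch uses a hypothetical member interface name:

    # xe-1/0/0 is a hypothetical name for the MX480-2 member link toward the RSNG nodes
    set interfaces xe-1/0/0 disable
    commit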

Figure 16 illustrates the external gateway to RSNG node connections after this task.

Figure 16: External Gateway to RSNG Node Connections

Task 8-3: Uncable the Backup External Gateway to RSNG Nodes. Activate EVPN LAG from External Gateways to Spine Devices

Step-by-Step Procedure

The connection between the backup external gateway and the RSNG nodes in the QFabric System is disconnected in this task. The EVPN LAG connecting the spine devices in the EVPN-VXLAN IP Fabric to the gateway devices is also activated.

Figure 17 illustrates the topology after this task is completed.

Figure 17: External Gateway Migration to EVPN-VXLAN Topology

To perform this procedure:

  1. Reactivate the EVPN LAG from the backup external gateway (a configuration sketch follows these steps).

    MX480-2:

  2. Disconnect the cables connecting the backup external gateway—MX480-2—to the RSNG nodes—QFX3500-1 and QFX3500-2.
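A sketch of step 1, assuming the spine-facing LAG on MX480-2 is ae5 as in task 7:

    # MX480-2: remove the disable statement applied in task 7-1
    delete interfaces ae5 disable
    commit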

Task 8-4: Change VRRP Mastership

Step-by-Step Procedure

The VRRP mastership is changed in this step to make the backup external gateway the master VRRP router.

Before this task, the EVPN LAG from the spine devices in the EVPN-VXLAN IP Fabric connects only to the backup external gateway. The purpose of this task is to make the gateway that is connected to the EVPN-VXLAN IP Fabric the VRRP master router.

Figure 18 illustrates the topology after this task is completed.

Figure 18: External Gateway Migration to EVPN-VXLAN Topology—Post-VRRP Mastership Change

MX480-2:
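The listing is not reproduced here. One way to make MX480-2 the VRRP master is to raise its priority for group 55 on irb.55 above the 200 configured on MX480-1; the physical IRB address and the new priority value in this sketch are assumptions:

    # 10.25.55.3/24 is an assumed MX480-2 irb.55 address; 250 is an assumed new priority
    set interfaces irb unit 55 family inet address 10.25.55.3/24 vrrp-group 55 priority 250
    commit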

Before the migration, the master external gateway—MX480-1—was configured at VRRP priority 200 and the backup external gateway—MX480-2—was configured at VRRP priority 100. These configuration settings were made in task 7-2.

The show vrrp command output now reflects the VRRP mastership change. Note that the VR State for group 55 on irb.55—the interface and group whose priority was changed in this step—is master.

Task 8-5: Uncable the Master External Gateway to RSNG Nodes. Activate the EVPN LAG on the Spine Device

Step-by-Step Procedure

The cable between the master external gateway and the RSNG in the QFabric System is physically disconnected at this point of the procedure.

The external gateways are fully migrated from the QFabric System to the EVPN-VXLAN IP Fabric after this task is finished.

Figure 19 illustrates the migration topology after this step is completed.

Figure 19: External Gateway Migration to EVPN-VXLAN Topology—Connection to Master RSNG Node Removed

To perform this procedure (a configuration sketch follows these steps):

  1. Disable the aggregated Ethernet interface connecting the RSNG nodes in the QFabric System to the current backup external gateway:

    MX480-1:

  2. Re-enable the aggregated Ethernet interface connecting the spine devices in the EVPN-VXLAN IP Fabric to the current backup external gateway.

    MX480-1:

  3. You can physically uncable the master RSNG node to master external gateway connection at this point of the procedure, if desired.
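A hedged sketch of steps 1 and 2 on MX480-1. The QFabric-facing LAG name is not given in this extract, so ae2 is a hypothetical name; ae3 is the spine-facing LAG from task 7:

    # ae2 is a hypothetical name for the MX480-1 LAG toward the RSNG nodes
    set interfaces ae2 disable
    # Re-enable the LAG toward the spine devices
    delete interfaces ae3 disable
    commit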

Task 9: Decommission the EVPN LAG Connecting the QFabric System and the EVPN-VXLAN IP Fabric

All server traffic now runs through the EVPN-VXLAN IP Fabric. The EVPN LAG established in task 3, which connects the spine devices in the EVPN-VXLAN IP Fabric to the QFabric System, can now be disabled.

Figure 20 illustrates the topology after this task is complete.

Figure 20: Disabling the EVPN LAG Connecting the EVPN-VXLAN IP Fabric to the QFabric System

To decommission the EVPN LAG connecting the EVPN-VXLAN IP Fabric and the QFabric System (a configuration sketch follows these steps):

  1. Disable the EVPN LAG on the RSNG nodes in the QFabric System:

    QFX3100-1 (master director):

  2. Disable the EVPN LAG on the spine devices in the EVPN-VXLAN IP Fabric:

    QFX10002-1:

    QFX10002-2:

  3. Uncable the QFabric System to EVPN-VXLAN IP Fabric connection, if desired.
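A hedged sketch of the decommissioning steps. The aggregated Ethernet names for the fabric interconnect are not given in this extract, so ae0 is a hypothetical name on both sides:

    # QFX3100-1 (master director): disable the RSNG side of the interconnect EVPN LAG
    set interfaces ae0 disable
    commit

    # QFX10002-1 and QFX10002-2: disable the spine side of the interconnect EVPN LAG
    set interfaces ae0 disable
    commit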

The migration is complete.

Figure 21 illustrates the post-migration topology:

Figure 21: Post-Migration Topology

Verification

The migration should be complete at this point of the procedure.

Figure 22 illustrates the post-migration topology plus a device—the QFX5100-3—that has been added to illustrate how the server can reach a device on the opposite side of the external gateways. This illustration is included as a reference for the verification tasks in this section.

Figure 22: Post-Migration EVPN-VXLAN IP Fabric with New Device

We recommend running the following verification procedures to validate that the EVPN-VXLAN IP Fabric is operating properly.

Check 1-1: IP Connectivity from the Server

Purpose

Verify that the server has IP reachability to the VIP (Virtual IP Address).

The Virtual IP Address was configured as 10.25.55.1 in task 7-2.

Action

Enter the route -n and ip route commands from the Ubuntu server:

Meaning

The output confirms that the Ubuntu server has routes to 10.25.55.0/24 and 172.25.0.0/16 through interface bond0.

The Virtual IP (VIP) address of the external gateway—10.25.55.1—is in the 10.25.55.0/24 subnet.

The subnet outside of the external gateway—172.25.0.0/16—was configured in task 2 and a device from the subnet is represented by QFX5100-3 in Figure 22.

The output verifies that the server has full connectivity through the EVPN-VXLAN IP Fabric.

Check 1-2: Server MAC Connectivity

Purpose

Verify that the server has MAC connectivity to the post-migration EVPN-VXLAN IP Fabric.

Action

Enter the arp -a command on the server to generate a list of Layer 2 connections.

Meaning

The output shows that the server has learned the MAC addresses of the master and backup external gateways, as well as the VIP address of the gateways, through interface bond0.

Check 2-1: Server-Facing EVPN LAG

Purpose

Verify that the EVPN LAG connecting the server to the leaf devices in the EVPN-VXLAN IP Fabric is active.

Action

Enter the show interfaces terse | match ae7 command on each leaf device.

Meaning

The EVPN LAG interfaces are in the up up state, confirming that the interfaces are operational.

Check 2-2: LACP in Server-Facing EVPN LAG

Purpose

Verify that LACP is running in the EVPN LAG connecting the server to the leaf devices in the EVPN-VXLAN IP Fabric.

Action

Enter the show lacp interfaces ae7 command on each leaf device.

Meaning

The output confirms that LACP is active—the Activity state is Active—on every interface in the EVPN LAG.

Check 2-3: EVPN LAG on External Gateway

Purpose

Verify that the EVPN LAGs connecting the spine devices in the EVPN-VXLAN IP Fabric to the external gateways are active, and that LACP is operational on all links.

Action

Enter the show interfaces terse and show lacp interfaces commands to view the EVPN LAGs.

Meaning

The EVPN LAG interfaces are in the up up state, confirming that the interfaces are operational.

The output also confirms that LACP is active—the Activity state is Active—on every interface in the EVPN LAG.

Check 3-1: VRRP Mastership Confirmation

Purpose

Confirm that the backup external gateway is the master VRRP router.

Action

Enter the show vrrp command from the backup external gateway.

Meaning

Before the migration, the master external gateway—MX480-1—was configured at VRRP priority 200 and the backup external gateway—MX480-2—was configured at VRRP priority 100. These configuration settings were made in task 7-2.

The show vrrp command output now reflects the VRRP mastership change. Note that the VR State for group 55 on irb.55—the interface and group whose priority was changed in task 8-4—is master.

Check 3-2: VRRP MAC Learning by Spine Devices in EVPN-VXLAN IP Fabric

Purpose

Confirm that the spine devices in the EVPN-VXLAN IP Fabric are learning the virtual MAC addresses.

Action

MX480-2 (master VRRP member):

Meaning

The spine device—QFX10002-2—learns the MAC address of the external gateway through ae3.

This output provides another method of confirming that the gateway-facing EVPN LAG is operational.