
How to Configure Remote Port Mirroring for EVPN-VXLAN Fabrics

 

About This Network Configuration Example

This network configuration example (NCE) shows how to configure remote port mirroring for EVPN-VXLAN fabrics. In this configuration, port mirroring copies a traffic flow and sends the copy to a remote monitoring station over a GRE tunnel. Comparable to ERSPAN, remote port mirroring of encapsulated tenant traffic is often used in data center environments for troubleshooting and monitoring.

We demonstrate how to introduce remote port mirroring at lean spine switches and at the edge-routed leaf devices in an EVPN-VXLAN fabric.

Use Case Overview

Remote Port Mirroring and EVPN-VXLAN Fabrics

Remote port mirroring copies a traffic flow and delivers the mirrored traffic to one or more mirroring destination hosts. A traffic flow is identified by the source port or ports it travels through. The mirroring destination hosts are connected to a switch that is part of the same fabric as the source switch.

For EVPN-VXLAN IP fabrics, the mirrored traffic flow at the source switch is encapsulated in generic routing encapsulation (GRE) and delivered through the underlay IP fabric to the destination host IP address. You can use this host as a monitoring station to remotely view the traffic flow.

Remote port mirroring with a GRE tunnel is a perfect match for IP fabrics like EVPN-VXLAN because you can connect the monitoring station to any of the nodes of the fabric by advertising the host subnet into the EBGP underlay. Additionally, the destination switch that is connected to the monitoring station doesn’t have to perform GRE decapsulation before it delivers the GRE stream to the monitoring station. The GRE tunnel can cross multiple intermediate IP nodes or be sent outside of the fabric.

Port Mirroring Methods: Analyzer Instance and Port Mirroring Instance

There are two methods for port mirroring: analyzer instance and port mirroring instance. Each approach offers different advantages and is compatible with different architectures. In both cases, the GRE tunnel used for mirroring is created automatically, so there is no need to configure one explicitly.

Analyzer instance is port mirroring without any filtering criteria. This is the simplest form of traffic mirroring to implement. You only need to specify the interface that is the source of the mirrored traffic, whether the traffic to be mirrored on that interface is egress, ingress, or both, and the IP address for the destination of the mirrored traffic. There is no need to specify a firewall filter.
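As a sketch of what this looks like in Junos OS, the following statements define an analyzer instance that mirrors both directions of one interface toward a remote destination. The instance name, interface, and destination address are hypothetical placeholders, not values from a specific example in this NCE:

```
set forwarding-options analyzer REMOTE-AN input ingress interface et-0/0/10.0
set forwarding-options analyzer REMOTE-AN input egress interface et-0/0/10.0
set forwarding-options analyzer REMOTE-AN output ip-address 172.20.1.2
```

Because the output is an IP address rather than a local port, the switch encapsulates the mirrored packets in GRE and forwards them through the underlay; no explicit tunnel configuration is needed.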

Because analyzer instance is not tenant-specific, it is the best approach when you do not have information about the tenant stream.

Port mirroring instance uses tenant traffic-specific criteria to mirror traffic. The administrator of the fabric decides which specific tenant source IP address or TCP port on the interface will be mirrored. Use port mirroring instance when specific traffic needs to be mirrored.
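In sketch form, a port mirroring instance combines a mirroring destination with a firewall filter that selects the tenant traffic. All names and addresses below are hypothetical placeholders:

```
set forwarding-options port-mirroring instance PM-TENANT output ip-address 172.20.1.2
set firewall family inet filter SELECT-TENANT term match from source-address 10.1.1.10/32
set firewall family inet filter SELECT-TENANT term match then port-mirror-instance PM-TENANT
set firewall family inet filter SELECT-TENANT term match then accept
set firewall family inet filter SELECT-TENANT term pass then accept
```

The final accept-all term matters: without it, traffic that does not match the mirroring term would be dropped by the filter's implicit discard.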

Juniper Networks supports three main data center architectures. For each of them, we recommend the following approach to remote port mirroring:

  • Edge-Routed Bridging (ERB): Port mirroring instance

  • Centrally-Routed Bridging (CRB): Analyzer instance

  • Bridged Overlay: Analyzer instance

Technical Overview

Remote Port Mirroring For an EVPN-VXLAN ERB Fabric

This NCE covers remote port mirroring for an ERB architecture with a lean spine.

In an EVPN-VXLAN ERB fabric with a lean spine, the tenant-specific inner flow can’t be selectively sent into the tunnel. Only outer header-based filtering can be performed at the lean spine devices. For example, a spine switch can filter the packets coming from a specific source IPv4 loopback address and send them to the host connected to another leaf device using the GRE tunnel.

Remote port mirroring with GRE for tenant-specific flow is supported on leaf devices in this architecture. You can implement the remote port mirroring filter at the integrated routing and bridging (IRB) virtual gateway address (VGA) interface and send the mirrored traffic to the remote host through the GRE tunnel.

Remote Port Mirroring With QFX Series Switches

In the following examples, we demonstrate different ways to use remote port mirroring for an ERB architecture with a lean QFX10002 spine and QFX5100, QFX5110, and QFX5120 switches as leaf devices. QFX5110 and QFX5120 switches perform well as leaf devices in an ERB reference data center architecture because they can perform inter-virtual network identifier (VNI) routing.

Table 1 shows port mirroring instance type support for various use cases when using QFX10002 and QFX10008 switches. Table 2 shows the same when using QFX5110 and QFX5120 switches.

Table 1: Remote Port Mirroring on QFX10002 and QFX10008

Use Case                   | Sub Use Case                           | Analyzer Instance                                | Port Mirroring Instance
Spine providing IP transit | Ingress from leaf to spine             | Supported                                        | Supported
Spine providing IP transit | Egress from spine to leaf              | Supported                                        | Supported
Spine in a CRB scenario    | Ingress of IRB that terminates traffic | Only ge, xe, et, and ae interfaces are supported | Not supported
Spine in a CRB scenario    | Egress of IRB that terminates traffic  | Only ge, xe, et, and ae interfaces are supported | Not supported
Border encapsulation       | Access ingress of ESI-LAG              | Supported                                        | Supported
Border encapsulation       | Access egress of ESI-LAG               | Supported                                        | Supported
Border decapsulation       | Ingress of the fabric                  | Not supported                                    | Not supported
Border decapsulation       | Egress of the fabric                   | Not supported                                    | Not supported

Note

If a QFX10000 Series switch originates a VXLAN tunnel with tenant traffic, outgoing analyzer instance and port mirroring instance are not supported, but transit VXLAN tunnel analyzer instance and port mirroring instance are supported.

Table 2: Remote Port Mirroring on QFX5110 and QFX5120

Use Case                        | Sub Use Case                           | Analyzer Instance                                | Port Mirroring Instance
Lean spine providing IP transit | Ingress from leaf to spine             | Supported                                        | Supported
Lean spine providing IP transit | Egress from spine to leaf              | Supported                                        | Not supported
Leaf in an ERB scenario         | Ingress of IRB that terminates traffic | Only ge, xe, et, and ae interfaces are supported | Supported
Leaf in an ERB scenario         | Egress of IRB that terminates traffic  | Only ge, xe, et, and ae interfaces are supported | Not supported
Border encapsulation            | Access ingress of ESI-LAG              | Supported                                        | Supported
Border encapsulation            | Access egress of ESI-LAG               | Supported                                        | Not supported
Border decapsulation            | Ingress of the fabric                  | Not supported                                    | Not supported
Border decapsulation            | Egress of the fabric                   | Not supported                                    | Not supported

In these examples, we use QFX5110 and QFX5120 switches running Junos OS Release 18.4R2. The following numbers of analyzer and port mirroring sessions are supported on QFX5110 and QFX5120 switches running this release:

  • Default analyzer sessions: 4

  • Port mirroring sessions: 3 port mirroring sessions + 1 global port mirroring session

The examples in this NCE cover three use cases: remote port mirroring through a spine device, remote port mirroring through a leaf device, and remote port mirroring at an ESI-LAG interface.

Use the procedures described in these examples to enable remote port mirroring in your particular use case.

Configuration Example: Ingress/Egress Solution for an EVPN-VXLAN ERB Fabric Spine Device

Requirements

This example uses the following devices running Junos OS Release 18.4R2:

  • Two QFX10002 switches as spine devices.

  • One QFX10002 switch as a border leaf device.

  • Three QFX5110 switches as leaf devices.

  • Two QFX5100 switches as customer edge (CE) switches.

  • Two host devices to send and receive traffic.

  • Two monitoring stations equipped with an analyzer application. In this example, we use Wireshark.

QFX5120 switches also work well as leaf devices in this topology.

Overview

In this example, we enable remote port mirroring on a spine switch and send the traffic through GRE tunnels to two monitoring stations. We use the port mirroring instance method.

This example uses an EVPN-VXLAN ERB architecture where unicast inter-virtual network identifier (VNI) routing takes place at the leaf devices and multicast inter-VNI routing takes place at the border leaf device. The lean spine devices do not terminate any VXLAN tunnels. They deliver IP forwarding and route reflection capabilities. The tenant’s specific information is not provisioned at the lean spine devices, but the mirroring sessions can be enabled at these nodes as well.

Topology

As shown in Figure 1, Spine 1 mirrors ingress and egress packets between Host A and Host B flowing through interface et-0/0/32 towards two different remote mirroring destinations. Monitoring Station 1 is connected to the border leaf port et-0/0/18. Monitoring Station 2 is connected to leaf port et-0/0/51.

The GRE tunnels between Spine 1 and both monitoring stations are created automatically once IP reachability towards subnets 172.20.1.0/24 and 172.21.1.0/24 is established via the underlay.

Figure 1: Topology of Remote Port Mirroring Through Spine Device

Configuration

Step-by-Step Procedure

  1. Begin by creating two port mirroring instances at Spine 1 to send the mirrored traffic to two different monitoring stations.
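    A minimal sketch of the two instances on Spine 1 follows. The instance names are hypothetical, and the destination addresses are assumed hosts in the 172.20.1.0/24 and 172.21.1.0/24 monitoring station subnets shown in the topology:

    ```
    set forwarding-options port-mirroring instance PM-MS1 output ip-address 172.20.1.2
    set forwarding-options port-mirroring instance PM-MS2 output ip-address 172.21.1.2
    ```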
  2. When traffic from Host A arrives at Leaf 1 or traffic from Host B arrives at Leaf 2, the leaf device encapsulates the traffic in a VXLAN tunnel. Devices in the fabric see this traffic as having the source IP address of the leaf device.

    At Spine 1, create firewall filters that match the source IP address in the outer header for the VXLAN-encapsulated end host traffic. Only the packets appearing to originate at that particular leaf device are mirrored into the GRE tunnel.
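    For example, a filter that mirrors VXLAN packets sourced from Leaf 1's loopback address (192.168.1.7, per the capture discussed later in this procedure) might look like the following. The filter and term names are hypothetical, and a port mirroring instance named PM-MS2 pointing at Monitoring Station 2 is assumed:

    ```
    set firewall family inet filter MIRROR-FROM-LEAF1 term leaf1 from source-address 192.168.1.7/32
    set firewall family inet filter MIRROR-FROM-LEAF1 term leaf1 then port-mirror-instance PM-MS2
    set firewall family inet filter MIRROR-FROM-LEAF1 term leaf1 then accept
    set firewall family inet filter MIRROR-FROM-LEAF1 term pass then accept
    ```

    A second filter with Leaf 2's loopback address as the match condition and the other instance as the action covers the reverse direction.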

  3. Apply the firewall filters to the interface on Spine 1 where Leaf 2 is connected.
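    Assuming et-0/0/32 is the Spine 1 port toward Leaf 2, and that hypothetical filters named MIRROR-FROM-LEAF2 and MIRROR-FROM-LEAF1 select each direction of the flow, the binding might be:

    ```
    set interfaces et-0/0/32 unit 0 family inet filter input MIRROR-FROM-LEAF2
    set interfaces et-0/0/32 unit 0 family inet filter output MIRROR-FROM-LEAF1
    ```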
  4. The leaf devices connected to the monitoring stations do not need to be configured to perform GRE decapsulation since the GRE tunnel terminates at the monitoring stations. However, the leaf devices do need to advertise reachability for the monitoring stations into the fabric’s underlay.

    Configure the following at the Border Leaf, which connects to Monitoring Station 1.

    First, configure the interface where the Border Leaf is connected to Monitoring Station 1.

    Second, add a term with the monitoring station subnet into the existing underlay routing policy.

    Third, verify that the export policy is applied to the underlay BGP group.
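    Taken together, the three sub-steps on the Border Leaf might look like this. Only the access port (et-0/0/18) and the monitoring subnet (172.20.1.0/24) come from the topology; the gateway address, policy name, and BGP group name are assumptions:

    ```
    set interfaces et-0/0/18 unit 0 family inet address 172.20.1.1/24
    set policy-options policy-statement underlay-export term monitoring-station from route-filter 172.20.1.0/24 orlonger
    set policy-options policy-statement underlay-export term monitoring-station then accept
    set protocols bgp group underlay export underlay-export
    ```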

  5. Configure the following at Leaf 3, which connects to Monitoring Station 2.

    First, configure the interface where Leaf 3 is connected to Monitoring Station 2.

    Second, add a term with the monitoring station subnet into the existing underlay routing policy.

    Finally, verify that the export policy is applied to the underlay BGP group.
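    On Leaf 3, the equivalent sketch uses et-0/0/51 and the 172.21.1.0/24 subnet from the topology; the gateway address, policy name, and BGP group name are again assumptions:

    ```
    set interfaces et-0/0/51 unit 0 family inet address 172.21.1.1/24
    set policy-options policy-statement underlay-export term monitoring-station from route-filter 172.21.1.0/24 orlonger
    set policy-options policy-statement underlay-export term monitoring-station then accept
    set protocols bgp group underlay export underlay-export
    ```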

  6. On Spine 1, verify that the remote port mirroring state is up.
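    The state can be checked with the standard operational command; each port mirroring instance should list its GRE destination with a State of up:

    ```
    show forwarding-options port-mirroring
    ```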
  7. At the mirrored session sent to destination host Monitoring Station 1 (connected to et-0/0/18 of the Border Leaf), the following tcpdump was observed using Wireshark. As we can see, the protocol type of the GRE header is “ERSPAN” (comparable to remote port mirroring) and all the other flags are set to zero. In the tcpdump, observe that the Host B to Host A traffic, as sent from Leaf 2 to Leaf 1, is mirrored at Spine 1.
  8. The second mirrored session was sent to the destination host Monitoring Station 2 (connected to et-0/0/51 of Leaf 3). The following tcpdump appears on Wireshark running on Monitoring Station 2. We can see that the original VXLAN encapsulated traffic originated at Leaf 1, which has the loopback address 192.168.1.7.

    You have successfully configured remote port mirroring through a spine device on an EVPN-VXLAN ERB fabric.

Configuration Example: Ingress Solution for an EVPN-VXLAN ERB Fabric Leaf Device

Requirements

This example uses the same EVPN-VXLAN ERB fabric as the previous example, running Junos OS Release 18.4R2, with a QFX5120 switch (Leaf 4) connecting the monitoring station.

Overview

Use the following example to mirror traffic flowing through a leaf device on an EVPN-VXLAN ERB fabric. We send the tenant flow entering an ERB leaf device into a GRE tunnel destined for the monitoring station connected to Leaf 4, the QFX5120 switch.

Topology

Figure 2 shows a topology that enables remote port mirroring with tenant flow filtering capabilities. Traffic from Host A to Host B is filtered and sent into the port mirroring session at Leaf 2. The mirrored traffic is directed to Monitoring Station 3.

Figure 2: Topology of Remote Port Mirroring Through Leaf Device

Configuring Remote Port Mirroring Through a Leaf Device

Step-by-Step Procedure

  1. Implement the following port mirroring configuration at Leaf 2, which is the source leaf switch of the remote port mirroring session. Use the output IP address 172.22.1.2 for Monitoring Station 3.
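    A sketch of the instance, using the output address given above; the instance name is a hypothetical placeholder:

    ```
    set forwarding-options port-mirroring instance PM-MS3 output ip-address 172.22.1.2
    ```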
  2. Next, create a firewall filter that matches on Host A’s source IP address and redirects matching traffic into the port mirroring session to be mirrored into the GRE tunnel.
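    Host A's address is not stated in this NCE, so the source address below is a placeholder, as are the filter and instance names:

    ```
    set firewall family inet filter MIRROR-HOSTA term host-a from source-address 172.16.1.10/32
    set firewall family inet filter MIRROR-HOSTA term host-a then port-mirror-instance PM-MS3
    set firewall family inet filter MIRROR-HOSTA term host-a then accept
    set firewall family inet filter MIRROR-HOSTA term pass then accept
    ```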
  3. Apply the remote port mirroring firewall filter to the IRB interface related to Host B’s subnet.
    Note

    On QFX5110 and QFX5120 switches, the firewall filter cannot be enabled in the outgoing direction or used at the interfaces connected to the spine devices.
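    Assuming Host B's subnet is served by a hypothetical unit irb.200 and the mirroring filter is named MIRROR-HOSTA, the binding is a single statement:

    ```
    set interfaces irb unit 200 family inet filter input MIRROR-HOSTA
    ```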

  4. Leaf 4, which is connected to Monitoring Station 3, does not need to be configured to perform GRE decapsulation since the GRE tunnel terminates at the monitoring station. However, Leaf 4 does need to advertise reachability for the monitoring station into the fabric’s underlay. Configure the following on Leaf 4 to establish reachability to Monitoring Station 3.

    First, configure the interface where Leaf 4 connects to Monitoring Station 3.

  5. Second, add the monitoring station subnet into the existing underlay routing policy.
  6. Finally, verify that the export policy is applied to the underlay BGP group.
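    On Leaf 4, these steps might look like the following. The access port and gateway address are hypothetical; the 172.22.1.0/24 subnet is implied by Monitoring Station 3's address, and the policy and group names are assumptions:

    ```
    set interfaces xe-0/0/10 unit 0 family inet address 172.22.1.1/24
    set policy-options policy-statement underlay-export term monitoring-station from route-filter 172.22.1.0/24 orlonger
    set policy-options policy-statement underlay-export term monitoring-station then accept
    set protocols bgp group underlay export underlay-export
    ```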
  7. Use the following command on the source switch, Leaf 2, to verify that the remote port mirroring state is up:

    You have successfully configured remote port mirroring through a leaf device on an EVPN-VXLAN ERB fabric.

Configuration Example: Remote Port Mirroring at the ESI-LAG Interface

Requirements

This example uses the same devices used in Configuration Example: Ingress Solution for an EVPN-VXLAN ERB Fabric Leaf Device.

This example applies when your EVPN fabric is already configured with ESI-LAG (also known as EVPN LAG) interfaces. See EVPN LAG Configuration Best Practices for best practices.

Overview

An Ethernet segment identifier (ESI) is a unique 10-byte number that identifies an Ethernet segment in an EVPN-VXLAN fabric. The ESI is enabled on the interface connected to the server. When multiple links to the server form a link aggregation group (LAG), the result is an ESI-LAG, also known as an EVPN LAG, interface. In some cases, you will need to enable port mirroring for all or part of the traffic entering or leaving the ESI-LAG interface.

Use this section to enable remote port mirroring at the ESI-LAG level of an EVPN-VXLAN ERB fabric.

Topology

Figure 3 shows the topology for both of the following examples. Host B, the tenant server source, has IP address 172.17.1.10. Interface ae0.0 is the ESI-LAG interface where the tenant hosts are connected. Monitoring Station 1 is the end host. You will advertise the subnet of Monitoring Station 1 into the EBGP underlay.

Figure 3: Topology of Remote Port Mirroring at ESI-LAG Interface

Configuration

The first example shows how to enable remote port mirroring at the ESI-LAG level using a remote port mirroring instance and tenant-specific match criteria. The second example shows how to enable remote port mirroring at the ESI-LAG level using a remote analyzer instance.

Enable Remote Mirroring Instance at the ESI-LAG Interface

Use the following configurations to implement ESI-LAG interface-specific remote port mirroring on Leaf 2. The traffic sent from Host B to Host A is mirrored to Monitoring Station 1.

Step-by-Step Procedure

  1. On Leaf 2, create a port mirroring instance to send the mirrored traffic to Monitoring Station 1.
  2. Create a routing policy prefix list containing the IP addresses in the Host B subnet.
  3. Filter the traffic to ensure only the packets originating from Host B are mirrored into the GRE tunnel.
  4. Apply the firewall filter to the ESI-LAG interface on Leaf 2.
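    The four steps above might be sketched as follows on Leaf 2. Host B's address (172.17.1.10) comes from the topology; the instance name, prefix-list name, filter name, and Monitoring Station 1 address are hypothetical:

    ```
    set forwarding-options port-mirroring instance PM-MS1 output ip-address 172.20.1.2
    set policy-options prefix-list HOSTB-HOSTS 172.17.1.10/32
    set firewall family ethernet-switching filter ESI-MIRROR term host-b from ip-source-address 172.17.1.10/32
    set firewall family ethernet-switching filter ESI-MIRROR term host-b then port-mirror-instance PM-MS1
    set firewall family ethernet-switching filter ESI-MIRROR term host-b then accept
    set firewall family ethernet-switching filter ESI-MIRROR term pass then accept
    set interfaces ae0 unit 0 family ethernet-switching filter input ESI-MIRROR
    ```

    The filter here matches the host address directly; depending on platform support, the prefix list can instead be referenced from the filter to keep the match criteria in one place.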

Enable a Remote Analyzer Instance at the ESI-LAG Interface

Step-by-Step Procedure

An egress and ingress analyzer instance can also be used with ESI-LAG interfaces on leaf devices in an EVPN-VXLAN fabric. This is useful in a data center where the administrator needs to send all the traffic entering or leaving a given ESI-LAG port to a remote host.

  1. The ae0.0 interface is the ESI-LAG interface where the tenant hosts, Host A and Host B, are connected. On Leaf 2, configure the analyzer instance at this interface.
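    A sketch of the analyzer instance on Leaf 2 follows; the instance name and the Monitoring Station 1 address are hypothetical:

    ```
    set forwarding-options analyzer ESI-AN input ingress interface ae0.0
    set forwarding-options analyzer ESI-AN input egress interface ae0.0
    set forwarding-options analyzer ESI-AN output ip-address 172.20.1.2
    ```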
  2. On the Border Leaf, use the following configuration to advertise the subnet of Monitoring Station 1 into the fabric underlay .

    First, configure the interface where the Border Leaf connects to Monitoring Station 1.

    Second, add the monitoring station subnet into the existing underlay routing policy.

    Finally, verify that the export policy is applied to the underlay BGP group.

    All the traffic entering or leaving ae0.0 will be mirrored to Monitoring Station 1.