How to Configure Remote Port Mirroring for EVPN-VXLAN Fabrics

About This Network Configuration Example

This network configuration example (NCE) shows how to configure remote port mirroring for EVPN-VXLAN fabrics. Port mirroring copies a traffic flow and sends it to a remote monitoring (RMON) station using a GRE tunnel. Similar to ERSPAN, remote port mirroring of tenant traffic in a VXLAN-encapsulated fabric is often used in data center environments for troubleshooting or monitoring.

We demonstrate how to add remote port mirroring at lean spine switches and at the edge-routed leaf devices in an EVPN-VXLAN fabric. Our sample fabric is based on an edge-routed bridging (ERB) architecture. VXLAN port mirroring is also supported in centrally-routed bridging (CRB) fabrics.

Use Case Overview

Remote Port Mirroring and EVPN-VXLAN Fabrics

Remote port mirroring copies a traffic flow and delivers the mirrored traffic to one or more mirroring destination hosts. The traffic flow is identified by the ingress or egress interface, in some cases using a firewall filter for added granularity. The mirroring destination hosts are connected to a leaf switch that is part of the same fabric as the source switch.

For EVPN-VXLAN IP fabrics, the mirrored traffic flow is encapsulated in generic routing encapsulation (GRE) and delivered through the underlay IP fabric to the monitoring host's IP address. You use this host as a monitoring station to capture and decode the traffic flow.

Remote port mirroring with a GRE tunnel is a perfect match for IP fabrics like EVPN-VXLAN because you can connect the monitoring station to any leaf switch in the fabric by advertising the host subnet into the EBGP underlay. Additionally, the destination switch connected to the monitoring station doesn’t have to perform GRE de-encapsulation before it delivers the GRE stream to the monitoring station. The GRE tunnel can cross multiple intermediate IP nodes or be sent outside of the fabric.

Port Mirroring Methods: Analyzer Instance and Port Mirroring Instance

Two methods are available for port mirroring: an analyzer instance and a port mirroring instance. Each approach offers different advantages and is compatible with different architectures. In both cases, the GRE tunnel for mirroring purposes is created automatically, so there is no need to configure a GRE tunnel.

Analyzer instance is port mirroring without any filtering criteria. An analyzer is the simplest form of traffic mirroring to implement. You only need to specify the interface that is the source of the mirrored traffic and whether the traffic to be mirrored on that interface is egress, ingress, or both. You also specify the IP address for the destination of the mirrored traffic. There is no need to configure or apply a firewall filter.

Because analyzer instance is not tenant-specific, it is the best approach when you do not have information about the tenant stream.

Port mirroring instance uses tenant traffic-specific criteria to mirror traffic. The administrator of the fabric decides which specific tenant source IP address, protocol, port, and so on, is of interest on the interface being mirrored. Use a port mirroring instance when specific traffic needs to be mirrored.
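
For reference, the two methods reduce to the following skeletal Junos OS statements. This is a minimal sketch; the interface, instance, and filter names are placeholders, and the complete, scenario-specific configurations appear in the examples later in this NCE.

    # Analyzer instance: mirrors everything on an interface, no firewall filter required
    set forwarding-options analyzer <analyzer-name> input ingress interface <interface-name>
    set forwarding-options analyzer <analyzer-name> input egress interface <interface-name>
    set forwarding-options analyzer <analyzer-name> output ip-address <monitoring-station-address>

    # Port mirroring instance: a firewall filter selects which traffic is mirrored
    set forwarding-options port-mirroring instance <instance-name> output ip-address <monitoring-station-address>
    set firewall family inet filter <filter-name> term mirror from source-address <tenant-prefix>
    set firewall family inet filter <filter-name> term mirror then port-mirror-instance <instance-name>
    set firewall family inet filter <filter-name> term mirror then accept
    set firewall family inet filter <filter-name> term allow-all then accept
    set interfaces <interface-name> unit 0 family inet filter input <filter-name>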

Juniper Networks supports three main data center architectures. All support remote port mirroring. For each architecture, we recommend the following method of port mirroring:

  • Bridged Overlay: Analyzer instance

  • Centrally-Routed Bridging (CRB): Port mirroring instance

  • Edge-Routed Bridging (ERB): Port mirroring instance

Technical Overview

Remote Port Mirroring in an EVPN-VXLAN ERB Fabric

This NCE covers remote port mirroring for an ERB architecture with a lean spine.

In an EVPN-VXLAN ERB fabric with a lean spine, tenant-specific inner flows can’t be selectively sent into the tunnel. Only outer header-based filtering (underlay) is supported at a lean spine device. For example, a spine switch can filter the packets coming from a specific source IPv4 loopback address and send the traffic to the monitoring station connected to another leaf device using the GRE tunnel.

Remote port mirroring with GRE for tenant-specific flow is supported on leaf devices in an ERB architecture. In this case, you can implement the remote port mirroring filter at the integrated routing and bridging (IRB) interface to mirror inter-VLAN (routed) traffic. To mirror intra-VLAN (bridged) traffic, apply an interface filter to the ingress interface attached to the server or host device.

In either method, spine or leaf, the mirrored traffic flows to the remote analyzer over a GRE tunnel.

Remote Port Mirroring with QFX Series Switches

In the following examples, we demonstrate different ways to use remote port mirroring for an ERB architecture based on QFX Series switches.

QFX5110 and QFX5120 switches perform well as leaf devices in an ERB reference data center architecture because these models can perform inter-virtual network identifier (VNI) routing.

Table 1 shows port mirroring instance type support for various use cases when using QFX10002 and QFX10008 switches. Table 2 shows the same when using QFX5110 and QFX5120 switches.

Table 1: Remote Port Mirroring on QFX10002 and QFX10008

Use Case | Sub Use Case | Analyzer Instance | Port Mirroring Instance
Spine providing IP transit | Ingress from leaf to spine | Supported | Supported
Spine providing IP transit | Egress from spine to leaf | Supported | Supported
Spine in a CRB scenario | Ingress of IRB that terminates traffic | Only ge, xe, et, and ae interfaces are supported | Not supported
Spine in a CRB scenario | Egress of IRB that terminates traffic | Only ge, xe, et, and ae interfaces are supported | Not supported
Border encapsulation | Access ingress of ESI-LAG | Supported | Supported
Border de-encapsulation | Egress of the fabric | Not supported | Not supported

Table 2: Remote Port Mirroring on QFX5110 and QFX5120

Use Case | Sub Use Case | Analyzer Instance | Port Mirroring Instance
Lean spine providing IP transit | Ingress from leaf to spine | Supported | Supported
Lean spine providing IP transit | Egress from spine to leaf | Supported | Not supported
Leaf in an ERB scenario | Ingress of IRB that terminates traffic | Only ge, xe, et, and ae interfaces are supported | Supported
Leaf in an ERB scenario | Egress of IRB that terminates traffic | Only ge, xe, et, and ae interfaces are supported | Not supported
Border encapsulation | Access ingress of ESI-LAG | Supported | Supported
Border encapsulation | Access egress of ESI-LAG | Supported | Not supported
Border de-encapsulation | Ingress of the fabric | Not supported | Not supported
Border de-encapsulation | Egress of the fabric | Not supported | Not supported

All QFX switches in this example run Junos OS Release 18.4R2 or higher. QFX5110 and QFX5120 switches running Junos OS Release 18.4R2 support these scaling numbers:

  • Default analyzer sessions: 4

  • Port mirroring sessions: 3 port mirroring sessions + 1 global port mirroring session

This NCE covers the following use cases:

  1. Example: Ingress/Egress Solution for an EVPN-VXLAN ERB Fabric Spine Device shows you how to mirror ingress and egress traffic through a lean spine device in an ERB fabric.

  2. Example: Ingress Solution for an EVPN-VXLAN ERB Fabric Leaf Device demonstrates how to mirror ingress traffic through a leaf device in an ERB scenario.

  3. Example: Enable a Remote Mirroring Instance at an ESI-LAG Interface shows you how to use a port mirroring instance in the border encapsulation use case to mirror ingress traffic through an ESI-LAG interface.

  4. Example: Enable a Remote Analyzer Instance at an ESI-LAG Interface shows you how to use an analyzer instance to mirror ingress and egress traffic through an ESI-LAG interface.

Use the procedures described in these examples to enable remote port mirroring for your particular use case.

Topology

Requirements

This configuration example uses the following devices running Junos OS Release 18.4R2 or higher. This example was re-validated on QFX switches running Junos OS Release 22.2R1.

  • Two QFX5110 switches as lean spine devices.

  • One QFX10002 switch as a border leaf device.

  • Two QFX5110 or QFX5120 switches as server Leaf 1 and server Leaf 2 devices.

  • One QFX10002 as server Leaf 4. If desired, you can use a QFX5110 or QFX5120 instead of a QFX10002.

  • Two servers that send and receive traffic over the EVPN-VXLAN fabric.

  • A monitoring station equipped with an analyzer application. In this example, we use Wireshark.

Baseline Topology

This NCE focuses on how to add different types of port mirroring to an existing EVPN-VXLAN fabric. Figure 1 details the baseline topology for all of the following examples.

Figure 1: Port Mirroring Baseline Topology

Spines 1 and 2 are lean spine devices that don't perform any VXLAN encapsulation or de-encapsulation. Servers 1 and 2 share VLAN 101 and the 10.1.101.0/24 subnet. Server 1 begins single-homed to Leaf 1. In later sections, we attach it to Leaf 2 as well to form an ESI-LAG interface. Server 2 attaches to Leaf 4. Leaf 3 functions as a border leaf. Its configuration is the same as that used on the server leaves. Note that in this example, border Leaf 3 is a QFX10000 Series switch attached to the analyzer device on the xe-0/0/33:1 interface.

The topology supports multihoming with ESI-LAG between Server 1 and Leaves 1 and 2. Not all scenarios in this NCE use this multihomed capability. For the full starting device configurations, see Appendix: Full Device Configurations.

Example: Ingress/Egress Solution for an EVPN-VXLAN ERB Fabric Spine Device

Overview

In this example, we enable remote port mirroring on a lean spine switch. The mirrored traffic is sent through a GRE tunnel to the remote monitoring station. This use case is based on the port mirroring instance method.

This example uses an EVPN-VXLAN ERB architecture where unicast inter-virtual network identifier (VNI) routing takes place at the leaf devices. The lean spine devices do not terminate any VXLAN tunnels. They provide IP forwarding and, in the case of an IBGP-based overlay, route reflection capabilities. Our overlay is EBGP-based, so we do not use route reflection.

This configuration mirrors all traffic sent between the leaf devices. The monitoring station captures both intra-VLAN (bridged) and inter-VLAN (routed) traffic for all VLANs on the leaves. Tenant-specific information is not provisioned at the lean spine devices, but you can still enable mirroring sessions at the lean spine.

Be aware of the capacity of the analyzer port. A large amount of traffic passes through the analyzer port because you are mirroring all overlay traffic sent between the leaf pair. The leaf devices are attached to the spine with two 40GE interfaces, so they have the capacity to carry high volumes of traffic. Switches have limited buffering capabilities, so frame drops occur if you mirror more traffic than the monitoring station supports.

To add capacity, use an aggregated Ethernet interface with multiple link members. Reduce the traffic load by performing mirroring at the leaf devices and selectively mirroring traffic for only certain tenants.

Topology

This configuration example uses the hardware and software components shown in Requirements. The devices in the topology begin with their baseline configurations as provided in Appendix: Full Device Configurations.

As shown in Figure 2, Spine 1 mirrors ingress and egress overlay traffic flowing between Leaf 1 and Leaf 4 through a firewall filter applied to its et-0/0/7 interface. This interface attaches to server Leaf 1. Overlay traffic is generated by sending ping traffic between Server 1 and Server 2. The mirrored traffic is sent by Spine 1 to the remote monitoring station connected to the xe-0/0/33:1 interface on border Leaf 3.

Figure 2: Topology of Remote Port Mirroring Through Spine Device

The spine uses a GRE tunnel to send the mirrored traffic to the monitoring station. GRE is necessary because the mirrored traffic may be Layer 2, or overlay Layer 3, and therefore cannot be directly routed over the fabric underlay. The GRE tunnel forms automatically once IP reachability toward the monitoring subnet 172.16.1.0/24 is established via the underlay.

Server 1 is single-homed to Leaf 1 in this scenario.

Configuration

Use this example to mirror traffic passing through Spine 1. To mirror all traffic, repeat the configuration on Spine 2. When port mirroring is configured on Spines 1 and 2, traffic is mirrored regardless of which ECMP fabric next hop is selected by the leaf.

  1. (Optional) Deactivate BGP on Spine 2. For our testing purposes, we deactivate the BGP stanza on Spine 2. This way we are able to focus only on Spine 1 and ensure that all traffic between Leaf 1 and Leaf 4 is mirrored.

    After deactivating BGP on Spine 2, Leaf 1 has a single next hop to reach the loopback of Leaf 4. This is through Spine 1 via its et-0/0/48 interface.
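
    A minimal sketch of this step, entered in configuration mode on Spine 2:

        deactivate protocols bgp
        commit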

  2. Create a mirroring instance at Spine 1 to send the mirrored traffic to the remote monitoring station.
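
    A sketch of the instance definition follows. The instance name mirror-to-monitor is a placeholder; 172.16.1.2 is the monitoring station address used throughout this NCE. The input rate 1 statement mirrors every matched packet.

        set forwarding-options port-mirroring instance mirror-to-monitor input rate 1
        set forwarding-options port-mirroring instance mirror-to-monitor output ip-address 172.16.1.2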

  3. Lean spines are not overlay-aware in an ERB fabric. All VXLAN-encapsulated traffic sent between leaf devices uses the local VTEP address. The VTEP address normally matches the device's loopback address. This means we can use a filter on the spine devices to match on the loopback addresses of the related leaf devices to direct traffic into the port mirroring instance.

    At Spine 1, create firewall filters that match the source and destination IP addresses of Leaf 1 and Leaf 4. By matching on both the source and destination IP, all overlay traffic sent between these leaves is mirrored into the GRE tunnel.

    Note:

    Because the lean spines are not VXLAN-aware, this method mirrors all overlay traffic sent by the leaf device. For example, if the leaf has 100 VLAN or VXLAN VNIs defined, then traffic for all 100 VNIs is mirrored. If you want to mirror at a finer level of granularity, see Example: Ingress Solution for an EVPN-VXLAN ERB Fabric Leaf Device. In that example, the mirror function occurs at the VXLAN-aware leaf device.

    Set a firewall filter for traffic sent from Leaf 1. In our example, all VXLAN traffic sent by Leaf 1 (through the spines) has a source address of 10.1.255.11. All VXLAN traffic sent by Leaf 4 has a source address of 10.1.255.14.

    Set a firewall filter for traffic sent from Leaf 4.
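
    A sketch of the two filters follows, using the leaf loopback (VTEP) addresses given above. The filter name port-mirror-from-leaf4 is referenced later in this example; the name port-mirror-from-leaf1, the term names, and the counters are placeholders. The counters support the verification steps, and the final allow-all term preserves normal forwarding of traffic that is not mirrored.

        set firewall family inet filter port-mirror-from-leaf1 term mirror from source-address 10.1.255.11/32
        set firewall family inet filter port-mirror-from-leaf1 term mirror from destination-address 10.1.255.14/32
        set firewall family inet filter port-mirror-from-leaf1 term mirror then count from-leaf1
        set firewall family inet filter port-mirror-from-leaf1 term mirror then port-mirror-instance mirror-to-monitor
        set firewall family inet filter port-mirror-from-leaf1 term mirror then accept
        set firewall family inet filter port-mirror-from-leaf1 term allow-all then accept

        set firewall family inet filter port-mirror-from-leaf4 term mirror from source-address 10.1.255.14/32
        set firewall family inet filter port-mirror-from-leaf4 term mirror from destination-address 10.1.255.11/32
        set firewall family inet filter port-mirror-from-leaf4 term mirror then count from-leaf4
        set firewall family inet filter port-mirror-from-leaf4 term mirror then port-mirror-instance mirror-to-monitor
        set firewall family inet filter port-mirror-from-leaf4 term mirror then accept
        set firewall family inet filter port-mirror-from-leaf4 term allow-all then accept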

  4. Apply the firewall filters to the Leaf 1 and Leaf 4-facing fabric interfaces at Spine 1.

    Note the directionality of the filter application. Our filters accommodate a QFX5110 platform, which supports only ingress filters for port mirroring, as per Table 2. By using input filters that match on the local leaf's VTEP as a source address and the remote leaf's VTEP as a destination address, we ensure that only overlay traffic between these two leaves is mirrored. If you wish to mirror all VXLAN traffic sent by a leaf, regardless of the destination leaf, simply match on the local leaf's source address in your filter.
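
    For example, assuming the filter names from the sketch above. The et-0/0/7 interface is Spine 1's Leaf 1-facing interface described earlier; et-0/0/10 is a hypothetical name for Spine 1's Leaf 4-facing interface, so substitute the interface name used in your fabric.

        set interfaces et-0/0/7 unit 0 family inet filter input port-mirror-from-leaf1
        set interfaces et-0/0/10 unit 0 family inet filter input port-mirror-from-leaf4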

  5. (Optional) Add any additional filters.

    In this example so far, we have assumed that Server 1 is single-homed only to Leaf 1. If the server is multihomed to Leaf 1 and Leaf 2, you need to adapt the filters shown to also catch traffic sent from Leaf 2 to Leaf 4, and vice versa.

    As an example, these statements create an input filter for Spine 1's Leaf 2-facing interface:
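
    (A sketch, assuming Leaf 2's loopback/VTEP address is 10.1.255.12 and that Spine 1's Leaf 2-facing interface is et-0/0/8. Both values are assumptions; substitute the addresses and interface names used in your fabric.)

        set firewall family inet filter port-mirror-from-leaf2 term mirror from source-address 10.1.255.12/32
        set firewall family inet filter port-mirror-from-leaf2 term mirror from destination-address 10.1.255.14/32
        set firewall family inet filter port-mirror-from-leaf2 term mirror then count from-leaf2
        set firewall family inet filter port-mirror-from-leaf2 term mirror then port-mirror-instance mirror-to-monitor
        set firewall family inet filter port-mirror-from-leaf2 term mirror then accept
        set firewall family inet filter port-mirror-from-leaf2 term allow-all then accept
        set interfaces et-0/0/8 unit 0 family inet filter input port-mirror-from-leaf2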

    A quick modification to the existing port-mirror-from-leaf4 filter is needed to also match on the destination address of Leaf 2's VTEP.
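
    Continuing the sketch, and again assuming 10.1.255.12 as Leaf 2's VTEP address, an additional destination-address match in the existing term accomplishes this:

        set firewall family inet filter port-mirror-from-leaf4 term mirror from destination-address 10.1.255.12/32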

  6. Commit the changes at Spine 1.

    The changes from the starting EVPN-VXLAN baseline are shown:

  7. Modify the configuration at Leaf 3 to configure the xe-0/0/33:1 interface connected to the monitor station.

    The leaf device connected to the monitoring station does not need explicit GRE configuration. This is because the GRE tunnel terminates at the monitoring station. You must ensure that the monitoring station's IP address is reachable in the fabric underlay. Stated differently, the spine cannot send the GRE encapsulated traffic to the monitor station if it does not have underlay reachability to the monitor station.
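
    A sketch of the interface configuration follows, assuming Leaf 3 uses 172.16.1.1/24 on the monitoring segment (the monitoring station itself is 172.16.1.2):

        set interfaces xe-0/0/33:1 unit 0 family inet address 172.16.1.1/24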

  8. Modify the underlay export policy at border Leaf 3 to advertise the monitoring station's prefix. Our topology uses an EBGP underlay. We simply add a new term to the existing underlay export policy.
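
    A sketch of the added term follows, assuming the existing underlay export policy is named underlay-export; substitute the policy name used in your baseline. Depending on that policy, you may also need to insert the new term before a final reject term.

        set policy-options policy-statement underlay-export term monitoring-station from route-filter 172.16.1.0/24 exact
        set policy-options policy-statement underlay-export term monitoring-station then accept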

  9. Commit the changes on border Leaf 3.

    The changes to the starting baseline are shown:

Verification

  1. Verify the underlay connectivity from Spine 1 to the monitoring station.

    On Spine 1, display the route to 172.16.1.0/24.

    Note:

    Strictly speaking, only simple connectivity between the spine and monitoring station is required. We have configured a static route on the monitoring station to allow it to reach the fabric underlay. This permits ping testing for fault isolation.

    Ping the monitoring station to confirm connectivity.
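
    For example, from operational mode on Spine 1:

        show route 172.16.1.0/24
        ping 172.16.1.2 count 5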

    The output confirms expected underlay reachability from the spine to the monitoring station.

  2. Confirm the port mirroring instances on Spine 1. On Spine 1, verify that the remote port mirroring state is up.
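
    For example, from operational mode on Spine 1:

        show forwarding-options port-mirroring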

    The output confirms correct definition of the port mirroring instance. The state of up indicates that the spine has an underlay route to the monitoring station. Recall that GRE is a stateless protocol. This is why it's a good idea to verify the connectivity between the spine and monitoring station, as we did in the previous step.

  3. Verify filters applied to Spine 1 correctly reflect the test traffic.

    Clear the counters just before generating any pings.

    Start pings between Server 1 and Server 2 over the fabric.

    Now display the firewall counters on Spine 1. Verify the filters applied to Spine 1 correctly reflect the test traffic. Our example topology has only one VLAN and one pair of servers. This makes it easy to correlate test traffic to filter matches. Any non-zero count is a good sign the filter is working.
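
    For example, assuming the filter and counter names from the earlier sketch (run the clear command before generating the pings, then display the counters afterward):

        clear firewall all
        show firewall filter port-mirror-from-leaf1
        show firewall filter port-mirror-from-leaf4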

    The firewall counters correctly reflect the test traffic between the two servers.

  4. Confirm that the traffic sent between Server 1 and Server 2 is mirrored at Spine 1 and sent to the monitoring station.

    Once again generate traffic between Server 1 and 2 (not shown for brevity). Use the rapid option in your ping command to generate a larger volume of test traffic that makes correlation easier.

    Monitor interface traffic counts on the interface at border Leaf 3 that connects to the monitoring station. Confirm the egress packet counts reflect the generated test traffic.
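
    For example, on border Leaf 3:

        monitor interface xe-0/0/33:1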

    The output confirms traffic is mirrored to the monitor station. The lack of input packets is expected as the monitor station does not generate any responses to the received GRE traffic.

  5. Use Wireshark or an equivalent analyzer application to capture and decode the mirrored traffic.

    Figure 3: Mirrored Traffic Analysis

    The capture shows that Server 1 sends ping traffic to Server 2, and that Server 2 replies. This confirms that overlay traffic sent between the Leaf 1 and Leaf 4 pairing is correctly mirrored at Spine 1. Note that the VXLAN encapsulated traffic (UDP port 4789) is in turn encapsulated into GRE (IP protocol 47). The protocol type of the GRE header is “ERSPAN” (comparable to remote port mirroring) and all the other flags are set to zero.

    The decode allows you to see the leaf VTEP or loopback addresses used in the underlay (10.1.255.x). Also present is the original Layer 2 frame sent by the servers, now shown as encapsulated in VXLAN using VNI 101. The IP packet as sent by the servers is also decoded. The IP addresses used by the servers in the overlay are visible (10.1.101.x/24), as are the details of the IP/ICMP packets they generated.

    You have successfully configured port mirroring on your topology.

Example: Ingress Solution for an EVPN-VXLAN ERB Fabric Leaf Device

Overview

Use the following example to mirror traffic flowing through a leaf device on an EVPN-VXLAN ERB fabric. Unlike the lean spine-based port mirroring scenario covered in the first example, the leaf is tenant- and VLAN-aware in an ERB design. This means we can mirror specific traffic sent between our leaf devices rather than all overlay traffic. The tenant flows that match the filtering criteria on the ERB leaf device are mirrored into a GRE tunnel that terminates at the monitoring station attached to border Leaf 3.

Topology

This configuration example uses the hardware and software components shown in Requirements. The devices in the topology begin with their baseline configurations as provided in Appendix: Full Device Configurations.

Figure 4 shows the topology for remote port mirroring with tenant flow filtering capabilities.

Figure 4: Topology of Remote Port Mirroring Through Leaf Device

Note that Server 1 is still single-homed only to Leaf 1. The difference is that the filters and port mirroring instance are now defined on the leaf device. The goal is to match all traffic sent by Server 1 to the 10.1.101.0/24 subnet associated with VLAN 101. Traffic sent from Server 1 to VLAN 101 destinations is matched and mirrored to the monitoring station attached to border Leaf 3.

Configuration

Note:

In the first scenario, we deactivated BGP on Spine 2 to focus on filter application at Spine 1. In the remaining examples, both Spine 1 and Spine 2 are fully operational.

  1. Configure a port mirroring instance at Leaf 1. Use IP address 172.16.1.2 for the monitoring station.
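
    A sketch of the instance definition follows, reusing the placeholder instance name mirror-to-monitor from the previous example:

        set forwarding-options port-mirroring instance mirror-to-monitor input rate 1
        set forwarding-options port-mirroring instance mirror-to-monitor output ip-address 172.16.1.2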

  2. Next, create a firewall filter that matches on the subnet assigned to VLAN 101. The goal is to mirror all intra-VLAN 101 traffic sent by Server 1 through Leaf 1. To catch inter-VLAN traffic, specify the target VLAN's subnet for the destination address.
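
    A sketch of such a filter follows. Because the filter is applied to a Layer 2 access port, it is written under family ethernet-switching and uses the IP match conditions available there. The filter, term, and counter names are placeholders, 10.1.101.10/32 is Server 1's address from the example topology, and the final allow-all term preserves normal forwarding.

        set firewall family ethernet-switching filter mirror-vlan101 term server1 from ip-source-address 10.1.101.10/32
        set firewall family ethernet-switching filter mirror-vlan101 term server1 from ip-destination-address 10.1.101.0/24
        set firewall family ethernet-switching filter mirror-vlan101 term server1 then count mirrored-vlan101
        set firewall family ethernet-switching filter mirror-vlan101 term server1 then port-mirror-instance mirror-to-monitor
        set firewall family ethernet-switching filter mirror-vlan101 term server1 then accept
        set firewall family ethernet-switching filter mirror-vlan101 term allow-all then accept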

    Note:

    If you wish to mirror only routed traffic (inter-VLAN), use a family inet filter applied to the VLAN's IRB interface. Only one IRB interface is defined for each VLAN. A filter applied to the VLAN's IRB interface catches all inter-VLAN traffic for the associated VLAN, regardless of which front panel port the traffic arrives on.

    The method shown here works for inter-VLAN and intra-VLAN traffic, but requires that filters be applied to server-facing interfaces based on their VLAN membership. An inet filter applied to an IRB interface does not see bridged (intra-VLAN) traffic.

  3. Apply the remote port mirroring firewall filter to the access port attached to Server 1 in the input direction.
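
    For example, assuming Server 1 attaches to Leaf 1 on the xe-0/0/0 access port used elsewhere in this NCE and the filter name from the sketch above:

        set interfaces xe-0/0/0 unit 0 family ethernet-switching filter input mirror-vlan101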

    Note:

    On QFX5110 and QFX5120 switches, the firewall filter cannot be applied in the output direction or used on the fabric interfaces connected to the spine devices.

  4. Border Leaf 3 does not need to be configured to perform GRE de-encapsulation since the GRE tunnel terminates at the monitoring station. However, Leaf 3 does need to advertise reachability for the monitoring station's subnet into the fabric’s underlay. Configure the following on Leaf 3 to establish reachability to the monitoring station.

    Begin by configuring the xe-0/0/33:1 interface attached to the monitoring station.

  5. Modify the underlay export policy at border Leaf 3 to advertise the monitoring station's prefix. Our topology uses an EBGP underlay. We simply add a new term to the existing underlay export policy.

  6. Commit the configuration.

    The changes from the starting baseline are shown at Leaf 1.

    You have successfully configured remote port mirroring through a leaf device on an EVPN-VXLAN ERB fabric.

Verification

  1. Verify underlay connectivity from Leaf 1 to the monitor station.

    On Leaf 1, display the route to 172.16.1.0/24.

    Note:

    Strictly speaking, only simple connectivity between the leaf and monitoring station is required. We have configured a static route on the monitoring station to allow it to reach the fabric underlay. This permits ping testing to aid in fault isolation.

    Ping the monitoring station to confirm connectivity.
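
    For example, on Leaf 1 (10.1.255.11 is Leaf 1's loopback address):

        ping 172.16.1.2 source 10.1.255.11 count 5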

    The output confirms expected underlay reachability from Leaf 1 to the monitoring station. We must source the ping from the leaf's loopback address because our underlay export policy only advertises loopback address reachability. Sourcing from the loopback address was not needed in the lean spine example. This is because the spine and border leaf are directly attached over a fabric link. Thus, the border leaf is able to route the ping reply back to the spine when it is sourced from the spine's leaf-facing fabric interface.

  2. Confirm the port mirroring instance on Leaf 1.

    On Leaf 1, verify that the remote port mirroring state is up.

    The output confirms correct definition of the port mirroring instance. The state of up indicates that the leaf has an underlay route to the monitoring station. Recall that GRE is a stateless protocol. This is why it's a good idea to verify the connectivity between the leaf and monitoring station, as we did in the previous step.

  3. Verify the filter applied to Leaf 1 correctly reflects test traffic.

    Start pings between Server 1 and Server 2 over the fabric. Verify the filter applied to Leaf 1 correctly reflects the traffic. Our example topology has only one VLAN and one pair of servers. This makes it easy to correlate test traffic to filter matches. Any non-zero count is a good sign the filter is working.

    Clear the counters just before generating any pings.

    Now display the firewall counters on Leaf 1.

    The firewall counters correctly reflect the test traffic between the two servers.

  4. Confirm that the traffic sent between Server 1 and Server 2 is mirrored at Leaf 1 to the monitoring station.

    Once again generate traffic between Server 1 and 2 (not shown for brevity). We used the rapid option for the ping command to generate a larger volume of packets to make detection easier.

    Monitor interface traffic counters at border Leaf 3 to verify the test traffic is being sent to the monitoring station.

    The output confirms traffic is mirrored to the monitor station. The lack of input packets is expected because the monitor station does not generate any responses to the GRE traffic it receives.

  5. Use Wireshark or an equivalent analyzer application to capture and decode the mirrored traffic.

    Figure 5: Mirrored Traffic Analysis

    The capture shows that Server 1 sends ping traffic to Server 2. The reply traffic is not mirrored because we have applied our filter in the input direction at Leaf 1 only. Apply a similar port mirror instance and filter configuration at Leaf 4 to also mirror return traffic. Recall that in the lean spine case the mirrored traffic showed VXLAN encapsulation. In our ERB leaf case we mirror Layer 2 traffic before it has been encapsulated into VXLAN. The filter used to mirror traffic is applied as an input filter on the server-facing interface. At ingress there is no VXLAN encapsulation.

    This is similar to the case of applying the filter to a VLAN's IRB interface. In the IRB case, the mirrored traffic does not contain VXLAN encapsulation. IRB-based filters process inter-VLAN traffic at the IP layer. Thus, for the IRB filter case, only Layer 3 IP traffic is mirrored.

    The decode shows the GRE tunnel is between Leaf 1 and the monitoring station through the underlay. The Ethernet frame and related IP packet, as sent by Server 1, are decoded. The IP addresses assigned to the servers are visible (10.1.101.x/24), as are the details of the IP/ICMP packet being generated by Server 1.

    You have successfully configured port mirroring on your topology.

Example: Enable a Remote Mirroring Instance at an ESI-LAG Interface

Overview

An Ethernet segment identifier (ESI) is a unique 10-byte number that identifies an Ethernet segment in an EVPN-VXLAN fabric. The ESI is configured on the interface that connects to the server. When multiple links connected to the server form a link aggregation group (LAG), the result is an ESI-LAG, also known as an EVPN LAG, interface. In some cases, you will need to enable port mirroring for all or part of the traffic entering or leaving the ESI-LAG interface.

Use this section to enable remote port mirroring at the ESI-LAG level of an EVPN-VXLAN ERB fabric.

Topology

This configuration example uses the hardware and software components shown in Requirements. The devices in the topology begin with their baseline configurations as provided in Appendix: Full Device Configurations.

In this case we modify the EVPN fabric to support multihoming of Server 1 to leaves 1 and 2. We do this using ESI-LAG, also known as EVPN LAG, interfaces. See EVPN LAG Configuration Best Practices for best practices.

Figure 6 shows the topology for this example. Both spines are operational in this topology. ECMP means that Leaf 1 and Leaf 2 can send traffic through either spine device. To simplify the illustration, we show the traffic and mirror paths with Spine 1 as the focus.

Figure 6: Topology of Remote Port Mirroring at ESI-LAG Interface

Interface ae0.0 is the ESI-LAG interface attached to Server 1. Server 1 is assigned IP address 10.1.101.10/24, and is associated with VLAN 101. The goal is to filter on, then mirror, traffic sent from Server 1 as it passes through ae0.0 on either leaf. The mirrored traffic is placed into a GRE tunnel and sent to the monitoring station. As before, you must ensure the monitoring station subnet is reachable in the EBGP underlay.

Before You Begin

If you are starting from the baseline configuration shown in Appendix: Full Device Configurations, follow these steps to modify your topology before you begin.

  1. Modify the configuration of Leaf 1 and Leaf 2 to support multihomed attachment of Server 1. Begin by renaming the existing xe-0/0/0 interface to become the new ae0.0 interface. This carries over any configuration from the xe-0/0/0 interface to save some time.
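
    A sketch of the changes on Leaf 1 and Leaf 2 follows. The ESI value and LACP system-id shown are placeholders; they must be identical on both leaves, and the ESI must be unique within the fabric.

        rename interfaces xe-0/0/0 to ae0
        set interfaces xe-0/0/0 ether-options 802.3ad ae0
        set interfaces ae0 esi 00:01:01:01:01:01:01:01:01:01
        set interfaces ae0 esi all-active
        set interfaces ae0 aggregated-ether-options lacp active
        set interfaces ae0 aggregated-ether-options lacp system-id 00:00:00:00:00:01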

  2. The configuration specifics for the server's aggregated Ethernet interface, often called a bonded interface in Linux, are beyond the scope of this NCE. Note that for the server, there is only a standard aggregated Ethernet configuration. The statements related to ESI-LAG are applied only to the leaf devices.

    Note:

    The starting baseline configuration enables an aggregated Ethernet interface through the set chassis aggregated-devices ethernet device-count 1 configuration statement. You must enable chassis support for aggregated devices or the aggregated interface is not created.

  3. After applying and committing these changes to both leaf devices and the server, confirm the ae0 interface is up on both leaves. The example output below has been shortened for clarity.

    Also verify that Server 1 is able to ping Server 2 (not shown for brevity).

Configuration

This example shows how to enable remote port mirroring at the ESI-LAG level using a remote port mirroring instance. This method supports tenant-specific (within an overlay VLAN) match criteria to selectively mirror traffic.

  1. Configure a port mirroring instance at both Leaf 1 and Leaf 2. Server 1 can send traffic for VLAN 101 over either of its LAG member interfaces, which means that traffic can arrive at either leaf. IP address 172.16.1.2 is the monitoring station.

  2. Next, create a firewall filter that matches on the subnet assigned to VLAN 101 on both leaves. The goal is to mirror all intra-VLAN 101 traffic sent by Server 1 through Leaf 1 or Leaf 2. To catch inter-VLAN traffic, specify the target VLAN's subnet for the destination address.

    If you need to filter on multiple source or destination IP addresses, consider using a source-prefix-list and/or a destination-prefix-list in your filter. This way you can make changes to the prefix list without having to modify the filter definition.
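
    As a sketch of the prefix-list approach, the following defines the match criteria in prefix lists and references them from a family inet filter (for example, one applied to the VLAN's IRB interface as described in the note that follows). The prefix-list, filter, and term names are placeholders; if you apply the filter under family ethernet-switching on the access port instead, adapt the match conditions to that family.

        set policy-options prefix-list mirror-sources 10.1.101.10/32
        set policy-options prefix-list mirror-destinations 10.1.101.0/24
        set firewall family inet filter mirror-vlan101-routed term server1 from source-prefix-list mirror-sources
        set firewall family inet filter mirror-vlan101-routed term server1 from destination-prefix-list mirror-destinations
        set firewall family inet filter mirror-vlan101-routed term server1 then port-mirror-instance mirror-to-monitor
        set firewall family inet filter mirror-vlan101-routed term server1 then accept
        set firewall family inet filter mirror-vlan101-routed term allow-all then accept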

    Note:

    If you want to mirror only routed traffic (inter-VLAN), use a family inet filter applied to the VLAN's IRB interface. Only one IRB interface is defined for each VLAN. A filter applied to the VLAN's IRB interface therefore catches all inter-VLAN traffic for the associated VLAN, regardless of which front panel port the traffic ingresses on.

    The method shown in this example works for inter-VLAN and intra-VLAN traffic, but requires filters be applied to server-facing interfaces based on their VLAN membership. An inet filter applied to an IRB interface does not see bridged (intra-VLAN) traffic.

  3. Apply the remote port mirroring firewall filter to the ae0.0 interface as an input filter on both leaves.

    Note:

    On QFX5110 and QFX5120 switches, the firewall filter cannot be enabled in the outgoing direction or used at the interfaces connected to the spine devices.

  4. Border Leaf 3 does not need to be configured to perform GRE de-encapsulation since the GRE tunnel terminates at the monitoring station. However, Leaf 3 does need to advertise reachability for the monitoring station's subnet into the fabric’s underlay. Configure the xe-0/0/33:1 interface that attaches to the monitoring station.

    Modify the underlay export policy at border Leaf 3 to advertise the monitoring station's prefix. Our topology uses an EBGP underlay. We simply add a new term to the existing underlay export policy.

  5. The changes to the starting baseline are shown at Leaf 1.

    You have successfully configured remote port mirroring through a leaf device on an EVPN-VXLAN ERB fabric.

Verification

  1. Verify underlay connectivity from Leaf 1 to the monitoring station.

    On Leaf 1, display the route to 172.16.1.0/24.

    Note:

    Strictly speaking, only simple connectivity is required between the leaf and monitoring station. We have configured a static route on the monitoring station to allow it to reach the fabric underlay. This permits ping testing as a fault isolation tool.

    Ping the monitoring station to confirm connectivity.

    The output confirms expected underlay reachability from Leaf 1 to the monitoring station. We must source the ping from the leaf's loopback address because our underlay export policy only advertises loopback address reachability. Sourcing from the loopback address is not needed in the lean spine case. This is because the spine and border leaf share a directly connected fabric link.

  2. Confirm the port mirroring instance on Leaf 1.

    On Leaf 1, verify that the remote port mirroring state is up.

    The output confirms correct definition of the port mirroring instance. The state of up indicates that the leaf has an underlay route to the monitoring station. Recall that GRE is a stateless protocol. This is why it's a good idea to verify the connectivity between the leaf and monitoring station, as we did in the previous step.

  3. Verify the filter applied to Leaf 1 and Leaf 2 correctly reflects test traffic. Start pings between Server 1 and Server 2 over the fabric.

    Verify the filter applied to Leaf 1 and Leaf 2 correctly reflects the traffic. Our example topology has only one VLAN and one pair of servers. This makes it easy to correlate test traffic to filter matches. Any non-zero count is a good sign the filter is working.

    Because of the LAG hashing behavior on the server, the single ping flow hashes to only one of the aggregated Ethernet interface's member links. This means the test traffic is expected to be sent to one leaf, or the other, but not both. If Server 1 generates numerous flows, expect to see packets forwarded to both leaves using both member interfaces.

    Clear the counters just before generating any pings.

    Now display the firewall counters on Leaf 1 and Leaf 2.

    The output shows no packets are seen at Leaf 1. This indicates that for this flow, the server is hashing to Leaf 2.

    Check the firewall counters at Leaf 2:

    The firewall counter at Leaf 2 correctly reflects the generated test traffic.

  4. Confirm that the traffic sent between Server 1 and Server 2 is mirrored at Leaf 1 and/or Leaf 2 for transmission to the monitoring station.

    Once again generate traffic between Server 1 and 2 (not shown for brevity). Use the rapid option for the ping command to generate a larger volume of packets to make detection easier.

    Monitor interface traffic counters at border Leaf 3 to verify test traffic arrives at the monitoring station.

    The output confirms traffic is sent to the monitor station. The lack of input packets is expected because the monitor station does not generate any responses to the GRE traffic it receives.

  5. Use Wireshark or an equivalent analyzer application to capture and decode the mirrored traffic.

    Figure 7: Mirrored Traffic Analysis

    The capture shows that Server 1 sends ping traffic to Server 2. The reply traffic is not mirrored because we have applied our filter at Leaf 1 and Leaf 2 in the input direction only. Apply a similar port mirror instance and filter configuration at Leaf 4 to mirror return traffic. Recall that in the lean spine case, the mirrored traffic showed VXLAN encapsulation. In this ERB leaf case, we mirror Layer 2 traffic before it has been encapsulated into VXLAN. The filter used to mirror traffic is applied as an input filter on the server-facing ae0.0 interface. There is no VXLAN encapsulation at ingress.

    This is similar to the case of applying the filter to the VLAN's IRB interface. In the IRB filtering case, the mirrored traffic also does not contain VXLAN encapsulation. IRB filters only process inter-VLAN traffic at the IP layer. Thus, for the IRB filter case, only Layer 3 IP traffic is mirrored.

    The decode shows the GRE tunnel is between Leaf 2 and the monitoring station through the underlay. The Ethernet frame and IP packet sent by Server 1 are also decoded. The IP addresses assigned to the servers are visible (10.1.101.x/24), as are the details of the IP/ICMP packet generated by Server 1.

    You have successfully configured port mirroring on your topology.

Example: Enable a Remote Analyzer Instance at an ESI-LAG Interface

Overview

An Ethernet segment identifier (ESI) is a unique 10-byte number that identifies an Ethernet segment. The ESI is enabled at the interface connected to the server. When multiple links form a link aggregation group (LAG), the result is an ESI-LAG, also known as an EVPN LAG, interface. In some cases, you will need to enable port mirroring for all or part of the traffic entering or leaving the ESI-LAG interface.

Both egress and ingress analyzer instances are supported for ESI-LAG interfaces on leaf devices in an EVPN-VXLAN fabric. This is useful in a data center where the administrator needs to send all the traffic entering or leaving a given ESI-LAG port to, or from, a remote host. Use this section to enable an analyzer instance at the ESI-LAG level of an EVPN-VXLAN ERB fabric.

An analyzer instance differs from a port mirroring instance in that the analyzer works in the input direction, the output direction, or both. Also, it mirrors all traffic on the interface. For example, if Server 1 has 10 VLANs using a trunk interface, traffic from all 10 VLANs is sent to the monitoring station. In the previous port mirroring example, a filter is used to allow selective mirroring of traffic from one VLAN or from particular IP addresses in a VLAN.

Analyzer instances don't use firewall filters for granular matching. All traffic in the specified direction on the specified interface is mirrored.

Topology

This configuration example uses the hardware and software components shown in Requirements. The devices in the topology begin with their baseline configurations as provided in Appendix: Full Device Configurations.

In this example an ESI-LAG, also known as an EVPN LAG, interface is used between Server 1 and Leaf 1 and Leaf 2. See EVPN LAG Configuration Best Practices for best practices.

Note:

You can configure the analyzer instance on an ACX7100 router running Junos OS Evolved 22.3R1 or later.

Figure 8 shows the topology for this example.

Figure 8: Topology of Remote Analyzer on an ESI-LAG Interface

Interface ae0.0 is the ESI-LAG interface attached to Server 1. Server 1 is assigned IP address 10.1.101.10/24, and is associated with VLAN 101. The goal is to configure an analyzer instance to forward all traffic sent and received on the ae0.0 interface at both Leaf 1 and Leaf 2. The mirrored traffic is placed into a GRE tunnel and sent to the monitoring station. As before, you must ensure the monitoring station subnet is reachable in the EBGP underlay.

Note:

Both spines are operational in this topology. ECMP means that Leaf 1 and Leaf 2 can send traffic through either spine device. To reduce clutter in the drawing, we show the traffic and mirror paths with the Spine 1 device as the focus.

Before You Begin

If you are starting from the baseline configuration shown in Appendix: Full Device Configurations, follow these steps to modify your topology before you begin.

  1. Modify the configuration of Leaf 1 and Leaf 2 to support multihomed attachment of Server 1. We begin by renaming the existing xe-0/0/0 interface to become the new ae0.0 interface. This carries over any configuration from the xe-0/0/0 interface to save some time.

    The configuration specifics for the server's aggregated Ethernet interface, often called a bonded interface in Linux, are beyond the scope of this NCE. Note that for the server, there is only a standard aggregated Ethernet configuration. The statements related to ESI-LAG are applied only to the leaf devices.

    Note:

    The starting baseline configuration enables an aggregated Ethernet device using the set chassis aggregated-devices ethernet device-count 1 configuration statement. You must enable chassis support for aggregated devices or the aggregated interface is not created.

  2. After applying and committing these changes to both leaf devices and the server, confirm the ae0 interface is up on both leaves. The example output below has been shortened for clarity.

    Verify that Server 1 is able to ping Server 2 (not shown for brevity).

Configuration

This example shows how to enable remote port mirroring at the ESI-LAG level using a remote analyzer instance.

  1. The ae0.0 interface is the ESI-LAG interface where the tenant Server 1 connects. This ESI-LAG terminates at both Leaf 1 and Leaf 2. Configure the analyzer instance on both leaves for the ae0.0 interface. Note that you configure a remote analyzer for both the input and output directions.
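
    A sketch of the analyzer definition follows. The analyzer name ae0-monitor is a placeholder; 172.16.1.2 is the monitoring station address used throughout this NCE.

        set forwarding-options analyzer ae0-monitor input ingress interface ae0.0
        set forwarding-options analyzer ae0-monitor input egress interface ae0.0
        set forwarding-options analyzer ae0-monitor output ip-address 172.16.1.2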

  2. Border Leaf 3 does not need to be configured to perform GRE decapsulation since the GRE tunnel terminates at the monitoring station. However, Leaf 3 does need to advertise reachability for the monitoring station's subnet into the fabric’s underlay.

    Configure the xe-0/0/33:1 interface that attaches to the monitoring station.

    Modify the underlay export policy at border Leaf 3 to advertise the monitoring station's prefix. Our topology uses an EBGP underlay. We simply add a new term to the existing underlay export policy.

  3. The changes from the starting baseline are shown at Leaf 1.

    You have successfully configured remote port mirroring through a leaf device on an EVPN-VXLAN ERB fabric.

Verification

  1. Verify underlay connectivity from Leaf 1 and Leaf 2 to the monitor station.

    On Leaf 1, display the route to 172.16.1.0/24.

    Note:

    Strictly speaking, only simple connectivity is required between the leaf and monitoring station. We have configured a static route on the monitoring station to allow it to reach the fabric underlay. This permits ping testing as a fault isolation tool.

    Ping the monitoring station to confirm connectivity.

    The output confirms expected underlay reachability from Leaf 1 to the monitoring station. We must source the ping from the leaf's loopback address because our underlay export policy only advertises loopback address reachability. Sourcing from the loopback address was not needed in the lean spine example because the spine and border leaf are directly attached through a shared fabric link subnet.

  2. Confirm the analyzer instance on Leaf 1 and Leaf 2.

    On Leaf 1, verify that the remote analyzer state is up.
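
    For example, from operational mode on Leaf 1:

        show forwarding-options analyzer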

    The output confirms correct definition of the analyzer instance. The state of up indicates that the leaf has an underlay route to the monitoring station. Recall that GRE is a stateless protocol. This is why it's a good idea to verify the connectivity between the leaf and monitoring station, as we did in the previous step.

  3. Confirm the traffic sent between Server 1 and Server 2 is mirrored by Leaf 1 and/or Leaf 2 to the monitoring station.

    Once again generate traffic between Server 1 and 2 (not shown for brevity). We used the rapid option for the ping command to generate a larger volume of packets to make detection easier.

    Monitor interface traffic counters at border Leaf 3 to verify test traffic arrives at the monitoring station.

    The output confirms traffic arrives at the monitor station. The lack of input packets is expected as the monitor station does not generate any responses to the GRE traffic it receives.

  4. Use Wireshark or an equivalent analyzer application to capture and decode the mirrored traffic.

    Figure 9: Mirrored Traffic Analysis

    The capture shows that Server 1 sends ping traffic to Server 2. Because the analyzer instance is applied to both directions of the ae0.0 interface, the reply traffic is also mirrored. Because the analyzer works at the interface level, LACP, which is a link-level protocol used on the aggregated Ethernet interface, is also mirrored. Recall that in the lean spine case the mirrored traffic showed VXLAN encapsulation. In this ERB leaf case, we capture Layer 2 traffic before it has been encapsulated into VXLAN. At ingress there is no VXLAN encapsulation.

    This is similar to the case of applying the filter to a VLAN's IRB interface. In that case, the mirrored traffic also does not contain VXLAN encapsulation. IRB-based filters only catch inter-VLAN traffic at the IP layer. Thus, for the IRB filter case, only non-VXLAN encapsulated Layer 3 IP traffic is mirrored.

    The decode shows the GRE tunnel is between Leaf 2 (10.1.12.2) and the monitoring station through the underlay. The Ethernet frame and related IP packet, as sent by Server 1, are also decoded. The IP addresses assigned to the servers are visible (10.1.101.x/24), as are the details of the IP/ICMP packet generated by Server 1. The reply from Server 2 is also mirrored because the analyzer instance is applied bidirectionally in our example.

    You have successfully configured port mirroring on your topology.