
Inline Monitoring Services Configuration

Understanding Inline Monitoring Services

Benefits of Inline Monitoring Services

Flexible—Inline monitoring services allow different inline-monitoring instances to be mapped to different firewall filter terms, unlike traditional sampling technologies, where all instances are mapped to the Flexible PIC Concentrator (FPC). This gives you the flexibility to sample different streams of traffic at different rates on a single interface.

Packet format agnostic—Traditional flow collection technologies rely on packet parsing and aggregation by the network element. With inline monitoring services, the packet header is exported to the collector for further processing, without aggregation. As a result, you can use arbitrary packet fields to process the monitored packets at the collector.

Inline Monitoring Services Feature Overview

Service providers and content providers typically require visibility into traffic flows to evaluate peering agreements, detect traffic anomalies and policy violations, and monitor network performance. To meet these requirements, you would traditionally export aggregate flow statistics using NetFlow, JFlow, or IPFIX variants.

As an alternative approach, you can sample the packet content, add metadata, and export the monitored packets to a collector. Inline monitoring services enable you to do this on MX Series routers with MPCs, excluding MPC10E and MPC11E line cards.

With inline monitoring services, you can monitor every IPv4 and IPv6 packet in both the ingress and egress directions of an interface. Junos OS encapsulates the monitored traffic in an IPFIX format and exports the actual packet, up to the configured clip length, to a collector for further processing. By default, Junos OS supports a maximum clip length of 126 bytes starting from the Ethernet header.

Figure 1 illustrates the IPFIX format specification.

Figure 1: Inline Monitoring IPFIX Specification

The IPFIX header and IPFIX payload are encapsulated in an IP/UDP transport layer. The exported IPFIX format includes a data record, an option data record, a data template, and an option data template, all of which are exported to every collector:

  • Data record—Includes incoming and outgoing interface, flow direction, data link frame section, and data link frame size. This information is sent to the collector only when sampled packets are being exported.

    Figure 2 is a sample illustration of IPFIX data record packet.

  • Option data record—Includes system-level information, such as the exporting process ID and the sampling interval. This information is sent to the collector periodically, irrespective of whether sampled packets are being exported or not.

    Figure 3 is a sample illustration of IPFIX option data record packet.

    Table 1: Information Element Fields in IPFIX Option Data Packet

    Number  Information Element ID  Length   Details
    1       144                     4 bytes  Observation domain ID: a unique identifier of the exporting process per IPFIX device. This field limits the scope of the other information element fields.
    2       34                      4 bytes  Sampling interval at which packets are sampled. A value of 1000 indicates that one of every 1000 packets is sampled.

  • Data template—Includes five information elements:

    • Ingress interface

    • Egress interface

    • Flow direction

    • Data link frame size

    • Variable data link frame selection

    Figure 4 is a sample illustration of IPFIX data template packet.

  • Option data template—Includes flow exporter and sampling interval information.

    Figure 5 is a sample illustration of IPFIX option data template packet.

When there is a new or changed inline monitoring services configuration, periodic export of data template and option data template is immediately sent to the respective collectors.

Figure 2: IPFIX Data Record
Figure 3: IPFIX Option Data Record
Figure 4: IPFIX Data Template
Figure 5: IPFIX Option Data Template

Inline Monitoring Services Configuration Overview

You can configure a maximum of 16 inline-monitoring instances that support template-specific and collector-specific configuration parameters. Each inline-monitoring instance supports up to four collectors (a maximum of 64 collectors in total), and you can specify different sampling rates under each collector configuration. Because of this flexibility, inline monitoring services overcome the limitations of traditional sampling technologies, such as JFlow, sFlow, and port mirroring.

To configure inline monitoring:

  1. Include the inline-monitoring statement at the [edit services] hierarchy level. Here you specify the template and inline-monitoring instance parameters. You must specify the collector parameters under the inline-monitoring instance.

  2. Specify arbitrary match conditions using a firewall filter term and an action to accept the configured inline-monitoring instance. This maps the inline-monitoring instance to the firewall term.

  3. Map the firewall filter under the family inet or inet6 statement using the inline-monitoring-instance statement at the [edit firewall filter name then] hierarchy level. Starting in Junos OS Release 21.1R1, you can also map the firewall filter under the family any, bridge, ccc, mpls, or vpls statements. Alternatively, you can apply the firewall filter as a forwarding table filter with the input or output statement to filter ingress or egress packets, respectively.

Remember:

  • The device must support a maximum packet length (clip length) of 126 bytes to enable inline monitoring services.

  • You cannot configure more than 16 inline-monitoring instances because of the scarcity of bits available in the packet in the forwarding path.

  • Apply inline monitoring services only on a collector interface, that is, the interface on which the collector is reachable. You must not apply inline monitoring to IPFIX traffic, because doing so generates another IPFIX packet for sampling, thereby creating a loop. This includes traffic generated by the inline monitoring service itself, such as template and record packets, and option template and option record packets.

  • When inline monitoring service is enabled on aggregated Ethernet (AE) interfaces, the information element values are as follows:

    Table 2: Information Element Values for Aggregated Ethernet Interfaces

    Direction of inline monitoring   Information element-10    Information element-14
    service on AE interface          (Incoming interface)      (Outgoing interface)
    Ingress                          SNMP ID of AE             0
    Egress                           SNMP ID of AE             SNMP ID of member link

  • When inline monitoring service is enabled on IRB interfaces, the information element values are as follows:

    Table 3: Information Element Values for IRB Interfaces

    Direction of inline monitoring   Information element-10    Information element-14
    service on IRB interface         (Incoming interface)      (Outgoing interface)
    Ingress                          SNMP ID of IRB            0
    Egress                           SNMP ID of IRB            SNMP ID of vlan-bridge encapsulated interface

  • For XL-XM based devices (with the Lookup chip (XL) and buffering ASIC (XM)), the length of the Data Link Frame Section information element in an exported packet can be shorter than the clip length even if the egress packet length is greater than the clip length.

    The length of the Data Link Frame Section information element is reduced by 'N' number of bytes where 'N' = (ingress packet Layer 2 encapsulation length - egress packet Layer 2 encapsulation length).

    For instance, the Layer 2 encapsulation length for the ingress packet is greater than that of the egress packet when the ingress packet has MPLS labels and egress packet is of IPv4 or IPv6 type. When traffic flows from the provider edge (PE) device to the customer edge (CE) device, the ingress packet has VLAN tags and the egress packet is untagged.

    In such cases, the clip length can go past the last address location of the packet head, generating a PKT_HEAD_SIZE system log message. This can result in degradation of packet forwarding for the device.

  • For inline monitoring services in the ingress direction, the egressInterface field (information element ID 14) does not report the SNMP index of the output interface; it always reports a value of zero. The receiving collector process should determine the validity of this field based on the flowDirection field (information element ID 61).

Supported and Unsupported Features with Inline Monitoring Services

Inline monitoring services supports:

  • Graceful Routing Engine switchover

  • In-service software upgrade (ISSU), nonstop software upgrade (NSSU), and nonstop active routing (NSR)

  • Ethernet interfaces and integrated routing and bridging (IRB) interfaces

  • Junos node slicing

Inline monitoring services currently does not support:

  • Ability to configure more than 16 inline-monitoring instances.

  • Junos Traffic Vision

  • The inline-monitoring-instance term action for firewall filter families other than inet and inet6, prior to Junos OS Release 21.1R1. Starting in Junos OS Release 21.1R1, the any, bridge, ccc, mpls, and vpls family firewall filters are supported.

  • IPv6 addressable collectors

  • Virtual platforms

  • Logical systems

Configuring Inline Monitoring Services

The inline monitoring services can monitor both IPv4 and IPv6 traffic on both ingress and egress directions. You can enable inline monitoring on MX Series routers with MPCs excluding MPC10E and MPC11E linecards.

SUMMARY You can configure inline monitoring services to monitor different streams of traffic at different sampling rates on the same logical unit of an interface. You can also export the original packet size to a collector, along with information on the interface of origin, for effective troubleshooting.

Before You Configure

When you configure inline monitoring services, you can:

  • Configure up to 16 inline-monitoring instances. Under each instance, you can configure specific collector and template parameters.

  • Configure up to 4 IPv4-addressable collectors under each inline-monitoring instance. In total, you can configure up to 64 collectors. The collectors can be remote, and at different locations.

    For each collector, you can configure specific parameters, such as source, destination address, sampling rate, forwarding class, and so on. The default routing-instance name at the collector is default.inet.

  • Configure an inet or inet6 family firewall filter with the term action inline-monitoring-instance inline-monitoring-instance-name. Starting in Junos OS Release 21.1R1, you can configure any, bridge, ccc, mpls, or vpls family firewall filters with the term action inline-monitoring-instance inline-monitoring-instance-name.

    Each term can support a different inline-monitoring instance.

  • Attach the inline monitoring firewall filter under the family of the logical unit of the interface.

After successfully committing the configuration, you can verify the implementation of the inline monitoring services by issuing the show services inline-monitoring statistics fpc-slot command from the device CLI.
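For example, to view statistics for the Packet Forwarding Engine in FPC slot 0 (the slot number is a placeholder for your hardware):

```
user@device> show services inline-monitoring statistics fpc-slot 0
```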

Note:

If a packet requires inline monitoring services to be applied along with any of the traditional sampling technologies (such as JFlow, sFlow, or port mirroring), the Packet Forwarding Engine performs both inline monitoring services and the traditional sampling technology on that packet.

Figure 6 is a sample illustration of inline monitoring services, where traffic is monitored at two different sampling rates on the device interface, and exported to four remote collectors in an IPFIX encapsulation format.

Figure 6: Inline Monitoring Services

In this example, the et-1/0/0 interface of the device is configured with inline monitoring services. The details of the configurations are as follows:

  • There are two inline-monitoring instances — Instance 1 and Instance 2.

  • There are four collectors, two collectors under each inline monitoring instance.

    • Instance 1 has Collector-1 and Collector-2.

    • Instance 2 has Collector-101 and Collector-102.

  • The collectors on Instance 1 have a sampling rate of 1:10000.

  • The collectors on Instance 2 have a sampling rate of 1:1.

  • Instance 1 collectors have a source and destination address of 1.1.1.1 and 2.2.2.1, respectively.

  • Instance 2 collectors have a source and destination address of 11.1.1.1 and 12.2.2.1, respectively.

  • The packets are exported to the collectors in an IPFIX encapsulated format.

To configure inline monitoring services:

  1. Define a firewall filter for each inline-monitoring instance. You can configure a family firewall filter with the term action inline-monitoring-instance.

    To define a firewall filter:

    In this example, Terms t1 and t2 are configured for Instance1 and Instance2, respectively.
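    A minimal set-style sketch of such a filter is shown below. The term names and instance names come from this example; the filter name and the source-address match conditions are illustrative placeholders:

```
set firewall family inet filter inline-mon-filter term t1 from source-address 10.10.1.0/24
set firewall family inet filter inline-mon-filter term t1 then accept
set firewall family inet filter inline-mon-filter term t1 then inline-monitoring-instance Instance1
set firewall family inet filter inline-mon-filter term t2 from source-address 10.10.2.0/24
set firewall family inet filter inline-mon-filter term t2 then accept
set firewall family inet filter inline-mon-filter term t2 then inline-monitoring-instance Instance2
```

    Each term accepts the matched traffic and hands it to its inline-monitoring instance, so different streams on the same interface can be monitored independently.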

  2. Enable inline monitoring services by configuring the associated template, instance, and collector parameters.
    1. To configure the inline monitoring services template:

      In this example, templates template-1 and template-2 are configured.
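      A set-style sketch of the two templates follows. The refresh-rate and template-id values are illustrative placeholders; verify the available template statements against your Junos OS release:

```
set services inline-monitoring template template-1 template-refresh-rate 30
set services inline-monitoring template template-1 template-id 1024
set services inline-monitoring template template-1 option-template-id 1025
set services inline-monitoring template template-2 template-refresh-rate 30
set services inline-monitoring template template-2 template-id 2048
set services inline-monitoring template template-2 option-template-id 2049
```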

    2. To configure inline monitoring instance and collector parameters:

      In this example, Instance1 has two collectors, collector-1 and collector-2, and Instance2 has two collectors, collector-101 and collector-102. Different sampling rates have been configured for both the instances.
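      In set-style configuration, the instance and collector parameters from this example might look as follows. The instance names, collector names, addresses, and sampling rates are taken from the example summary; the collector-port values (4739 is the IANA-assigned IPFIX port) are illustrative assumptions:

```
set services inline-monitoring instance Instance1 template-name template-1
set services inline-monitoring instance Instance1 collector collector-1 source-address 1.1.1.1
set services inline-monitoring instance Instance1 collector collector-1 destination-address 2.2.2.1
set services inline-monitoring instance Instance1 collector collector-1 collector-port 4739
set services inline-monitoring instance Instance1 collector collector-1 sampling-rate 10000
set services inline-monitoring instance Instance1 collector collector-2 source-address 1.1.1.1
set services inline-monitoring instance Instance1 collector collector-2 destination-address 2.2.2.1
set services inline-monitoring instance Instance1 collector collector-2 collector-port 4740
set services inline-monitoring instance Instance1 collector collector-2 sampling-rate 10000
set services inline-monitoring instance Instance2 template-name template-2
set services inline-monitoring instance Instance2 collector collector-101 source-address 11.1.1.1
set services inline-monitoring instance Instance2 collector collector-101 destination-address 12.2.2.1
set services inline-monitoring instance Instance2 collector collector-101 collector-port 4739
set services inline-monitoring instance Instance2 collector collector-101 sampling-rate 1
set services inline-monitoring instance Instance2 collector collector-102 source-address 11.1.1.1
set services inline-monitoring instance Instance2 collector collector-102 destination-address 12.2.2.1
set services inline-monitoring instance Instance2 collector collector-102 collector-port 4740
set services inline-monitoring instance Instance2 collector collector-102 sampling-rate 1
```

      A sampling-rate of 10000 samples one of every 10000 packets (1:10000); a sampling-rate of 1 samples every packet (1:1).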

  3. Map the firewall filter under the family of the logical unit of the interface to apply inline monitoring in the ingress or egress direction.

    Alternatively, you can apply inline monitoring by mapping the firewall filter to a forwarding table filter with an input or output statement to filter ingress or egress packets, respectively.

    To attach the firewall filter:

    In this example, the inline monitoring filter is attached to family inet of unit 0 of et-1/0/0.
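    In set-style configuration, attaching a filter named inline-mon-filter (a placeholder name) to family inet of unit 0 of et-1/0/0 for ingress monitoring might look like this:

```
set interfaces et-1/0/0 unit 0 family inet filter input inline-mon-filter
```

    Use filter output instead of filter input to monitor the egress direction.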

Configure Flow-Based Telemetry (EX4400 Series)

Starting in Junos OS Release 21.1R1, you can configure flow-based telemetry (FBT) for the EX4400 Series switches. FBT enables per-flow-level analytics, using inline monitoring services to create flows, collect them, and export them to a collector. A flow is a sequence of packets that have the same source IP, destination IP, source port, destination port, and protocol on an interface. For each flow, various parameters are collected and sent to a collector using the open standard IPFIX template to organize the flow. Once there is no active traffic for a flow, the flow is aged out after the configured inactive-timeout period (configure the flow-inactive-timeout statement at the [edit services inline-monitoring template template-name] hierarchy level).

Limitations:

  • IRB interfaces are not supported.
  • Only 8 inline-monitoring instances are supported.
  • You cannot configure an option template identifier or a forwarding class.
  • The IPFIX Option Data Record and IPFIX Option Data Template are not supported.
  • If you do not want the default values for the flow-export timer (10 seconds) and observation-domain identifier (1), you must first configure the flow-export-timer and observation-domain-id statements at the [edit system packet-forwarding-options] hierarchy level, commit the configuration, and reboot the system.

You must get a subscription-based license to enable FBT. To check if you have a license for FBT, issue the show system license command in operational mode:

Before you can configure flow-based telemetry, if you do not want the default values for the flow-export timer (10 seconds) and observation-domain identifier (1), you must first define the flow-export timer and the observation domain identifier, commit the configuration, and reboot the system. The software exports an IPFIX packet periodically at the configured flow-export timer interval. The observation domain identifier is used in the IPFIX packet to identify which line card sent the packet to the collector. Once set, the software derives a unique identifier for each line card based on the system value set here. To configure:

In this example, the flow-export timer is set to 10 seconds and the observation domain identifier is set to 25:
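Using the flow-export-timer and observation-domain-id statements at the [edit system packet-forwarding-options] hierarchy level named above, this looks like:

```
set system packet-forwarding-options flow-export-timer 10
set system packet-forwarding-options observation-domain-id 25
```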

The system then prompts you to reboot the system.

To configure flow-based telemetry:

  1. Define the IPFIX template.

    To configure attributes of the template:

    In this example, the inactive-flow timeout period is set to 10 seconds, the template refresh rate is set to 30 seconds, and you've configured a template identifier:
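    A set-style sketch with the values from this example follows. The template name and the template-id value are illustrative placeholders:

```
set services inline-monitoring template template_1 flow-inactive-timeout 10
set services inline-monitoring template template_1 template-refresh-rate 30
set services inline-monitoring template template_1 template-id 1024
```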

  2. Attach a template to the instance and describe the collector.

    To configure the instance and collector:

    In this example, you create a template with the name template_1, create an inline-monitoring instance i1, and create the configuration for the collector c2:
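    A set-style sketch of the instance and collector follows. The source and destination addresses are illustrative placeholders, and the collector-port value (4739, the IANA-assigned IPFIX port) is an assumption:

```
set services inline-monitoring instance i1 template-name template_1
set services inline-monitoring instance i1 collector c2 source-address 10.0.0.1
set services inline-monitoring instance i1 collector c2 destination-address 10.0.0.2
set services inline-monitoring instance i1 collector c2 collector-port 4739
```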

  3. Create a firewall filter and configure the action inline-monitoring-instance.

    To configure the firewall filter:

    In this example, you configure an IPv4 firewall filter named ipv4_ingress, with the term name rule1 containing the action inline-monitoring-instance, and the inline monitoring instance i1 is mapped to it:
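    A set-style sketch of the filter follows. The filter name, term name, and instance name come from this example; the source-address match condition (a documentation prefix) is an illustrative placeholder:

```
set firewall family inet filter ipv4_ingress term rule1 from source-address 198.51.100.0/24
set firewall family inet filter ipv4_ingress term rule1 then accept
set firewall family inet filter ipv4_ingress term rule1 then inline-monitoring-instance i1
```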

  4. Map the firewall filter to the family under the logical unit of the already-configured interface to apply inline monitoring in the ingress direction.

    To map the firewall filter:

    In this example, you map the ipv4_ingress firewall filter to the inet family of logical interface 0 of the physical interface et-0/0/1:
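    In set-style configuration, mapping the filter to the interface from this example looks like:

```
set interfaces et-0/0/1 unit 0 family inet filter input ipv4_ingress
```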

  5. (Optional) Configure the sampling profile and rate, configure the profile for which counters to export to the collector, configure the flow rate and burst size, and enable security analytics for flow-based telemetry:

    To configure the flow-monitoring properties:

    In this example, the sampling profile is set to Random, the sampling rate is set to every 512 bytes, the counter profile is set to Per_flow_6_counters, the flow-rate is set to 100000 kbps, the burst-size is set to 2048 bytes, and security analytics are enabled: