Flow-Based Telemetry (EX4100, EX4100-F, and EX4400 Series)

Flow-based telemetry (FBT) enables per-flow-level analytics. It uses inline monitoring services to create flows, collect them, and export them to a collector, using the open standard IPFIX template to organize each flow.

FBT Overview

You can configure flow-based telemetry (FBT) for the EX4100, EX4100-F, and EX4400 Series switches. FBT enables per-flow-level analytics, using inline monitoring services to create flows, collect them, and export them to a collector. With inline monitoring services, you can monitor every IPv4 and IPv6 packet in both the ingress and egress directions of an interface. A flow is a sequence of packets that have the same source IP address, destination IP address, source port, destination port, and protocol on an interface. For each flow, the software collects various parameters and exports the actual packet, up to the configured clip length, to a collector using the open standard IPFIX template to organize the flow. Once there is no active traffic for a flow, the flow is aged out after the configured inactive-timeout period (configure the flow-inactive-timeout statement at the [edit services inline-monitoring template template-name] hierarchy level). The software exports an IPFIX packet periodically at the configured flow-export timer interval. The observation domain identifier in the IPFIX packet identifies which line card sent the packet to the collector. Once you set this value, the software derives a unique identifier for each line card from it.

Benefits of FBT

With FBT, you can:

  • Count packet, TTL, and TCP window ranges
  • Track and count Denial of Service (DoS) attacks
  • Analyze the load distribution of ECMP groups/link aggregation groups (LAG) over the member IDs (EX4100 and EX4100-F only)
  • Track traffic congestion (EX4100 and EX4100-F only)
  • Gather information about multimedia flows (EX4100 and EX4100-F only)
  • Gather information on why packets are dropped (EX4100 and EX4100-F only)

FBT Flow Export Overview

See Figure 1 for a sample template, which shows the information element IDs, names, and sizes:

Figure 1: Sample FBT Information Element Template

Figure 2 shows the format of a sample IPFIX data template for FBT:

Figure 2: Sample FBT IPFIX Data Template

Figure 3 shows the format of a sample exported IPFIX flow for FBT:

Figure 3: Sample Exported IPFIX Flow for FBT
Table 1: Element Mapping

Element | Enterprise Element ID | Description
TIMESTAMP_FLOWSTART_VAL | 1 | Timestamp at which the TCP flow collection started.
TIMESTAMP_FLOWEND_VAL | 2 | Timestamp at which the TCP flow collection ended.
TIMESTAMP_NEW_LEARN_VAL | 3 | Timestamp at which a new flow is learned in the flow table.
PKT_RANGE_CNTR1_VAL through PKT_RANGE_CNTR8_VAL | 4 through 11 | Number of packets in each packet-size category. You can choose 4 or 6 size categories under the template; the system sorts packets into the corresponding size buckets and counts them (counter-profile feature).
MIN_PKT_LENGTH_VAL | 12 | Number of packets with a size above the defined size. The configurable size range is 64 through 9000 bytes.
MAX_PKT_LENGTH_VAL | 13 | Number of packets with a size below the defined size. The configurable size range is 64 through 9000 bytes.
TCP_WINDOW_RANGE_CNTR_VAL | 15 | Number of packets within the specified TCP window range.
DOS_ATTACK_ID_VAL | 16 | Reports the DDoS attack vector.
TTL_RANGE1_CNTR_VAL | 17 | Number of packets within a specific TTL value range.
TTL_RANGE2_CNTR_VAL | 18 | Number of packets within a specific TTL value range.
DOS_ATTACK_PKT_CNTR_VAL | 19 | Number of DDoS attack packets.
CUSTOM_PKT_RANGE_START_VAL | 20 | Number of packets within the configured size range. You can define the size range (64 through 9000 bytes) by configuring the counter-profile statement at the [edit services inline-monitoring] hierarchy level. For example: set services inline-monitoring counter-profile c1 counter p1 counter-type packet-range min-value 1000 max-value 1500.
CUSTOM_TTL_RANGE_START_VAL | 30 | Number of packets within the configured TTL range. You can define the TTL range (0 through 255) by configuring the counter-profile statement at the [edit services inline-monitoring] hierarchy level. For example: set services inline-monitoring counter-profile c1 counter p1 counter-type ttl-range min-value 10 max-value 15.
CUSTOM_TCP_WINDOW_RANGE_START_VAL | 40 | Number of packets within the configured TCP window range. You can define the TCP window range (0 through 65535) by configuring the counter-profile statement at the [edit services inline-monitoring] hierarchy level. For example: set services inline-monitoring counter-profile c1 counter p1 counter-type tcp-window-range min-value 100 max-value 5000.
INTER_ARRIVAL_TIME | 50 | Time difference between two consecutive packets at ingress (per flow).
INTER_DEPARTURE_TIME | 51 | Time difference between two consecutive packets at egress (per flow).
CHIP_DELAY | 52 | Amount of time the packet takes to transit the ASIC.
SHARED_POOL_CONGESTION | 53 | Shared pool congestion level.
QUEUE_CONGESTION_LEVEL | 54 | Queue congestion level.
INGRESS_DROP_REASON | 55 | Reason the packet is dropped at ingress.
INGRESS_DROP_REASON_PKT_CNTR_VAL | 56 | Number of packets dropped at ingress.
EGRESS_DROP_REASON | 57 | Reason the packet is dropped at egress.
EGRESS_DROP_REASON_PKT_CNTR_VAL | 58 | Number of packets dropped at egress.
AGGREGATE_INTF_MEMBER_ID | 59 | ID of a member of a link aggregation group (LAG) or equal-cost multipath (ECMP) group.
AGGREGATE_INTF_GROUP_ID | 60 | ID of a link aggregation group (LAG).
MMU_QUEUE_ID | 61 | Queue ID to which the packet belongs.
UNKNOWN_ID_VAL | 254 | Internal to Juniper; not applicable to customers.
RESERVED_ID_VAL | 255 | Internal to Juniper; not applicable to customers.

When you create a new inline monitoring services configuration or change an existing one, the software immediately sends the periodic flow export of the data template to the respective collectors, instead of waiting until the next scheduled send time.

Limitations and Caveats

  • IRB interfaces are supported. Starting in Junos OS Release 25.2R1, Layer 2 firewall filters are supported.
  • Only 8 inline-monitoring instances and 8 collectors per instance are supported.
  • Flow records are limited to 128 bytes in length.
  • The collector must be reachable through either the loopback interface or a network interface, not only through a management interface.
  • You can configure a collector only within the same routing instance as the data. You cannot configure a collector within a different routing instance.

  • You cannot configure an option template identifier or a forwarding class.
  • The IPFIX Option Data Record and IPFIX Option Data Template are not supported.
  • Feature profiles are not supported on EX4400 switches.
  • If you make any changes to the feature-profile configuration, you must reboot the device.
  • (EX4100 and EX4100-F only) If you configure any of the congestion or egress features in the feature profile for an inline-monitoring instance, you cannot configure a counter profile for a template in that instance.
  • (EX4100 and EX4100-F only) Because the congestion and egress features collect a lot of data, you can only configure 4 or 5 of these features per inline-monitoring instance.
  • (EX4100 and EX4100-F only) For multicast flow tracking, one ingress copy can produce multiple egress copies. All copies may update the same entry. Therefore, you can track the aggregate results of all copies of the same multicast flow.

Licenses

You must have a permanent license to enable FBT. For the EX4100 and EX4100-F switches, you need license S-EX4100-FBT-P; for the EX4400 switches, you need license S-EX-FBT-P. To check whether you have a license for FBT, issue the show system license command in operational mode.

Drop Vectors (EX4100 and EX4100-F only)

FBT can report more than 100 drop reasons. The full drop vector is too large to fit in a flow record, so the software groups and compresses the drop reasons into a 16-bit compressed drop vector and passes that vector to the flow table. Each 16-bit compressed drop vector corresponds to a particular drop vector group. Table 2 and Table 3 describe how drop reasons are grouped to form a particular 16-bit compressed drop vector.

Table 2: Ingress Drop Vector Groups (EX4100 and EX4100-F only)

Group ID | Drop Reason
1 | MMU drop
2 | TCAM, PVLAN
3 | DoS attack or LAG loopback fail
4 | Invalid VLAN ID, invalid TPID, or the port is not in the VLAN
5 | Spanning Tree Protocol (STP) forwarding, bridge protocol data unit (BPDU), Protocol, CML
6 | Source route, L2 source discard, L2 destination discard, L3 disable, and so on
7 | L3 TTL, L3 header, L2 header, L3 source lookup miss, L3 destination lookup miss
8 | ECMP resolution, storm control, ingress multicast, ingress next-hop error

Table 3: Egress Drop Vector Groups (EX4100 and EX4100-F only)

Group ID | Drop Reason
1 | MMU unicast traffic
2 | MMU weighted random early detection (WRED) unicast traffic
3 | MMU RQE
4 | MMU multicast traffic
5 | Egress TTL, stgblock
6 | Egress field processor drops
7 | IPMC drops
8 | Egress quality of service (QoS) control drops

Configure FBT (EX4100, EX4100-F, and EX4400 Series)

FBT enables per-flow-level analytics, using inline monitoring services to create flows, collect them, and export them to a collector. A flow is a sequence of packets that have the same source IP address, destination IP address, source port, destination port, and protocol on an interface. For each flow, the software collects various parameters and sends them to a collector using the open standard IPFIX template to organize the flow. Once there is no active traffic for a flow, the flow is aged out after the configured inactive-timeout period (configure the flow-inactive-timeout statement at the [edit services inline-monitoring template template-name] hierarchy level). The software exports an IPFIX packet periodically at the configured flow-export timer interval. The observation domain identifier in the IPFIX packet identifies which line card sent the packet to the collector. Once you set this value, the software derives a unique identifier for each line card from it.

To configure flow-based telemetry:

  1. Define the IPFIX template.

    To configure attributes of the template:

In this example, the inactive-flow timeout period is set to 10 seconds, the observation domain ID is set to 25, the template refresh rate is set to 30 seconds, and a template identifier is configured.
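The attributes described in this step might be configured as follows. This is a minimal sketch in Junos set syntax, assuming the template is named template_1 (the name used in the next step); the template-id value of 1024 is illustrative, because this topic does not give one:

```
set services inline-monitoring template template_1 flow-inactive-timeout 10
set services inline-monitoring template template_1 observation-domain-id 25
set services inline-monitoring template template_1 template-refresh-rate 30
set services inline-monitoring template template_1 template-id 1024
```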

  2. Attach a template to the instance and describe the collector.

    To configure the instance and collector:

    In this example, you create a template with the name template_1, create an inline-monitoring instance i1, and create the configuration for the collector c2:
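A minimal sketch of this step in set syntax. The source address, destination address, and destination port shown are placeholder values, not values from this topic (4739 is the standard IPFIX port):

```
set services inline-monitoring instance i1 template-name template_1
set services inline-monitoring instance i1 collector c2 source-address 192.0.2.1
set services inline-monitoring instance i1 collector c2 destination-address 192.0.2.100
set services inline-monitoring instance i1 collector c2 destination-port 4739
```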

  3. Create a firewall filter and configure the action inline-monitoring-instance.

    To configure the firewall filter:

    In this example, you configure an IPv4 firewall filter named ipv4_ingress, with the term name rule1 containing the action inline-monitoring-instance, and the inline monitoring instance i1 is mapped to it:
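In set syntax, this firewall filter might look like the following sketch. The final accept action is an assumption, included so that monitored traffic is still forwarded:

```
set firewall family inet filter ipv4_ingress term rule1 then inline-monitoring-instance i1
set firewall family inet filter ipv4_ingress term rule1 then accept
```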

  4. Map the firewall filter to the family under the logical unit of the already-configured interface to apply inline monitoring in the ingress direction.

    To map the firewall filter:

    In this example, you map the ipv4_ingress firewall filter to the inet family of logical interface 0 of the physical interface et-0/0/1:
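A sketch of this mapping in set syntax, using the interface and filter names from the example:

```
set interfaces et-0/0/1 unit 0 family inet filter input ipv4_ingress
```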

  5. (Optional) Configure the sampling profile and rate, configure the profile for which counters to export to the collector, configure the flow rate and burst size, and enable security analytics for flow-based telemetry:

    To configure the flow-monitoring properties:

    In this example, the sampling profile is set to Random, the sampling rate is set to every 512 bytes, the counter profile is set to Per_flow_6_counters, the flow-rate is set to 100000 kbps, the burst-size is set to 2048 bytes, and security analytics are enabled:
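Of the properties in this step, only the counter-profile statement is quoted elsewhere in this topic. The following sketch applies it to the profile name used in the example, with an illustrative packet-size bucket; confirm the statements for the sampling profile, flow rate, burst size, and security analytics against the CLI on your switch, because this topic does not spell them out:

```
set services inline-monitoring counter-profile Per_flow_6_counters counter p1 counter-type packet-range min-value 1000 max-value 1500
```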

  6. (Optional, EX4100 and EX4100-F switches only) Configure a feature profile to collect more data about packets as they move through the switch.

    For example, you could monitor congestion or collect information about why packets are being dropped. You can enable security analytics either here or in the previous step. To configure a feature profile:

    You must reboot the system for the feature profile to take effect. Because the aggregate interface distribution monitoring, congestion, and egress features collect a lot of data, you can only configure 4 or 5 of these features per inline-monitoring instance. The statements that configure these features are:

    • aggregate-intf-member-id

    • egress-drop-reason

    • inter-departure-time

    • queue-congestion-level

    • shared-pool-congestion

    After you commit the configuration and reboot the system, use the show services inline-monitoring feature-profile-mapping fpc-slot slot-number command to verify that the features have been successfully configured.
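Based on the feature-profile name features statement noted in the change history and the feature statements listed above, a feature profile might be sketched as follows. The profile name fp1 is illustrative, and only four of the data-heavy features are enabled so as to stay within the four-to-five feature limit:

```
set services inline-monitoring feature-profile fp1 features queue-congestion-level
set services inline-monitoring feature-profile fp1 features shared-pool-congestion
set services inline-monitoring feature-profile fp1 features egress-drop-reason
set services inline-monitoring feature-profile fp1 features inter-departure-time
```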

  7. After committing the configuration, monitor inline-monitoring statistics with the show services inline-monitoring statistics fpc-slot slot-number command.

Change History Table

Feature support is determined by the platform and release you are using. Use Feature Explorer to determine if a feature is supported on your platform.

Release
Description
22.2R1
You can now configure flow-based telemetry (FBT) for the EX4100 and EX4100-F Series switches, and configure additional items to track for a flow using the feature-profile name features statement at the [edit services inline-monitoring] hierarchy level.
21.1R1
You can configure flow-based telemetry (FBT) for the EX4400 Series switches. FBT enables per-flow-level analytics, using inline monitoring services to create flows, collect them, and export them to a collector.