Network Analytics
This section describes the network analytics feature that provides visibility into the performance and behavior of the data center infrastructure. It collects data from the switch, analyzes the data by using sophisticated algorithms, and captures the results in reports. Network administrators can use the reports to troubleshoot problems, make decisions, and adjust resources as needed.
Network Analytics Overview
The analytics manager (analyticsm) in the Packet Forwarding Engine collects traffic and queue statistics, and the analytics daemon (analyticsd) in the Routing Engine analyzes the data and generates reports.
Analytics Feature Overview
You enable network analytics by configuring queue (microburst) monitoring and high-frequency traffic statistics monitoring.
Queue (microburst) monitoring:
You use microburst monitoring to observe traffic queue conditions on the network. A microburst occurrence indicates that a user-specified queue depth or latency threshold has been reached in the Packet Forwarding Engine. The queue depth is the amount of data (in bytes) buffered in the queue, and latency is the time (in nanoseconds or microseconds) that the data stays in the queue.
You can configure queue monitoring based on either queue depth or latency (but not both), and configure the frequency (polling interval) at which the Packet Forwarding Engine checks for microbursts and sends the data to the Routing Engine for processing. You may configure queue monitoring globally for all physical interfaces on the system, or for a specific interface on the switch. However, the specified queue monitoring interval applies either to all interfaces, or none; you cannot configure the interval for each interface.
High-frequency traffic statistics monitoring:
You use high-frequency traffic statistics monitoring to collect traffic statistics at specified polling intervals. Similar to the queue monitoring interval, the traffic monitoring interval applies either to all interfaces, or none; you cannot configure the interval for each interface.
Both traffic and queue monitoring are disabled by default. You must configure each type of monitoring using the CLI. In each case, the configuration for an interface always takes precedence over the global configuration.
You can configure traffic and queue monitoring for physical interfaces only; logical interfaces and Virtual Chassis port (VCP) interfaces are not supported.
The analyticsd daemon in the Routing Engine generates local log files containing queue and traffic statistics records. You can specify the log filename and size, and the number of log files. If you do not configure a filename, the data is not saved.
You can display the local log file or specify a server to receive the streaming data containing the queue and traffic statistics.
For each port, information for the last 10 records of traffic
statistics and 100 records of queue statistics is cached. You may
view this information by using the show analytics commands.
To store traceoptions data, you configure the traceoptions statement at the [edit services analytics] hierarchy
level.
Network Analytics Enhancements Overview
The network analytics feature provides the following enhancements:
Resources—Consist of interfaces and system. The interfaces resource allows you to configure an interface name and an associated resource profile name for each interface. With the system resource, you can configure the polling intervals for queue monitoring and traffic monitoring, and an associated resource profile for the system.
Resource profile—A template that contains the configurations for queue and traffic monitoring, such as depth threshold and latency threshold values, and whether each type of monitoring is enabled or disabled. Once a resource profile is configured, you apply it to a system or interfaces resource.
Collector—A server that collects queue and traffic monitoring statistics; it can be a local or remote server. You can configure a local server to store monitoring statistics in a log file, or a remote server to receive streamed statistics data.
Export profile—You must configure an export profile if you wish to send streaming data to a remote collector. In the export profile, you define the category of streamed data (system-wide or interface-specific) to determine the stream type that the collector receives. You can specify both system and interface stream categories. System data includes system information and the status of queue and traffic monitoring. Interface-specific data includes interface information, queue and traffic statistics, and link, queue, and traffic status.
Google Protocol Buffer (GPB) stream format—A new streaming format for monitoring statistics data that is sent to a remote collector in a single AnRecord message. The format of this stream, which provides nine types of information, is shown in Table 1.
Table 1: Google Protocol Buffer (GPB) Stream Format

| Message | Description |
|---|---|
| System information | General system information, including boot time, model information, serial number, number of ports, and so on |
| System queue status | Queue status for the system in general |
| System traffic status | Traffic status for the system in general |
| Interface information | Includes SNMP index, slot, port, and other information |
| Queue statistics for interfaces | Queue statistics for specific interfaces |
| Traffic statistics for interfaces | Traffic statistics for specific interfaces |
| Link status for interfaces | Includes link speed, state, and so on |
| Queue status for interfaces | Queue status for specific interfaces |
| Traffic status for interfaces | Traffic status for specific interfaces |
The analytics.proto file—Provides a template for the GPB stream format. You can use this file for writing your analytics server application. To download the file, go to: /documentation/en_US/junos13.2/topics/reference/proto-files/analytics-proto.txt
Use of threshold values—The Analytics Manager (analyticsm) will generate a queue statistics record when the lower queue depth or latency threshold value is exceeded.
User Datagram Protocol (UDP)—Additional transport protocol you can configure, in addition to Transmission Control Protocol (TCP), for the remote streaming server port.
Single file for local logging—Replaces the separate log files for queue and traffic statistics.
Change in latency measurement—Configuration and reporting of latency values have changed from microseconds to nanoseconds.
Change in reporting of the collection time in UTC format—Statistics collection time is reported in microseconds instead of milliseconds.
New operational mode command—The show analytics collector command replaces the show analytics streaming-server command.

Changes in command output format—Include the following changes:

Addition of unicast, multicast, and broadcast packet counters in queue and traffic statistics.

Reversal of the sequence of statistics information in the output. The most recent record is displayed at the beginning, and the oldest record at the end of the output.

Removal of traffic or queue monitoring status information from the global portion of the show analytics configuration and show analytics status command output if there is no global configuration.

Addition of n/a to the interface-specific portion of the show analytics configuration and show analytics status command output if a parameter is not configured (for example, depth threshold or latency threshold).
Summary of CLI Changes
Enhancements to the network analytics feature result in changes in the CLI when you configure the feature. See Table 2 for a summary of CLI changes.
Table 2: Summary of CLI Changes (Junos OS Release 13.2X51-D15 and later)

Task: Configuring the global queue and traffic monitoring polling interval
CLI:

resource {
    system {
        polling-interval {
            queue-monitoring interval;
            traffic-monitoring interval;
        }
    }
}

Task: Configuring local files for traffic and queue statistics reporting
CLI:

collector {
    local {
        file filename {
            files number;
            size size;
        }
    }
}

Task: Enabling queue statistics and traffic monitoring, and specifying the depth threshold for all interfaces (globally)
CLI: Requires defining a resource profile and applying it to the system.

Task: Enabling queue statistics and traffic monitoring, and specifying the latency threshold for one interface
CLI: Requires defining a resource profile and applying it to the interface.

Task: Configuring the streaming data format (JSON, CSV, or TSV) to send to a remote server. (Note: Junos OS added support for the GPB stream format and configuration of the transport protocols, TCP or UDP.)
CLI: Requires defining the stream format in an export profile and applying the profile to the collector.

Task: Configuring the streaming message types (queue or traffic statistics) to send to a remote server
CLI: Requires defining an export profile and applying it to the collector.

Task: Configuring the transport protocol for sending streaming data to an external server
CLI: Both TCP and UDP protocols are supported, and can be configured for the same port.

collector {
    address ip-address {
        port number1 {
            transport tcp;
            transport udp;
        }
        port number2 {
            transport udp;
        }
    }
}

Task: Show information about the remote streaming server or collector
CLI: Issue the show analytics collector command.
Understand Network Analytics Streaming Data
Network analytics monitoring data can be streamed to remote servers called collectors. You can configure one or more collectors to receive streamed data containing queue and traffic statistics. This topic describes the streamed data output.
Network analytics provides support for the following streaming data formats:

- JavaScript Object Notation (JSON)
- Comma-separated Values (CSV)
- Tab-separated Values (TSV)
- Google Protocol Buffer (GPB)

Support for the GPB stream format is added in addition to the JSON, CSV, and TSV formats.

For the JSON, CSV, and TSV output shown in this topic, the time is displayed in the Unix epoch format (also known as Unix time or POSIX time).
JavaScript Object Notation (JSON)
The JavaScript Object Notation (JSON) streaming format supports the following data:
Queue statistics data. For example:
{"record-type":"queue-stats","time":1383453988263,"router-id":"qfx5100-switch", "port":"xe-0/0/18","latency":0,"queue-depth":208}See Table 3 for more information about queue statistics output fields.
Traffic statistics. For example:
{"record-type":"traffic-stats","time":1383453986763,"router-id":"qfx5100-switch", "port":"xe-0/0/16","rxpkt":26524223621,"rxpps":8399588,"rxbyte":3395100629632, "rxbps":423997832,"rxdrop":0,"rxerr":0,"txpkt":795746503,"txpps":0,"txbyte":101855533467, "txbps":0,"txdrop":0,"txerr":0}See Table 4 for more information about traffic statistics output fields.
Comma-separated Values (CSV)
The Comma-separated Values (CSV) streaming format supports the following data:
Queue statistics. For example:
q,1383454067604,qfx5100-switch,xe-0/0/18,0,208
See Table 3 for more information about queue statistics output fields.
Traffic statistics. For example:
t,1383454072924,qfx5100-switch,xe-0/0/19,1274299748,82950,163110341556,85603312,0,0,27254178291,8300088,3488534810679,600002408,27268587050,3490379142400
See Table 4 for more information about traffic statistics output fields.
Tab-separated Values (TSV)
The Tab-separated Values (TSV) streaming format supports the following data:
Queue statistics. For example:
q 585870192561703872 qfx5100-switch xe-0/0/18 (null) 208 2
See Table 3 for more information about queue statistics output fields.
Traffic statistics. For example:
t 1383454139025 qfx5100-switch xe-0/0/19 1279874033 82022 163823850036 84801488 0 0 27811618258 8199630 3559887126455 919998736 27827356915 3561901685120
See Table 4 for more information about traffic statistics output fields.
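The CSV and TSV records carry the same fields, in the same order, as described in Table 3 and Table 4 below, so a collector can map the values to names positionally. The following Python sketch of that mapping is illustrative only; the field lists simply restate the tables.

```python
# Illustrative positional parser for the CSV and TSV records shown above.
# Field names and order follow Table 3 (queue) and Table 4 (traffic).
QUEUE_FIELDS = ("record-type", "time", "router-id", "port",
                "latency", "queue-depth")
TRAFFIC_FIELDS = ("record-type", "time", "router-id", "port",
                  "rxpkt", "rxpps", "rxbyte", "rxbps", "rxdrop", "rxerr",
                  "txpkt", "txpps", "txbyte", "txbps", "txdrop", "txerr")


def parse_record(line):
    # CSV records are comma-separated; TSV records are tab-separated.
    values = line.split(",") if "," in line else line.split()
    names = QUEUE_FIELDS if values[0] == "q" else TRAFFIC_FIELDS
    return dict(zip(names, values))


print(parse_record("q,1383454067604,qfx5100-switch,xe-0/0/18,0,208"))
```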
Queue Statistics Output for JSON, CSV, and TSV
Table 3 describes the output fields for streamed queue statistics data in the order they appear.
| Field | Description |
|---|---|
| record-type | Type of statistics. Displayed as queue-stats in JSON output and as q in CSV and TSV output. |
| time | Time (in Unix epoch format) at which the statistics were captured. |
| router-id | ID of the network analytics host device. |
| port | Name of the physical port configured for network analytics. |
| latency | Traffic queue latency in milliseconds. |
| queue-depth | Depth of the traffic queue in bytes. |
Traffic Statistics Output for JSON, CSV, and TSV
Table 4 describes the output fields for streamed traffic statistics data in the order they appear.
| Field | Description |
|---|---|
| record-type | Type of statistics. Displayed as traffic-stats in JSON output and as t in CSV and TSV output. |
| time | Time (in Unix epoch format) at which the statistics were captured. |
| router-id | ID of the network analytics host device. |
| port | Name of the physical port configured for network analytics. |
| rxpkt | Total packets received. |
| rxpps | Total packets received per second. |
| rxbyte | Total bytes received. |
| rxbps | Total bytes received per second. |
| rxdrop | Total incoming packets dropped. |
| rxerr | Total received packets with errors. |
| txpkt | Total packets transmitted. |
| txpps | Total packets transmitted per second. |
| txbyte | Total bytes transmitted. |
| txbps | Total bytes transmitted per second. |
| txdrop | Total transmitted packets dropped. |
| txerr | Total transmitted packets with errors (dropped). |
Google Protocol Buffer (GPB)
The GPB streaming format:

- Supports nine types of messages, based on resource type (system-wide or interface-specific).
- Sends messages in a hierarchical format.
- Enables you to generate other stream format messages (JSON, CSV, TSV) from GPB-formatted messages.
- Includes an 8-byte message header. See Table 5 for more information.
Table 5 describes the GPB stream format message header.
| Byte Position | Field |
|---|---|
| 0 to 3 | Length of message |
| 4 | Message version |
| 5 to 7 | Reserved for future use |
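Table 5 does not state the byte order of the 4-byte length field or whether the length covers the header itself; the sketch below assumes network byte order (big-endian) and a length that covers only the message body that follows. It is meant only to illustrate how a collector might split the TCP byte stream into individual GPB messages.

```python
# Illustrative framing logic for the 8-byte GPB message header in Table 5.
# Assumptions: big-endian length field; length = size of the body that follows.
import socket
import struct


def read_exact(sock, count):
    """Read exactly count bytes from the socket."""
    data = b""
    while len(data) < count:
        chunk = sock.recv(count - len(data))
        if not chunk:
            raise ConnectionError("stream closed")
        data += chunk
    return data


def read_message(sock):
    """Return (version, body) for the next GPB message on the stream."""
    header = read_exact(sock, 8)
    # Bytes 0-3: length, byte 4: version, bytes 5-7: reserved.
    length, version = struct.unpack("!IB3x", header)
    return version, read_exact(sock, length)
```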
The following GPB definition file (analytics.proto) provides details about the streamed data:
package analytics;
// Traffic statistics related info
message TrafficStatus {
optional uint32 status = 1;
optional uint32 poll_interval = 2;
}
// Queue statistics related info
message QueueStatus {
optional uint32 status = 1;
optional uint32 poll_interval = 2;
optional uint64 lt_high = 3;
optional uint64 lt_low = 4;
optional uint64 dt_high = 5;
optional uint64 dt_low = 6;
}
message LinkStatus {
optional uint64 speed = 1;
optional uint32 duplex = 2;
optional uint32 mtu = 3;
optional bool state = 4;
optional bool auto_negotiation = 5;
}
message InterfaceInfo {
optional uint32 snmp_index = 1;
optional uint32 index = 2;
optional uint32 slot = 3;
optional uint32 port = 4;
optional uint32 media_type = 5;
optional uint32 capability = 6;
optional uint32 porttype = 7;
}
message InterfaceStatus {
optional LinkStatus link = 1;
optional QueueStatus queue_status = 2;
optional TrafficStatus traffic_status = 3;
}
message QueueStats {
optional uint64 timestamp = 1;
optional uint64 queue_depth = 2;
optional uint64 latency = 3;
}
message TrafficStats {
optional uint64 timestamp = 1;
optional uint64 rxpkt = 2;
optional uint64 rxucpkt = 3;
optional uint64 rxmcpkt = 4;
optional uint64 rxbcpkt = 5;
optional uint64 rxpps = 6;
optional uint64 rxbyte = 7;
optional uint64 rxbps = 8;
optional uint64 rxcrcerr = 9;
optional uint64 rxdroppkt = 10;
optional uint64 txpkt = 11;
optional uint64 txucpkt = 12;
optional uint64 txmcpkt = 13;
optional uint64 txbcpkt = 14;
optional uint64 txpps = 15;
optional uint64 txbyte = 16;
optional uint64 txbps = 17;
optional uint64 txcrcerr = 18;
optional uint64 txdroppkt = 19;
}
message InterfaceStats {
optional TrafficStats traffic_stats = 1;
optional QueueStats queue_stats = 2;
}
//Interface message
message Interface {
required string name = 1;
optional bool deleted = 2;
optional InterfaceInfo information = 3;
optional InterfaceStats stats = 4;
optional InterfaceStatus status = 5;
}
message SystemInfo {
optional uint64 boot_time = 1;
optional string model_info = 2;
optional string serial_no = 3;
optional uint32 max_ports = 4;
optional string collector = 5;
repeated string interface_list = 6;
}
message SystemStatus {
optional QueueStatus queue_status = 1;
optional TrafficStatus traffic_status = 2;
}
//System message
message System {
required string name = 1;
optional bool deleted = 2;
optional SystemInfo information = 3;
optional SystemStatus status = 4;
}
message AnRecord {
optional uint64 timestamp = 1;
optional System system = 2;
repeated Interface interface = 3;
}
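The message body can then be decoded with bindings generated from this file. The following sketch assumes the file has been compiled with the standard protobuf compiler (for example, protoc --python_out=. analytics.proto), whose default output module name is analytics_pb2; the module name and the fields printed are illustrative, not part of the Junos OS feature.

```python
# Illustrative decoding of a GPB message body using bindings generated from
# analytics.proto (the module name analytics_pb2 is the protoc default).
import analytics_pb2


def decode_record(body):
    record = analytics_pb2.AnRecord()
    record.ParseFromString(body)

    if record.HasField("system") and record.system.HasField("information"):
        print("system:", record.system.name,
              "model:", record.system.information.model_info)

    for interface in record.interface:
        if not interface.HasField("stats"):
            continue
        if interface.stats.HasField("traffic_stats"):
            traffic = interface.stats.traffic_stats
            print(interface.name, "rxpps:", traffic.rxpps, "txpps:", traffic.txpps)
        if interface.stats.HasField("queue_stats"):
            queue = interface.stats.queue_stats
            print(interface.name, "queue-depth:", queue.queue_depth,
                  "latency:", queue.latency)
```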
Understand Enhanced Analytics Local File Output
The network analytics feature provides visibility into the performance and behavior of the data center infrastructure. You enable network analytics by configuring queue or traffic statistics monitoring, or both. In addition, you can configure a local file for storing the traffic and queue statistics records.
The traffic and queue monitoring statistics can be stored locally in a single file. The following example shows the output from the monitor start command.
root@qfx5100-33> monitor start an
root@qfx5100-33>
*** an ***
q,1393947567698432,qfx5100-33,xe-0/0/19,1098572,1373216
q,1393947568702418,qfx5100-33,xe-0/0/19,1094912,1368640
q,1393947569703415,qfx5100-33,xe-0/0/19,1103065,1378832
t,1393947569874528,qfx5100-33,xe-0/0/16,12603371884,12603371884,0,0,8426023,1613231610488,8628248712,0,3,5916761,5916761,0,0,0,757345408,0,0,0
t,1393947569874528,qfx5100-33,xe-0/0/18,12601953614,12601953614,0,0,8446737,1613050071660,8649421552,0,5,131761619,131761619,0,0,84468,16865487232,86495888,0,0
t,1393947569874528,qfx5100-33,xe-0/0/19,126009250,126009250,0,0,84469,16129184128,86496392,0,0,12584980342,12584980342,0,0,8446866,1610877487744,8649588432,12593703960,0
q,1393947575698402,qfx5100-33,xe-0/0/19,1102233,1377792
q,1393947576701398,qfx5100-33,xe-0/0/19,1107724,1384656
See Table 6 for queue statistics output, and Table 7 for traffic statistics output. The fields in the tables are listed in the order they appear in the output example.
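The local file mixes q (queue) and t (traffic) records in the comma-separated layout described in Table 6 and Table 7 below. The following Python sketch shows how such records could be split into named fields; the field lists restate the tables, and the file path is a placeholder you would replace with the actual location of the statistics file on your switch.

```python
# Illustrative parser for the enhanced local statistics file (q and t records).
# Field order follows Table 6 (queue) and Table 7 (traffic) below.
QUEUE_FIELDS = ("record-type", "time", "router-id", "port",
                "latency", "queue-depth")
TRAFFIC_FIELDS = ("record-type", "time", "router-id", "port",
                  "rxpkt", "rxucpkt", "rxmcpkt", "rxbcpkt", "rxpps", "rxbyte",
                  "rxbps", "rxdroppkt", "rxcrcerr",
                  "txpkt", "txucpkt", "txmcpkt", "txbcpkt", "txpps", "txbyte",
                  "txbps", "txdroppkt", "txcrcerr")


def parse_local_file(path):
    with open(path) as log:
        for line in log:
            values = line.strip().split(",")
            names = QUEUE_FIELDS if values[0] == "q" else TRAFFIC_FIELDS
            yield dict(zip(names, values))


# "an" is the statistics file name used in the example output above;
# replace the path with the file's actual location on the switch.
for record in parse_local_file("an"):
    print(record["record-type"], record["port"], record["time"])
```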
Table 6: Queue Statistics Output

| Field | Description | Example in Output |
|---|---|---|
| Record type | Type of statistics (queue or traffic monitoring). | q |
| Time (microseconds) | Unix epoch (or Unix time) in microseconds at which the statistics were captured. | 1393947567698432 |
| Router ID | ID of the network analytics host device. | qfx5100-33 |
| Port | Name of the physical port configured for network analytics. | xe-0/0/19 |
| Latency (nanoseconds) | Traffic queue latency in nanoseconds. | 1098572 |
| Queue depth (bytes) | Depth of the traffic queue in bytes. | 1373216 |
Table 7: Traffic Statistics Output

| Field | Description | Example in Output |
|---|---|---|
| Record type | Type of statistics (queue or traffic monitoring). | t |
| Time (microseconds) | Unix epoch (or Unix time) in microseconds at which the statistics were captured. | 1393947569874528 |
| Router ID | ID of the network analytics host device. | qfx5100-33 |
| Port | Name of the physical port configured for network analytics. | xe-0/0/16 |
| rxpkt | Total packets received. | 12603371884 |
| rxucpkt | Total unicast packets received. | 12603371884 |
| rxmcpkt | Total multicast packets received. | 0 |
| rxbcpkt | Total broadcast packets received. | 0 |
| rxpps | Total packets received per second. | 8426023 |
| rxbyte | Total octets received. | 1613231610488 |
| rxbps | Total bytes received per second. | 8628248712 |
| rxdroppkt | Total incoming packets dropped. | 0 |
| rxcrcerr | CRC/Align errors received. | 3 |
| txpkt | Total packets transmitted. | 5916761 |
| txucpkt | Total unicast packets transmitted. | 5916761 |
| txmcpkt | Total multicast packets transmitted. | 0 |
| txbcpkt | Total broadcast packets transmitted. | 0 |
| txpps | Total packets transmitted per second. | 0 |
| txbyte | Total octets transmitted. | 757345408 |
| txbps | Total bytes transmitted per second. | 0 |
| txdroppkt | Total transmitted packets dropped. | 0 |
| txcrcerr | CRC/Align errors transmitted. | 0 |
Understand Network Analytics Configuration and Status
The network analytics feature provides visibility into the performance and behavior of the data center infrastructure. You can enable network analytics by configuring traffic and queue statistics monitoring.
If you have enabled traffic or queue monitoring, you can issue the show analytics configuration and show analytics status commands to view the global interface configuration and status as well as those of specific interfaces. The output that is displayed depends on your configuration at the global interface and specific interface levels. For example:

- A global interface configuration (for all interfaces) to disable monitoring supersedes the configuration to enable it on an interface.
- The interface configuration to enable or disable monitoring supersedes the global interface configuration, unless monitoring has been disabled globally for all interfaces.
- If there is no configuration, whether for all interfaces or a specific interface, monitoring is disabled by default (see Table 8).
Table 8 describes the correlation between the user configuration and the settings that are displayed.
Table 8: User Configuration and Displayed Settings

| User Configuration | Global or System Configuration | Global or System Status | Specific Interface Configuration | Specific Interface Status |
|---|---|---|---|---|
| No global or specific interface configuration. This is the default setting. | Auto | Auto | Auto | Disabled |
| No global interface configuration but the specific interface monitoring is disabled. | Auto | Auto | Disabled | Disabled |
| No global interface configuration but the specific interface monitoring is enabled. | Auto | Auto | Enabled | Enabled |
| Monitoring is disabled globally and there is no interface configuration. | Disabled | Disabled | Auto | Disabled |
| Monitoring is disabled at both the global and specific interface levels. | Disabled | Disabled | Disabled | Disabled |
| Monitoring is disabled at the global interface level but is enabled at the specific interface level. The global interface Disabled setting supersedes the Enabled setting for a specific interface. | Disabled | Disabled | Enabled | Disabled |
| Monitoring is enabled for all interfaces but there is no configuration for the specific interface. | Enabled | Enabled | Auto | Enabled |
| Monitoring is enabled at both the global and specific interface levels. | Enabled | Enabled | Enabled | Enabled |
| Monitoring is enabled for all interfaces but is disabled for the specific interface. | Enabled | Enabled | Disabled | Disabled |
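The rows in Table 8 reduce to a simple precedence rule: an explicit global Disabled setting always wins; otherwise an explicit interface setting wins; otherwise the interface inherits the global setting; and with no configuration at all, monitoring is disabled. The following Python sketch models that rule for reference only; the function and value names are illustrative.

```python
# Illustrative model of the effective interface monitoring status in Table 8.
# A setting is "enabled", "disabled", or None (no configuration, shown as Auto).
def effective_interface_status(global_setting, interface_setting):
    if global_setting == "disabled":
        return "disabled"           # global Disabled supersedes the interface setting
    if interface_setting is not None:
        return interface_setting    # otherwise an explicit interface setting wins
    if global_setting == "enabled":
        return "enabled"            # interface inherits the global setting
    return "disabled"               # no configuration at all: disabled by default


assert effective_interface_status(None, None) == "disabled"
assert effective_interface_status("disabled", "enabled") == "disabled"
assert effective_interface_status("enabled", None) == "enabled"
```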
Configure Queue and Traffic Monitoring
Network analytics queue and traffic monitoring provides visibility into the performance and behavior of the data center infrastructure. This feature collects data from the switch, analyzes the data using sophisticated algorithms, and captures the results in reports. You can use the reports to help troubleshoot problems, make decisions, and adjust resources as needed.
You enable queue and traffic monitoring by first defining a resource profile template, and then applying the profile to the system (for a global configuration) or to individual interfaces.
You can configure queue and traffic monitoring on physical network interfaces only; logical interfaces and Virtual Chassis port (VCP) interfaces are not supported.
The procedure to configure queue and traffic monitoring on a switch requires Junos OS Release 13.2X51-D15 or later to be installed on your device.
To configure queue monitoring on a switch:
Configure the queue monitoring polling interval (in milliseconds) globally (for the system):
[edit] set services analytics resource system polling-interval queue-monitoring interval
Configure a resource profile for the system, and enable queue monitoring:
[edit] set services analytics resource-profiles profile-name queue-monitoring
Configure high and low values of the depth-threshold (in bytes) for queue monitoring in the system profile:
[edit] set services analytics resource-profiles profile-name depth-threshold high number low number
For both high and low values, the range is from 1 to 1,250,000,000 bytes, and the default value is 0 bytes.
Note: You can configure either the depth-threshold or the latency-threshold for the system, but not both.
Apply the resource profile template to the system for a global configuration:
[edit] set services analytics resource system resource-profile profile-name
Configure an interface-specific resource profile and enable queue monitoring for the interface:
[edit] set services analytics resource-profiles profile-name queue-monitoring
Configure the latency-threshold (high and low values) for queue monitoring in the interface-specific profile:
[edit] set services analytics resource-profiles profile-name latency-threshold high number low number
For both high and low values, the range is from 1 to 100,000,000 nanoseconds, and the default value is 1,000,000 nanoseconds.
Note: You can configure either the depth-threshold or the latency-threshold for interfaces, but not both.
Apply the resource profile template for interfaces to one or more interfaces:
[edit] set services analytics resource interfaces interface-name resource-profile profile-name
Note: If a conflict arises between the system and interface configurations, the interface-specific configuration supersedes the global (system) configuration.
To configure traffic monitoring on a switch:
Configure the traffic monitoring polling interval (in seconds) for the system:
[edit] set services analytics resource system polling-interval traffic-monitoring interval
Configure a resource profile for the system, and enable traffic monitoring in the profile:
[edit] set services analytics resource-profiles profile-name traffic-monitoring
Apply the resource profile to the system for a global configuration:
[edit] set services analytics resource system resource-profile profile-name
Configure a resource profile for interfaces, and enable traffic monitoring in the profile:
[edit] set services analytics resource-profiles profile-name traffic-monitoring
Note: If a conflict arises between the system and interface configurations, the interface-specific configuration supersedes the global (system) configuration.
Apply the resource profile template to one or more interfaces:
[edit] set services analytics resource interfaces interface-name resource-profile profile-name
Configure a Local File for Network Analytics Data
The network analytics feature provides visibility into the performance and behavior of the data center infrastructure. This feature collects data from the switch, analyzes the data using sophisticated algorithms, and captures the results in reports. Network administrators can use the reports to help troubleshoot problems, make decisions, and adjust resources as needed.
To save the queue and traffic statistics data in a local file, you must configure a filename to store it.
The procedure to configure a local file for storing queue and traffic monitoring statistics requires Junos OS Release 13.2X51-D15 or later to be installed on your device.
To configure a local file for storing queue and traffic monitoring statistics, specify a filename, the maximum number of files, and the maximum file size by using the file statement at the [edit services analytics collector local] hierarchy level. See Configure a Local Statistics File in Example: Configure Queue and Traffic Monitoring for the detailed steps.
Configure a Remote Collector for Streaming Analytics Data
The network analytics feature provides visibility into the performance and behavior of the data center infrastructure. This feature collects data from the switch, analyzes the data using sophisticated algorithms, and captures the results in reports. Network administrators can use the reports to help troubleshoot problems, make decisions, and adjust resources as needed.
You can configure an export profile to define the stream format and type of data, and one or more remote servers (collectors) to receive streaming network analytics data.
The procedure to configure a collector for receiving streamed analytics data requires Junos OS Release 13.2X51-D15 or later to be installed on your device.
To configure a collector for receiving streamed analytics data, define an export profile at the [edit services analytics export-profiles] hierarchy level and apply it to one or more collector addresses and ports at the [edit services analytics collector] hierarchy level. See Configure an Export Profile and Collector for Streaming Data in Example: Configure Queue and Traffic Monitoring for the detailed steps.
Example: Configure Queue and Traffic Monitoring
This example shows how to configure the enhanced network analytics feature, including queue and traffic monitoring.
Requirements
This example uses the following hardware and software components:
A QFX5100 standalone switch
An external streaming server to collect data
Junos OS Release 13.2X51-D15 software
TCP server software (for remote streaming servers)
Before you configure network analytics, be sure you have:
Junos OS Release 13.2X51-D15 or later software installed and running on the QFX5100 switch.
(Optional for streaming servers for the JSON, CSV, and TSV formats) TCP or UDP server software set up for processing records separated by a newline character (\n) on the remote streaming server.
(Optional for streaming servers for the GPB format) A TCP or UDP streaming server built by using the analytics.proto file, set up on the remote streaming server.
All other network devices running.
Overview
The network analytics feature provides visibility into the performance and behavior of the data center infrastructure. This feature collects data from the switch, analyzes the data using sophisticated algorithms, and captures the results in reports. Network administrators can use the reports to help troubleshoot problems, make decisions, and adjust resources as needed.
You enable network analytics by first defining a resource profile template, and then applying the profile to the system (for a global configuration) or to individual interfaces.
Disabling queue or traffic monitoring supersedes the configuration that enables this feature. You disable monitoring by applying a resource profile that includes the no-queue-monitoring or no-traffic-monitoring configuration statement at the [edit services analytics resource-profiles] hierarchy level.
Topology
In this example, the QFX5100 switch is connected to an external server used for streaming statistics data.
Configuration
To configure the network analytics features, perform these tasks:
- CLI Quick Configuration
- Configure the Polling Interval for Queue and Traffic Monitoring
- Configure a Local Statistics File
- Configure and Apply a Resource Profile for the System
- Configure and Apply a Resource Profile for an Interface
- Configure an Export Profile and Collector for Streaming Data
CLI Quick Configuration
To quickly configure this example, copy the following commands, paste them in a text file, remove any line breaks, change any details necessary to match your network configuration, and then copy and paste the commands into the CLI at the [edit] hierarchy level.
[edit]
set services analytics resource system polling-interval queue-monitoring 1000
set services analytics resource system polling-interval traffic-monitoring 5
set services analytics collector local file an.stats
set services analytics collector local file an files 3
set services analytics collector local file an size 10m
set services analytics resource-profiles sys-rp queue-monitoring
set services analytics resource-profiles sys-rp traffic-monitoring
set services analytics resource-profiles sys-rp depth-threshold high 999999 low 99
set services analytics resource system resource-profile sys-rp
set services analytics resource-profiles if-rp queue-monitoring
set services analytics resource-profiles if-rp traffic-monitoring
set services analytics resource-profiles if-rp latency-threshold high 2300 low 20
set services analytics resource interfaces xe-0/0/16 resource-profile if-rp
set services analytics resource interfaces xe-0/0/18 resource-profile if-rp
set services analytics resource interfaces xe-0/0/19 resource-profile if-rp
set services analytics export-profiles ep stream-format gpb
set services analytics export-profiles ep interface information
set services analytics export-profiles ep interface statistics queue
set services analytics export-profiles ep interface statistics traffic
set services analytics export-profiles ep interface status link
set services analytics export-profiles ep system information
set services analytics export-profiles ep system status queue
set services analytics export-profiles ep system status traffic
set services analytics collector address 10.94.198.11 port 50001 transport tcp export-profile ep
set services analytics collector address 10.94.184.25 port 50013 transport udp export-profile ep
Configure the Polling Interval for Queue and Traffic Monitoring
Step-by-Step Procedure
To configure the polling intervals for queue and traffic monitoring globally:
Configure the queue monitoring polling interval (in milliseconds) for the system:
[edit] set services analytics resource system polling-interval queue-monitoring 1000
Configure the traffic monitoring polling interval (in seconds) for the system:
[edit] set services analytics resource system polling-interval traffic-monitoring 5
Configure a Local Statistics File
Step-by-Step Procedure
To configure a file for local statistics collection:
Configure the filename:
[edit] set services analytics collector local file an.stats
Configure the number of files:
[edit] set services analytics collector local file an files 3
Configure the file size:
[edit] set services analytics collector local file an size 10m
Configure and Apply a Resource Profile for the System
Step-by-Step Procedure
To define a resource profile template for queue and traffic monitoring resources:
Configure a resource profile and enable queue monitoring:
[edit] set services analytics resource-profiles sys-rp queue-monitoring
Enable traffic monitoring in the profile:
[edit] set services analytics resource-profiles sys-rp traffic-monitoring
Configure the depth-threshold (high and low values) for queue monitoring in the profile:
[edit] set services analytics resource-profiles sys-rp depth-threshold high 999999 low 99
Apply the resource profile template to the system resource type for a global configuration:
[edit] set services analytics resource system resource-profile sys-rp
Configure and Apply a Resource Profile for an Interface
Step-by-Step Procedure
You can configure queue and traffic monitoring for one or more specific interfaces. The interface-specific configuration supersedes the global (system) configuration. To define a resource profile template for queue and traffic monitoring resources for an interface:
Configure a resource profile and enable queue monitoring:
[edit] set services analytics resource-profiles if-rp queue-monitoring
Enable traffic monitoring in the profile:
[edit] set services analytics resource-profiles if-rp traffic-monitoring
Configure the latency-threshold (high and low values) for queue monitoring in the profile:
[edit] set services analytics resource-profiles if-rp latency-threshold high 2300 low 20
Apply the resource profile template to the interfaces resource type for specific interfaces:
[edit]
set services analytics resource interfaces xe-0/0/16 resource-profile if-rp
set services analytics resource interfaces xe-0/0/18 resource-profile if-rp
set services analytics resource interfaces xe-0/0/19 resource-profile if-rp
Configure an Export Profile and Collector for Streaming Data
Step-by-Step Procedure
To configure a collector (streaming server) for receiving monitoring data:
Create an export profile and specify the stream format:
[edit] set services analytics export-profiles ep stream-format gpb
Configure the export profile to include interface information:
[edit] set services analytics export-profiles ep interface information
Configure the export profile to include interface queue statistics:
[edit] set services analytics export-profiles ep interface statistics queue
Configure the export profile to include interface traffic statistics:
[edit] set services analytics export-profiles ep interface statistics traffic
Configure the export profile to include interface status link information:
[edit] set services analytics export-profiles ep interface status link
Configure the export profile to include system information:
[edit] set services analytics export-profiles ep system information
Configure the export profile to include system queue status:
[edit] set services analytics export-profiles ep system status queue
Configure the export profile to include system traffic status:
[edit] set services analytics export-profiles ep system status traffic
Configure the transport protocol for the collector addresses and apply an export profile:
[edit]
set services analytics collector address 10.94.198.11 port 50001 transport tcp export-profile ep
set services analytics collector address 10.94.184.25 port 50013 transport udp export-profile ep
Note: If you configure the tcp or udp option for the JSON, CSV, and TSV formats, you must also set up the TCP or UDP server software on the remote collector to process records that are separated by the newline character (\n). If you configure the tcp or udp option for the GPB format, you must also set up a TCP or UDP streaming server built by using the analytics.proto file.
Results
Display the results of the configuration:
[edit services analytics]
user@switch# run show configuration
services {
analytics {
export-profiles {
ep {
stream-format gpb;
interface {
information;
statistics {
traffic;
queue;
}
status {
link;
}
}
system {
information;
status {
traffic;
queue;
}
}
}
}
resource-profiles {
sys-rp {
queue-monitoring;
traffic-monitoring;
depth-threshold high 99999 low 99;
}
if-rp {
queue-monitoring;
traffic-monitoring;
latency-threshold high 2300 low 20;
}
}
resource {
system {
resource-profile sys-rp;
polling-interval {
traffic-monitoring 5;
queue-monitoring 1000;
}
}
interfaces {
xe-0/0/16 {
resource-profile if-rp;
}
xe-0/0/18 {
resource-profile if-rp;
}
xe-0/0/19 {
resource-profile if-rp;
}
}
}
collector {
local {
file an size 10m files 3;
}
address 10.94.184.25 {
port 50013 {
transport udp {
export-profile ep;
}
}
}
address 10.94.198.11 {
port 50001 {
transport tcp {
export-profile ep;
}
}
}
}
}
}
Verification
Confirm that the configuration is correct and works as expected by performing these tasks:
- Verify the Network Analytics Configuration
- Verify the Network Analytics Status
- Verify the Collector Configuration
Verify the Network Analytics Configuration
Purpose
Verify the configuration for network analytics.
Action
From operational mode, enter the show analytics configuration command to display the traffic and queue monitoring configuration.
user@host> show analytics configuration
Traffic monitoring status is enabled
Traffic monitoring polling interval : 5 seconds
Queue monitoring status is enabled
Queue monitoring polling interval : 1000 milliseconds
Queue depth high threshold : 99999 bytes
Queue depth low threshold : 99 bytes
Interface Traffic Queue Queue depth Latency
Statistics Statistics threshold threshold
High Low High Low
(bytes) (nanoseconds)
xe-0/0/16 enabled enabled n/a n/a 2300 20
xe-0/0/18 enabled enabled n/a n/a 2300 20
xe-0/0/19 enabled enabled n/a n/a 2300 20

Meaning
The output displays the traffic and queue monitoring configuration information on the switch.
Verify the Network Analytics Status
Purpose
Verify the network analytics operational status of the switch.
Action
From operational mode, enter the show analytics status global command to display the global traffic and queue monitoring status.

user@host> show analytics status global
Traffic monitoring status is enabled
Traffic monitoring polling interval : 5 seconds
Queue monitoring status is enabled
Queue monitoring polling interval : 1000 milliseconds
Queue depth high threshold : 99999 bytes
Queue depth low threshold : 99 bytes
From operational mode, enter the show analytics status command to display both the interface and global queue monitoring status.
user@host> show analytics status
Traffic monitoring status is enabled
Traffic monitoring polling interval : 5 seconds
Queue monitoring status is enabled
Queue monitoring polling interval : 1000 milliseconds
Queue depth high threshold : 99999 bytes
Queue depth low threshold : 99 bytes
Interface Traffic Queue Queue depth Latency
Statistics Statistics threshold threshold
High Low High Low
(bytes) (nanoseconds)
xe-0/0/16 enabled enabled n/a n/a 2300 20
xe-0/0/18 enabled enabled n/a n/a 2300 20
xe-0/0/19 enabled enabled n/a n/a 2300 20
Meaning
The output displays the global and interface status of traffic and queue monitoring on the switch.
Verify the Collector Configuration
Purpose
Verify the configuration for the collector for streamed data is working.
Action
From operational mode, enter the show analytics collector command to display the streaming server configuration.

user@host> show analytics collector
Address         Port    Transport  Stream format  State        Sent
10.94.184.25    50013   udp        gpb            n/a          484
10.94.198.11    50001   tcp        gpb            In progress  0
Meaning
The output displays the collector configuration.
The connection state of a port configured with the udp transport protocol is always displayed as n/a.