Probes

IBA Probes Overview

Probes are the basic unit of abstraction in Intent-Based Analytics. Generally, a probe consumes some set of data from the network, performs successive aggregations and calculations on it, and optionally specifies conditions on those aggregations and calculations under which anomalies are raised.

Probes are Directed Acyclic Graphs (DAGs) whose nodes are processors and stages. Stages are data, associated with context, that can be inspected by the operator. Processors are sets of operations that produce and reduce output data from input data. The input to a processor is one or more stages, and its output is also one or more stages. The directionality of the edges in a probe DAG represents this input-to-output flow.

Importantly, the initial processors in a probe are special and do not have any input stage. They are notionally generators of data. We shall refer to these as source processors.
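As a purely illustrative sketch (the field names and processor types below are simplified assumptions, not the exact probe schema), a probe can be viewed as a list of processors wired together by named stages:

# Illustrative sketch only: a two-processor probe expressed as a Python dict.
# Field names and processor types are hypothetical, not the exact probe schema.
example_probe = {
    'label': 'example_interface_traffic_probe',
    'processors': [
        {
            # A source processor: it has no input stage and generates data
            # by ingesting raw telemetry (here, a hypothetical counter collector).
            'name': 'interface_tx_bytes',
            'type': 'interface_counter_collector',    # hypothetical type
            'outputs': {'out': 'tx_bytes'},
        },
        {
            # A downstream processor: consumes the stage produced above and
            # produces a new stage with aggregated values.
            'name': 'tx_bytes_avg',
            'type': 'periodic_average',                # hypothetical type
            'inputs': {'in': 'tx_bytes'},
            'outputs': {'out': 'tx_bytes_avg'},
            'properties': {'period': 60},
        },
    ],
}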

IBA works by ingesting raw telemetry from collectors into probes to extract knowledge (e.g. anomalies, aggregations, etc.). A given collector publishes telemetry as a collection of metrics, where each metric has an identity (a set of key-value pairs) and a value. Normally, IBA probes, often with the use of graph queries, must fully specify the identity of a metric to ingest its value into the probe. With ingestion filters, probes can instead specify a metric's identity only partially, enabling ingestion of metrics whose identities are not known in advance.
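For illustration, a single metric published by a collector can be thought of as an identity plus a value; the key names below are hypothetical:

# Illustrative shape of one metric: an identity (key-value pairs) and a value.
metric = {
    'identity': {'interface_name': 'swp1', 'counter_type': 'tx_bytes'},  # hypothetical keys
    'value': 1234567,
}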

Some probes are created automatically. These probes will not be deleted automatically. This keeps things simple operationally and implementation-wise.

Processors

The input processors of a probe handle the required configuration to ingest raw telemetry into the probe, kickstarting the data processing pipeline. For these processors, the number of stage output items (one or many) is equal to the number of results in the specified graph query (or queries). If multiple graph queries are specified, e.g. graph_query: [A, B], and query A matches 5 nodes while query B matches 10 nodes, the results of query A are accessible using query_result indices 0 to 4, and the results of query B using indices 5 to 14.
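The following minimal Python sketch, using placeholder match values, illustrates how results of multiple graph queries map onto a single query_result index space:

# Illustrative index mapping for graph_query: [A, B] where A matches 5 nodes
# and B matches 10 nodes; indices 0-4 refer to A's results, 5-14 to B's.
results_a = ['A-match-%d' % i for i in range(5)]     # placeholder results of query A
results_b = ['B-match-%d' % i for i in range(10)]    # placeholder results of query B

for query_result, match in enumerate(results_a + results_b):
    print(query_result, match)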

If a processor’s input type and/or output type is not specified, then the processor takes a single input called in, and produces a single output called out.

Some processor fields are called expressions. In some cases, they are graph queries and are noted as such. In other cases, they are Python expressions that yield a value. For example, in the Accumulate processor, duration may be specified as an integer number of seconds, e.g. 900, or as an expression, e.g. 60 * 15. Expressions become more useful when parametrized, and there are multiple ways to do so.

Expressions support string values. Processor configuration parameters that are strings and support expressions require special quoting when specifying a static value. For example, state: "up" is not valid because it refers to the variable up, not a static string; it should instead be written as state: '"up"'

An expression is always associated with a graph query and is run for every resulting match of that query. The execution context of the expression is such that every variable specified in the query resolves to a named node in the associated match result. See the example of Service Data Collector for more information.
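As an illustrative fragment (the query and property names are assumptions, not taken from a specific processor), a processor configuration might pair a graph query with expressions that reference the nodes named in that query:

graph_query: node('system', name='system', role='leaf')
system_id: system.system_id                 # expression evaluated once per query match
key: system.hostname + '_tx_bytes'          # another expression using the same named node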

Ingestion Filters

With ingestion filters, a single query result can ingest multiple metrics into a probe. Table data types are used to store these multiple metrics as part of a single stage output item. The table data types table_ns, table_dss, and table_ts correspond to the existing types ns, dss, and ts, respectively.

IBA Collection Filter

Collection filters determine the metrics that are collected from target devices.

A collection filter for a given collector on a given device is simply the aggregation of the ingestion filters present in different probes. A collection filter can also be specified as part of enabling a service outside the context of IBA or probes; existing precedence rules for service enablement apply here, and only filters at a given precedence level are aggregated. When multiple probes specify ingestion filters targeting the same service on the same device, the metrics collected are the union of those filters; in other words, a metric is published when it matches any of the filters. Because collection is this union, the data is also filtered by the controller component prior to being ingested into the individual IBA probes, so that each probe sees only the metrics matching its own ingestion filter.

The collection filter is evaluated by telemetry collectors, often to control which subset of the available metrics is fetched from the underlying device operating system in the first place; for example, fetching only a subset of routes instead of all routes, which can be a huge number. In any case, only the metrics matching the collection filter are published as raw telemetry.

As part of enabling a service on a device, you can now specify collection filters for services. This filter becomes an additional input provided to collectors as part of “self.service_config.collection_filters”.
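A hypothetical sketch of how a collector might consult these filters is shown below; only the self.service_config.collection_filters attribute comes from the text above, while the fetch_metrics, matches_any, and publish helpers are illustrative and not part of any actual collector API:

def collect(self):
    # self.service_config.collection_filters is provided via service enablement;
    # fetch_metrics(), matches_any() and publish() are hypothetical helpers.
    filters = self.service_config.collection_filters
    for identity, value in self.fetch_metrics():
        if not filters or matches_any(filters, identity):
            self.publish(identity, value)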

IBA Filter Format

The following are the design/usability goals for filters (ingestion and collection):

  1. Ease of authoring - given that probe authors are the ones specifying filters.
    • The most common cases are: match any, match against a given list of possible values, equality match, and range checks for keys with numeric values.
  2. Efficient evaluation - given that the filters are evaluated in the hot paths of collection and ingestion.
  3. Aggregatable - multiple filters are aggregated, so this aggregation logic need not become the responsibility of individual collectors.
  4. Programming language neutral - components operating on filters can be in Python, C++, or some other language in the future.
  5. Programmable - amenable to future programmability around the filters, by the controller itself and/or collectors, to enhance things like usability and performance.

Considering the above goals, the following is a suggested and illustrative schema for filters. Refer to the ingestion filter sections for specific examples to understand this better.

FILTER_SCHEMA = s.Dict(s.Object(
  'type': s.Enum(['any', 'equals', 'list', 'pattern', 'range', 'prefix']),
  'value': s.OneOf({
    'equals': s.OneOf([s.String(), s.Integer()]),
    'list': s.List(s.String(), validate=s.Length(min=1)),
    'pattern': s.List(s.String(), validate=s.Length(min=1)),
    'range': s.List(s.AnomalyRange(), validate=s.Length(min=1)),
    'prefix': s.Object({
      'prefixsubnet': s.Ipv6orIpv4NetworkAddress(),
      'ge_mask': s.Optional(s.Integer()),
      'le_mask': s.Optional(s.Integer()),
      'eq_mask': s.Optional(s.Integer())
    })
  })
), key_type=s.String(description=
  'Name of the key in metric identity. Missing metric identity keys are '
  'assumed to match any value'))

A single filter specification is interpreted as an AND of all of its specified keys (that is, per-key constraints). Multiple filter specifications coming from multiple probes are combined as an OR at the filter level.
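To make these semantics concrete, here is a minimal, illustrative Python sketch of filter evaluation; the filter instances and key names are hypothetical, only a subset of the schema's filter types is covered, and ranges are assumed to be represented as min/max dictionaries:

from fnmatch import fnmatch

# Two hypothetical filter specifications, e.g. contributed by two different probes.
filters = [
    {'interface': {'type': 'pattern', 'value': ['swp*', 'eth*']},
     'vrf':       {'type': 'equals',  'value': 'default'}},
    {'interface': {'type': 'list',    'value': ['lo0']}},
]

def key_matches(constraint, value):
    # Evaluate one per-key constraint against one metric identity value.
    ftype, fval = constraint['type'], constraint.get('value')
    if ftype == 'any':
        return True
    if ftype == 'equals':
        return value == fval
    if ftype == 'list':
        return value in fval
    if ftype == 'pattern':
        return any(fnmatch(str(value), p) for p in fval)
    if ftype == 'range':
        # Assumed range representation: list of {'min': ..., 'max': ...} dicts.
        return any(r.get('min', float('-inf')) <= value <= r.get('max', float('inf'))
                   for r in fval)
    return False

def filter_matches(flt, identity):
    # AND across all per-key constraints; keys missing from the metric identity
    # are assumed to match any value (per the schema description).
    return all(key not in identity or key_matches(c, identity[key])
               for key, c in flt.items())

def identity_matches(filters, identity):
    # OR across filter specifications coming from multiple probes.
    return any(filter_matches(f, identity) for f in filters)

# Matches the first filter: 'swp1' matches the 'swp*' pattern and vrf is 'default'.
print(identity_matches(filters, {'interface': 'swp1', 'vrf': 'default'}))    # True
print(identity_matches(filters, {'interface': 'xe-0/0/0', 'vrf': 'mgmt'}))   # False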

Note

The schema presented here is only for communicating the requirements; engineering is free to choose any approach that accomplishes the stated use cases.

Collector Processors

additional_properties specified in a collector processor's configuration can be accessed using the special context namespace. For example, if a collector defines the property system_role, it could be used this way:

duration: 60 * (15 if context.system_role == "leaf" else 10)

Note

The items context is available as long as the set of items is unchanged from the original set derived from the collector processor configuration. After data goes through a processor that changes this set, e.g. any grouping processor, the context is no longer available.

From the blueprint, navigate to Analytics > Probes.


Importing Probe

  1. From the blueprint, navigate to Analytics > Probes, then click Create Probe and select Import Probes from the drop-down list.
  2. Either click Choose Files and navigate to the file(s) on your computer, or drag and drop the file(s) from your computer into the dialog window.
  3. Click Import to import the probe and return to the list view.

Exporting Probe

  1. From the blueprint, navigate to Analytics > Probes, then click the name of the probe to export.
  2. Click the Export button (top-right) to see a preview of the file that will be exported.
  3. To copy the contents, click Copy, then paste them into a file or another tool.
  4. To download the json file to your local computer, click Save as File.
  5. When you’ve copied and/or downloaded the file, click the X to close the dialog.

Instantiating Predefined Probe

The Two stage L3 Clos reference design comes with a set of predefined probes that can be instantiated via the web interface or via the facade API at /predefined_probes (Platform > Developers > Two stage L3 Clos). For the exact input and output parameters necessary for these probes, please refer to the API documentation.

  1. From the blueprint, navigate to Analytics > Probes, then click Create Probe and select Instantiate Predefined Probe from the drop-down list.
  2. Select a predefined probe from the drop-down list. For more information about some of the predefined probes, see the links below.
  3. Configure the probe to suit your anomaly detection requirements.
  4. Click Create to instantiate the probe and return to the list view.

Creating Probe

  1. From the blueprint, navigate to Analytics > Probes, then click Create Probe and select New Probe.
  2. Enter a name and (optional) description.
  3. To be able to filter probes by categories you define, you can enter one or more tags.
  4. Probes are enabled by default. This means that data is collected and processed (potentially creating anomalies) as soon as the probe is created. To disable the probe, toggle off Enabled. When you are ready to start collecting and processing data, you can edit the probe to enable it.
  5. Click Add Processor, select a processor type, then click Add to add the processor to the probe. For more information about individual processors, see the links below.
  6. Customize inputs and properties as appropriate, or leave defaults as is.
  7. Repeat the previous two steps until you’ve added all required processors for the new probe.
  8. Click Create to create the probe and return to the list view.

Editing Probe

When you edit a probe that is referenced in widgets, you'll see the message 'Some widget(s) are currently using this probe. Editing this probe will affect those widget(s) and related dashboard(s)'.

  1. From the list view (Analytics > Probes) or the details view, click the Edit button for the probe to edit.
  2. Make your changes.
  3. Click Update to stage the changes and return to the list view.

Deleting Probe

A probe that is used by a widget cannot be deleted.

  1. From the list view (Analytics > Probes) or the details view, click the Delete button for the probe to delete.
  2. Click Delete Probe to stage the deletion and return to the list view.