Examples: Monitors

This section assumes that Test Agents (as many as are required by the monitors) have been created according to the section Creating and Deploying a New Test Agent.

Overview of Monitor Orchestration

Before you can create and start a monitor through the REST API, you must have a template on which to base the monitor defined in Control Center, as explained in the chapter Test and Monitor Templates. All parameters specified in that template as Template input then need to be assigned values when you create a monitor from it through the REST API.

Creating a Monitor

Suppose that two templates have been set up in Control Center: one for UDP monitoring between two Test Agent interfaces, and another where a Test Agent acts as TWAMP initiator towards a TWAMP reflector.

Below is Python code for listing the monitor templates in an account through the REST API:
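A minimal sketch using the `requests` library is shown below. The base URL, account name, API token, and endpoint path are all placeholders to be adapted to your deployment; consult the REST API reference in your Control Center for the exact paths.

```python
import requests

# All names below are placeholders: adjust the base URL, account name,
# and API token to your own Control Center deployment. The endpoint
# path is illustrative.
BASE_URL = "https://<control-center-host>/rest"
ACCOUNT = "<account-name>"
HEADERS = {"Authorization": "Token <api-token>", "Accept": "application/json"}

# List all monitor templates defined in the account
url = f"{BASE_URL}/accounts/{ACCOUNT}/monitor_templates/"

def list_monitor_templates():
    """Fetch all monitor templates and return the decoded JSON body."""
    response = requests.get(url, headers=HEADERS)
    response.raise_for_status()
    return response.json()

# Uncomment to run against a live deployment:
# print(list_monitor_templates())
```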

The output will look something like this (below, two monitor templates have been defined):

If you want to inspect just a single template, you can do so as follows, indicating the template's ID:
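A sketch of such a request follows; all host names, tokens, and the endpoint path are placeholder assumptions, and the template ID is an example value.

```python
import requests

# Placeholders: adapt the base URL, account name, and token to your
# deployment. The endpoint path is illustrative.
BASE_URL = "https://<control-center-host>/rest"
ACCOUNT = "<account-name>"
HEADERS = {"Authorization": "Token <api-token>", "Accept": "application/json"}

template_id = 1  # example ID, as noted from the template listing

url = f"{BASE_URL}/accounts/{ACCOUNT}/monitor_templates/{template_id}/"

def get_monitor_template():
    """Fetch a single monitor template by ID."""
    response = requests.get(url, headers=HEADERS)
    response.raise_for_status()
    return response.json()

# Uncomment to run against a live deployment:
# print(get_monitor_template())
```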

Continuing the previous example, if you run this code it will produce the output below:

Now suppose you want to create a monitor based on the TWAMP template. This is done using the POST operation for monitors. By default, the monitor will also start immediately as a result of this operation (started defaults to true). Alternatively, you can set started to false and use a separate operation to start the monitor: see the section Starting and Stopping a Monitor.

You need to provide values for the parameters under inputs, which are left to be defined at runtime. The parameter names are those defined as Variable name in Control Center. Here, they are simply lowercase versions of the Control Center display names ("senders" vs. "Senders", etc.).

Below is code supplying the required parameter settings for the monitor. For a monitor template with a different set of inputs, the details of this procedure will of course differ.
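The sketch below assembles such a POST request for a TWAMP monitor. The endpoint path, the input names ("senders", "reflector"), and the exact JSON shape of each input value are assumptions for illustration; the input names must match the Variable names defined in your template, and the value structure for each input type is documented in the API reference.

```python
import requests

# Placeholders: adapt to your deployment.
BASE_URL = "https://<control-center-host>/rest"
ACCOUNT = "<account-name>"
HEADERS = {"Authorization": "Token <api-token>", "Accept": "application/json"}

url = f"{BASE_URL}/accounts/{ACCOUNT}/monitors/"

# The input names and value structures below are illustrative examples;
# they must match what your TWAMP template actually defines.
payload = {
    "name": "TWAMP monitor",
    "template_id": 2,   # example ID of the TWAMP monitor template
    "started": True,    # start the monitor immediately (the default)
    "inputs": {
        "senders": {
            # input_type = interface_list: a list of Test Agent interfaces
            "value": [
                {"test_agent_id": 1, "interface": "eth0", "ip_version": 4},
                {"test_agent_id": 2, "interface": "eth0", "ip_version": 4},
            ]
        },
        "reflector": {"value": "192.0.2.10"},  # example reflector address
    },
}

def create_monitor():
    """POST the monitor definition and return the created monitor."""
    response = requests.post(url, headers=HEADERS, json=payload)
    response.raise_for_status()
    return response.json()

# Uncomment to run against a live deployment:
# print(create_monitor())
```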

In this example, no alarm is associated with the monitor. For examples involving alarms, go to the section Creating a Monitor with an Alarm.

A few comments on the senders input value are in order here to explain how this input is structured.

This input value has input_type = interface_list, so as its value you need to provide a list of Test Agent interfaces. In the example above, a list of two interfaces is passed. For each interface, you need to specify the Test Agent ID, the Test Agent interface, and the IP version to use.

Note that IPv6 is supported only for certain task types (as detailed in the support documentation), so ip_version = 6 is a valid setting only for those tasks.

Creating a Monitor with an Alarm

To associate an alarm with a monitor, you can either point to an alarm template that has been defined, or you can supply the entire alarm configuration with the POST operation. We will give one example of each approach below.

Setting Up a Monitor Alarm by Pointing to an Alarm Template

In order to make use of an alarm template, you must know its ID. To this end, first retrieve all alarm templates as described in the section Retrieving All Alarm Templates and note the id value of the relevant template. Suppose this ID is "3". You can then refer to that template as follows:

Supply monitor input values here as in the previous example.

(Some optional parameters are omitted here.)
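A sketch of such a request is given below. The endpoint path and the field carrying the alarm template reference are assumptions for illustration; check the API reference for the exact field names.

```python
import requests

# Placeholders: adapt to your deployment; the endpoint path and the
# alarm-related field names are illustrative assumptions.
BASE_URL = "https://<control-center-host>/rest"
ACCOUNT = "<account-name>"
HEADERS = {"Authorization": "Token <api-token>", "Accept": "application/json"}

url = f"{BASE_URL}/accounts/{ACCOUNT}/monitors/"

payload = {
    "name": "TWAMP monitor with alarm",
    "template_id": 2,                    # example monitor template ID
    "alarms": [
        {"alarm_template_id": 3}         # ID noted from the alarm template listing
    ],
    "inputs": {
        # Supply monitor input values here as in the previous example.
    },
}

def create_monitor_with_alarm():
    """POST the monitor definition, referring to an existing alarm template."""
    response = requests.post(url, headers=HEADERS, json=payload)
    response.raise_for_status()
    return response.json()

# Uncomment to run against a live deployment:
# print(create_monitor_with_alarm())
```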

Setting Up a Monitor Alarm by Configuring It Directly

Alternatively, you can set up an alarm for a monitor by supplying its entire configuration when creating the monitor, without referring to an alarm template. This is done as shown in the following example.

Supply monitor input values here.
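A sketch of this approach follows. The alarm configuration fields shown are illustrative examples only, not an exact or exhaustive schema; consult the API reference for the fields your Control Center version accepts.

```python
import requests

# Placeholders: adapt to your deployment; endpoint path and alarm field
# names below are illustrative assumptions.
BASE_URL = "https://<control-center-host>/rest"
ACCOUNT = "<account-name>"
HEADERS = {"Authorization": "Token <api-token>", "Accept": "application/json"}

url = f"{BASE_URL}/accounts/{ACCOUNT}/monitors/"

payload = {
    "name": "TWAMP monitor with inline alarm",
    "template_id": 2,     # example monitor template ID
    "alarms": [
        {
            # Full alarm configuration supplied inline instead of an
            # alarm template reference. Field names are examples only.
            "name": "High-loss alarm",
            "threshold_es_percent": 10,   # errored-second threshold (%)
            "window_size": 600,           # evaluation window in seconds
        }
    ],
    "inputs": {
        # Supply monitor input values here.
    },
}

def create_monitor_with_inline_alarm():
    """POST the monitor definition with the alarm configured inline."""
    response = requests.post(url, headers=HEADERS, json=payload)
    response.raise_for_status()
    return response.json()

# Uncomment to run against a live deployment:
# print(create_monitor_with_inline_alarm())
```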

Starting and Stopping a Monitor

If the monitor was not configured to start at creation time (started set to false), you need to apply a PUT or PATCH operation to start it (the two operations are equivalent). Below, the PATCH operation is shown.
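A sketch of the PATCH request follows; host, token, endpoint path, and monitor ID are placeholder assumptions.

```python
import requests

# Placeholders: adapt to your deployment; the endpoint path is illustrative.
BASE_URL = "https://<control-center-host>/rest"
ACCOUNT = "<account-name>"
HEADERS = {"Authorization": "Token <api-token>", "Accept": "application/json"}

monitor_id = 1  # example ID returned when the monitor was created

url = f"{BASE_URL}/accounts/{ACCOUNT}/monitors/{monitor_id}/"

def start_monitor():
    """PATCH the monitor, setting started to true."""
    response = requests.patch(url, headers=HEADERS, json={"started": True})
    response.raise_for_status()
    return response.json()

# Uncomment to run against a live deployment:
# print(start_monitor())
```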

The monitor is now started:

To stop the monitor, use the same operation but with started set to false:
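Sketched in the same way (placeholders as before):

```python
import requests

# Placeholders: adapt to your deployment; the endpoint path is illustrative.
BASE_URL = "https://<control-center-host>/rest"
ACCOUNT = "<account-name>"
HEADERS = {"Authorization": "Token <api-token>", "Accept": "application/json"}

monitor_id = 1  # example monitor ID

url = f"{BASE_URL}/accounts/{ACCOUNT}/monitors/{monitor_id}/"

def stop_monitor():
    """PATCH the monitor, setting started to false."""
    response = requests.patch(url, headers=HEADERS, json={"started": False})
    response.raise_for_status()
    return response.json()

# Uncomment to run against a live deployment:
# print(stop_monitor())
```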

Retrieving SLA Status and Data Metrics for a Monitor

Here is how to retrieve the SLA status and comprehensive data metrics for a monitor. This operation also fetches the complete configuration of the monitor.

By default, the SLA status is returned for each of the following time intervals: last 15 minutes, last hour, and last 24 hours. You can specify a different time interval, replacing the default ones, by including the start and end parameters in a query string at the end of the URL. The time is given in UTC (ISO 8601) as specified in IETF RFC 3339. An example is given below.

This operation can also return detailed data metrics for each task performed by the monitor. You turn on this feature by setting with_detailed_metrics to true (by default, this flag is set to false). The detailed data metrics are found under tasks > streams > metrics and are given for successive time intervals whose length is determined by the resolution parameter. The default resolution, which is also the finest available, is 10 seconds. The resolution value entered is converted into one of the available resolutions: 10 seconds, 1 minute, 5 minutes, 20 minutes, or 4 hours.

Averaged metrics are returned by default. You can turn these off by setting with_metrics_avg to false in the query string. Average metrics are by default computed for the last 15 minutes and are found in tasks > streams > metrics_avg. If you specify a different time interval by start and end, averaged metrics will be returned for that interval instead.

The output also includes monitor logs.

Example (with default resolution 10 seconds for detailed data metrics):
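A sketch of this GET request follows; host, token, endpoint path, and monitor ID are placeholder assumptions.

```python
import requests

# Placeholders: adapt to your deployment; the endpoint path is illustrative.
BASE_URL = "https://<control-center-host>/rest"
ACCOUNT = "<account-name>"
HEADERS = {"Authorization": "Token <api-token>", "Accept": "application/json"}

monitor_id = 1  # example monitor ID

url = f"{BASE_URL}/accounts/{ACCOUNT}/monitors/{monitor_id}/"

# Turn on detailed data metrics; with no start/end given, the SLA status
# covers the default intervals (last 15 minutes, last hour, last 24 hours).
params = {"with_detailed_metrics": "true"}

def get_monitor_sla():
    """Fetch SLA status, configuration, and detailed data metrics."""
    response = requests.get(url, headers=HEADERS, params=params)
    response.raise_for_status()
    return response.json()

# Uncomment to run against a live deployment:
# print(get_monitor_sla())
```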

The output will be similar to the following:

Here is how to specify start and end times and the time resolution of the detailed data metrics in a query string:
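For instance, sketched with placeholder host, token, and path, and with example timestamps (UTC, RFC 3339):

```python
import requests

# Placeholders: adapt to your deployment; the endpoint path is illustrative.
BASE_URL = "https://<control-center-host>/rest"
ACCOUNT = "<account-name>"
HEADERS = {"Authorization": "Token <api-token>", "Accept": "application/json"}

monitor_id = 1  # example monitor ID

url = f"{BASE_URL}/accounts/{ACCOUNT}/monitors/{monitor_id}/"

# start/end values are examples only (UTC, RFC 3339). The resolution is
# requested in seconds and converted to the nearest available resolution.
params = {
    "start": "2023-06-01T12:00:00Z",
    "end": "2023-06-01T13:00:00Z",
    "resolution": "60",
    "with_detailed_metrics": "true",
}

def get_monitor_sla_for_interval():
    """Fetch SLA status and detailed metrics for a specific time interval."""
    response = requests.get(url, headers=HEADERS, params=params)
    response.raise_for_status()
    return response.json()

# Uncomment to run against a live deployment:
# print(get_monitor_sla_for_interval())
```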

You can also retrieve all monitors with their SLAs in one go. However, in this case, no detailed data metrics are included in the export (the tasks > streams item is omitted). This is to limit the size of the output if the number of monitors is large.
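Sketched as follows (placeholder host, token, and path):

```python
import requests

# Placeholders: adapt to your deployment; the endpoint path is illustrative.
BASE_URL = "https://<control-center-host>/rest"
ACCOUNT = "<account-name>"
HEADERS = {"Authorization": "Token <api-token>", "Accept": "application/json"}

# List all monitors with their SLA status (no detailed data metrics)
url = f"{BASE_URL}/accounts/{ACCOUNT}/monitors/"

def list_monitors():
    """Fetch all monitors in the account with their SLA status."""
    response = requests.get(url, headers=HEADERS)
    response.raise_for_status()
    return response.json()

# Uncomment to run against a live deployment:
# print(list_monitors())
```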

Generating a PDF Report on a Monitor

You can generate a PDF report on a monitor directly from the REST API. The report has the same format as that generated from the Control Center GUI.

By default, the report covers the last 15 minutes. You can specify a different time interval by including the start and end parameters in a query string at the end of the URL. The time is given in UTC (ISO 8601) as specified in IETF RFC 3339.

In addition, the following options can be included in the query string:

  • worst_num: For each task in a monitor, you can specify how many measurement results to show, ranked by the number of errored seconds with the worst on top. The scope of a measurement result is task-dependent; to give one example, for HTTP it is the result obtained for one client. The default number is 30.
  • graphs: Include graphs in the report.

Example:
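A sketch of the report request follows. The report endpoint path is an illustrative assumption, and the start/end values are examples only; worst_num and graphs are the optional parameters described above.

```python
import requests

# Placeholders: adapt to your deployment; the report endpoint path is an
# illustrative assumption.
BASE_URL = "https://<control-center-host>/rest"
ACCOUNT = "<account-name>"
HEADERS = {"Authorization": "Token <api-token>"}

monitor_id = 1  # example ID of the monitor to report on

url = f"{BASE_URL}/accounts/{ACCOUNT}/monitors/{monitor_id}/report/"

# start/end values are examples only (UTC, RFC 3339).
params = {
    "start": "2023-06-01T12:00:00Z",
    "end": "2023-06-01T13:00:00Z",
    "worst_num": 10,    # show the 10 worst measurement results per task
    "graphs": "true",   # include graphs in the report
}

def download_report(filename="monitor_report.pdf"):
    """Fetch the PDF report and write it to a local file."""
    response = requests.get(url, headers=HEADERS, params=params)
    response.raise_for_status()
    with open(filename, "wb") as f:
        f.write(response.content)

# Uncomment to run against a live deployment:
# download_report()
```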