Cluster Nodes

Nodes Overview

When a worker VM is added to the main Apstra controller VM, it registers with the Apstra server VM through sysdb, collects facts about the VM (such as core/memory/disk configuration and usage), and launches containers on the local VM. The Apstra controller VM responds to REST API requests, configures the worker VM to join or leave the cluster, and keeps track of cluster-wide runtime information. It also responds to container configuration entities and schedules them onto the worker VMs.

Apstra VM nodes include the following details:

Table 1: Apstra VM Nodes Parameters

  • Address - IP address or Fully-Qualified Domain Name (FQDN) of the VM
  • Name - Apstra VM name, such as controller (the main Apstra controller node) or worker - iba (a worker node)
  • State - ACTIVE, MISSING, or FAILED
  • Roles - Controller or worker
  • Tags - The controller node and any worker nodes that you add are tagged with iba and offbox by default. If you delete one or both of these tags, or delete a worker node that has them, any IBA and/or off-box containers on that node automatically move to another VM with those tags. Make sure another node has the tag(s) you're deleting, or the containers are deleted when you delete the tag or node.
  • Capacity Score - Calculated by the Apstra software
  • CPU - Number of CPUs
  • Errors - Shown as applicable. For example, an error appears when an agent process has restarted because the agent crashed.
  • Usage:
    • Container Service Usage - Current VM container service usage (percentage)
    • Containers Count
    • Memory Usage (percentage)
    • CPU Usage (percentage)
    • Disk Usage - Current VM disk usage per logical volume (GB and percentage)
  • Containers - The containers running on the node and the resources used by each container
  • Username/Password - Apstra server VM SSH username/password login credentials

From the left navigation menu, navigate to Platform > Apstra Cluster to see Apstra nodes. Click a node address to see its details. You can create, clone, edit, and delete Apstra nodes.
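
The same node details are available through the Apstra REST API if you prefer to script this check. The following is a minimal sketch in Python; the /api/aaa/login and /api/cluster/nodes endpoint paths and the response field names are assumptions to verify against the API reference for your Apstra version.

    # Minimal sketch: list Apstra cluster nodes over the REST API.
    # Assumptions: POST /api/aaa/login returns a session token and
    # GET /api/cluster/nodes lists the nodes; confirm both paths and the
    # field names against the API docs for your Apstra version.
    import requests

    APSTRA = "https://aos-server"                       # controller VM address (example)
    AUTH = {"username": "admin", "password": "admin"}   # example credentials

    # Authenticate and capture the session token (verify=False only because
    # the controller typically presents a self-signed certificate).
    login = requests.post(f"{APSTRA}/api/aaa/login", json=AUTH, verify=False)
    headers = {"AuthToken": login.json()["token"]}      # assumed response field

    # Retrieve every cluster node and print the details shown in Table 1.
    nodes = requests.get(f"{APSTRA}/api/cluster/nodes", headers=headers, verify=False)
    for node in nodes.json().get("items", []):          # assumed collection key
        print(node.get("label"), node.get("address"), node.get("state"),
              node.get("roles"), node.get("tags"))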

You have continuous visibility of platform health at the bottom left of every screen (GA in Apstra version 4.0.2 and Tech Preview in Apstra version 4.0.1). Green indicates the active state. Red indicates a problem, such as a missing agent, a disk in read-only mode, or an agent that is rebooting (after the agent has rebooted, the status returns to active). You can go directly to node details from any page by clicking one of the status indicators, then clicking the controller node or a worker node name.


Note:

This feature has been classified as a Juniper Apstra Technology Preview feature. These features are provided "as is" and are for voluntary use. Juniper Support will attempt to resolve any issues that customers experience when using these features and create bug reports on behalf of support cases. However, Juniper may not provide comprehensive support services for Technology Preview features. For additional information, refer to Juniper Apstra Technology Previews or contact Juniper Support.

Create Apstra Node

  1. Install Apstra software on the VMs to be clustered, making sure they are all the same Apstra version as the main Apstra controller (which acts as the cluster manager). If they are not the same version, the controller will not accept them as part of the cluster.
  2. From the left navigation menu, navigate to Platform > Apstra Cluster and click Add Node.
  3. Enter a name, tags (optional), address (IP or FQDN), and Apstra Server VM SSH username/password login credentials. (iba and offbox tags are added by default.)
  4. Click Create. As the main Apstra controller connects to the new Apstra VM worker node, the state of the new Apstra VM changes from INIT to ACTIVE. (A scripted equivalent is sketched after these steps.)
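
If you add worker nodes from scripts rather than the UI, the same operation can be driven through the REST API. This is a minimal sketch, assuming a POST to /api/cluster/nodes with the node name, address, tags, and SSH credentials; the endpoint path and payload/response field names are assumptions to confirm against the API reference for your Apstra version.

    # Minimal sketch: add a worker node to the Apstra cluster via the REST API,
    # then poll until the controller finishes joining it (INIT -> ACTIVE).
    # Assumptions: POST /api/cluster/nodes creates the node and
    # GET /api/cluster/nodes/<id> returns its current state.
    import time
    import requests

    APSTRA = "https://aos-server"                       # controller VM address (example)
    HEADERS = {"AuthToken": "<token from /api/aaa/login>"}

    payload = {
        "label": "worker-1",                            # node name (example)
        "address": "192.0.2.11",                        # worker VM IP or FQDN
        "tags": ["iba", "offbox"],                      # default tags
        "username": "admin",                            # Apstra server VM SSH credentials
        "password": "admin",
    }

    resp = requests.post(f"{APSTRA}/api/cluster/nodes", json=payload,
                         headers=HEADERS, verify=False)
    node_id = resp.json()["id"]                         # assumed response field

    while True:
        node = requests.get(f"{APSTRA}/api/cluster/nodes/{node_id}",
                            headers=HEADERS, verify=False).json()
        if node.get("state") == "ACTIVE":
            break
        time.sleep(10)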

Edit Apstra Node

  1. Either from the list view (Platform > Apstra Cluster) or the details view, click the Edit button for the VM to edit.
  2. Make your changes. If you delete the iba and/or offbox tags from the node, the IBA and/or off-box containers (as applicable) are moved to another node with those tags. Make sure the cluster has another node with those tags, or the containers will be deleted instead of moved. (A pre-check is sketched after these steps.)
    CAUTION:

    To prevent containers from being deleted, don't delete tags unless another node in the cluster has the same tags.

  3. Click Update to update the Apstra VM worker node.
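
Because removing the iba or offbox tag can delete containers when no other node carries that tag, a quick pre-check is worthwhile before you click Update. The sketch below is one way to verify tag coverage, reusing the assumed GET /api/cluster/nodes endpoint from the earlier examples.

    # Minimal sketch: before removing a tag from a node, confirm that at least
    # one other node in the cluster still carries it; otherwise the IBA or
    # off-box containers on this node would be deleted rather than moved.
    # Assumption: each node object exposes "id" and "tags" fields.
    import requests

    APSTRA = "https://aos-server"
    HEADERS = {"AuthToken": "<token from /api/aaa/login>"}

    def tag_covered_elsewhere(node_id, tag):
        nodes = requests.get(f"{APSTRA}/api/cluster/nodes",
                             headers=HEADERS, verify=False).json().get("items", [])
        return any(tag in n.get("tags", []) for n in nodes if n.get("id") != node_id)

    if not tag_covered_elsewhere("worker-1-id", "iba"):  # hypothetical node ID
        print("No other node is tagged 'iba'; removing it here deletes its IBA containers.")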

Delete Apstra Node

When you delete a node that includes iba and/or offbox tags, the IBA and/or off-box containers (as applicable) are moved to another node with those tags. Make sure the cluster has another node with those tags, or the containers will be deleted instead of moved. (A scripted check and delete are sketched after the steps below.)
CAUTION:

To prevent containers from being deleted, don't delete nodes with iba and/or offbox tags unless another node in the cluster has the same tags.

  1. Either from the list view (Platform > Apstra Cluster) or the details view, click the Delete button for the Apstra VM to delete.
  2. Click Delete to delete the Apstra VM.
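
A scripted delete follows the same pattern: check that the node's iba/offbox tags exist on another node, then remove it. This is a sketch under the same endpoint assumptions as the earlier examples; verify DELETE /api/cluster/nodes/<id> against the API reference for your Apstra version.

    # Minimal sketch: delete a worker node via the REST API after confirming
    # its tags are carried by another node, so containers move instead of
    # being deleted. Endpoint paths and field names are assumptions.
    import requests

    APSTRA = "https://aos-server"
    HEADERS = {"AuthToken": "<token from /api/aaa/login>"}
    NODE_ID = "worker-1-id"                             # hypothetical node ID

    nodes = requests.get(f"{APSTRA}/api/cluster/nodes",
                         headers=HEADERS, verify=False).json().get("items", [])
    doomed = next((n for n in nodes if n.get("id") == NODE_ID), None)
    if doomed is None:
        raise SystemExit(f"Node {NODE_ID} not found in the cluster.")

    others = [n for n in nodes if n.get("id") != NODE_ID]
    for tag in doomed.get("tags", []):
        if not any(tag in n.get("tags", []) for n in others):
            raise SystemExit(f"No other node carries tag '{tag}'; its containers would be deleted.")

    requests.delete(f"{APSTRA}/api/cluster/nodes/{NODE_ID}", headers=HEADERS, verify=False)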