Apstra Cluster Nodes

Nodes Overview

When you add a worker VM to the main Apstra controller VM, the worker registers with the Apstra server VM through sysdb, collects facts about the VM (such as core/memory/disk configuration and usage), and launches containers on the local VM. The Apstra controller VM responds to REST API requests, configures worker VMs to join or leave the cluster, and keeps track of cluster-wide runtime information. It also responds to container configuration entities and schedules them onto worker VMs.
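
These cluster operations are also reachable through the Apstra REST API. The following Python sketch lists the cluster nodes and their states; the /api/aaa/login and /api/cluster/nodes endpoints, the AuthToken header, and the response field names are assumptions for illustration only, so confirm them against the API reference for your release.

    import requests

    APSTRA = "https://apstra.example.com"  # controller address (placeholder)

    # Authenticate to obtain an API token. The /api/aaa/login endpoint,
    # the "token" response field, and the AuthToken header below are
    # assumptions for illustration.
    login = requests.post(f"{APSTRA}/api/aaa/login",
                          json={"username": "admin", "password": "password"},
                          verify=False)  # controller often uses a self-signed cert
    login.raise_for_status()
    headers = {"AuthToken": login.json()["token"]}

    # List the cluster nodes and print each node's name, roles, and
    # state (ACTIVE, MISSING, or FAILED).
    nodes = requests.get(f"{APSTRA}/api/cluster/nodes",
                         headers=headers, verify=False)
    nodes.raise_for_status()
    for node in nodes.json().get("items", []):
        print(node["label"], node["roles"], node["state"])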

Apstra VM nodes include the following details:

Table 1: Apstra VM Nodes Parameters

Address: IP address or Fully-Qualified Domain Name (FQDN) of the VM.
Name: Apstra VM name, such as controller (the main Apstra controller node) or worker - iba (a worker node).
State: ACTIVE, MISSING, or FAILED.
Roles: Controller or worker.
Tags: The controller node and any worker nodes that you add are tagged with iba and offbox by default. If you delete one or both of these tags, or delete a worker node that carries them, any IBA and/or offbox containers on that node automatically move to a VM with those tags. Make sure another node carries the tag(s) you're deleting, or the containers are deleted along with the tag or node.
Capacity Score: A score that the Apstra software calculates.
Containers Count: Number of containers running on the node.
CPU: Number of CPUs.
Errors: Shown as applicable. An example of an error is an agent process that restarted because the agent crashed.
Usage:
  • Memory Usage (percentage)
  • CPU Usage (percentage)
  • Disk Usage: current VM disk usage per logical volume (GB and percentage)
  • Container Service Usage: derived from the resources a container requires and the size of the node. For example, if an offbox agent that needs 250 MB is running on a 500 MB worker node, the container service usage is 50%. (An IBA container may require 1 GB.) A controller node begins at 50% usage because it runs its own agents that perform controller-specific processing logic. (A worked sketch of this calculation follows the table.)
Containers: The containers running on the node and the resources that each container uses.
Username/Password: Apstra server VM SSH username/password login credentials.
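
The Container Service Usage figure is simple arithmetic. A minimal sketch of the calculation described above; the function name and values are illustrative only:

    def container_service_usage(required_mb, node_capacity_mb, base_pct=0.0):
        # Usage is the resources a container requires divided by the node's
        # size, on top of any base load. A controller node starts at a base
        # of 50% because it runs its own processing agents.
        return base_pct + 100.0 * required_mb / node_capacity_mb

    # A 250 MB offbox agent on a 500 MB worker node -> 50.0 (%)
    print(container_service_usage(250, 500))
    # The same agent on a controller node that starts at 50% -> 100.0 (%)
    print(container_service_usage(250, 500, base_pct=50.0))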

From the left navigation menu, navigate to Platform > Apstra Cluster to view Apstra nodes. Click a node's address to see its details. You can create, clone, edit, and delete Apstra nodes.

At the bottom left of every page, you have continuous visibility of platform health. Green indicates the active state. Red indicates an issue, such as a missing agent, a disk in read-only mode, or an agent that is rebooting (after the agent has rebooted, the status returns to active). If IBA Services or Offbox Agents is green, all of its containers have launched. If either is red, at least one container has failed. From any page, click one of the dots, then click a section for details. Clicking Controller, IBA Services, or Offbox Agents takes you to node details.

Create Apstra Node

  1. Install Apstra software on the VMs that you want to cluster, making sure they all run the same Apstra version as the main Apstra controller (which acts as the cluster manager). If the versions don't match, the controller won't accept the VMs as part of the cluster.
  2. From the left navigation menu, navigate to Platform > Apstra Cluster and click Add Node.
  3. Enter a name, tags (optional), address (IP or FQDN), and the Apstra server VM SSH username/password login credentials. (The iba and offbox tags are added by default.)
  4. Click Create. As the main Apstra controller connects to the new Apstra VM worker node, the state of the new Apstra VM changes from INIT to ACTIVE.
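
The same create operation can be scripted against the REST API. This sketch reuses the APSTRA address and headers from the listing sketch in the overview; the endpoint path and payload field names mirror the form fields above but are assumptions, not a documented contract.

    # Add a worker node to the cluster. The payload mirrors the form:
    # name, optional tags, address (IP or FQDN), and SSH credentials.
    # All values below are placeholders.
    payload = {
        "label": "worker-iba-1",
        "tags": ["iba", "offbox"],  # the defaults
        "address": "192.0.2.10",
        "username": "admin",
        "password": "ssh-password",
    }
    resp = requests.post(f"{APSTRA}/api/cluster/nodes",
                         headers=headers, json=payload, verify=False)
    resp.raise_for_status()
    # The new node starts in INIT and transitions to ACTIVE once the
    # controller connects to it.
    print(resp.json())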

Edit Apstra Node

  1. Either from the list view (Platform > Apstra Cluster) or the details view, click the Edit button for the VM that you want to edit.
  2. Make your changes. If you delete iba and/or offbox tags from the node, the IBA and/or offbox containers (as applicable) are moved to another node with those tags. Make sure the cluster has another node with those tags, or the containers will be deleted instead of moved.
    CAUTION:

    To prevent containers from being deleted, don’t delete tags unless another node in the cluster has the same tags.

  3. Click Update to update the Apstra VM worker node.
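
You can automate the caution above before removing a tag: confirm that at least one other node still carries it. A minimal sketch, assuming the node objects expose "id" and "tags" fields as in the earlier listing sketch:

    def tag_safe_to_remove(nodes, node_id, tag):
        # True if some *other* node still carries the tag, so containers
        # move there instead of being deleted.
        return any(tag in n.get("tags", [])
                   for n in nodes if n["id"] != node_id)

    items = requests.get(f"{APSTRA}/api/cluster/nodes",
                         headers=headers, verify=False).json().get("items", [])
    for tag in ("iba", "offbox"):
        if not tag_safe_to_remove(items, "worker-node-id", tag):
            print(f"No other node carries '{tag}'; its containers would be deleted.")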

Delete Apstra Node

When you delete a node that includes iba and/or offbox tags, the IBA and/or offbox containers (as applicable) are moved to another node with those tags. Make sure the cluster has another node with those tags, or the containers will be deleted instead of moved.

CAUTION:

To prevent containers from being deleted, don’t delete nodes with iba and/or offbox tags unless another node in the cluster has the same tags.

  1. Either from the list view (Platform > Apstra Cluster) or the details view, click the Delete button for the Apstra VM that you want to delete.
  2. Click Delete to delete the Apstra VM.
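
Deleting a node over the REST API is a single call; the path below is an assumption. If the node carries iba or offbox tags, run the tag check from the Edit section first.

    # Remove a worker node from the cluster (hypothetical node ID).
    resp = requests.delete(f"{APSTRA}/api/cluster/nodes/worker-node-id",
                           headers=headers, verify=False)
    resp.raise_for_status()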