

Upgrade In-Place

To upgrade the Apstra server, you need Apstra OS admin user privileges and Apstra admin user group permissions.

Pre-Upgrade Validation (In-Place)

  1. Confirm that you are upgrading to a supported version.
  2. Log into the Apstra server as admin (for example, if your Apstra server IP address were <apstra_server_ip>, the command would be ssh admin@<apstra_server_ip>).
  3. Check that memory utilization is less than 50% to confirm that the VM has enough memory to hold two versions of the Apstra software at the same time. Check resources by running the command free -h.
  4. If utilization is greater than 50%, gracefully shut down the Apstra server, add resources, then restart the Apstra server.
  5. Check that the server is active and has no issues by running the command service aos status.
  6. Review each blueprint to confirm that all Service Config has succeeded. If necessary, undeploy and remove devices from the blueprint to resolve any pending or failed service config.
  7. Review each blueprint for probe anomalies, and resolve them as much as possible. Take notes of any remaining anomalies.
  8. Refer to Qualified Device and NOS to verify that the devices' NOS versions are qualified on the new Apstra version. Upgrade or downgrade as needed, to one of the supported versions.
  9. Remove any device AAA configuration. During device upgrade, configured device agent credentials are required for SSH access.
  10. Remove any configlets used to configure firewalls. If you use firewall filters on the devices' routing engines, you'll need to update them to include the IP addresses of the new controller and worker VMs.
  11. Run an Apstra System "Check" job for all devices (Devices > Agents) and verify that all job states are SUCCESS.

    To upgrade device system agents, Apstra software must SSH to all devices using the configured credentials that were used when creating the device system agent. To verify SSH using these credentials, we recommend running an Apstra System "Check" job for all devices. If any check job fails, resolve the issue before proceeding with the Apstra upgrade.

  12. As root user, back up the Apstra server by running the command sudo aos_backup.
  13. Copy the backup files from /var/lib/aos/snapshot/<snapshot_name> to an external location.
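The resource checks above (steps 3 through 5) can be scripted. This is a minimal sketch assuming a Linux Apstra server; the service and backup commands are shown as comments because they exist only on the Apstra VM:

```shell
# Step 3: compute memory utilization (used / total) as a percent from `free`.
used_pct=$(free | awk '/^Mem:/ { printf "%d", $3 * 100 / $2 }')
echo "memory used: ${used_pct}%"
if [ "$used_pct" -ge 50 ]; then
  # Step 4: over 50% - gracefully shut down, add resources, then restart.
  echo "WARN: not enough headroom to hold two Apstra versions at once"
fi
# service aos status    # step 5: confirm the server is active with no issues
# sudo aos_backup       # step 12: back up the Apstra server (run as root)
# Then copy /var/lib/aos/snapshot/<snapshot_name> to an external location.
```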

Deploy New Apstra Server (In-Place)

  1. Download the Apstra installer package for your platform from Juniper Support Downloads and transfer it to the Apstra server.
  2. Unzip the Apstra installer package.
  3. If you're using an Apstra cluster (off-box agents, IBA probes), download the installer package to the worker nodes as well. You'll upgrade the worker nodes in a later step.
  4. Log into the Apstra server as admin.
  5. Run the sudo bash aos_<aos_version>.run command, where <aos_version> is the version of the run file. For example, if the version is 4.0.1-1045 the command would be sudo bash aos_4.0.1-1045.run.

    When you run this command, if any previous Apstra versions are detected, the script enters upgrade mode instead of new installation mode. The new Docker container installs next to the Docker containers from the previous version. The script imports the data from the previous version and migrates it to the Apstra SysDB on the new version.

    You’ll be shown a summary of the configuration changes that will be pushed to devices as part of the upgrade.

    As of Apstra version 4.0.1, the Apstra Upgrade Summary shows information separated by device roles (superspine, spine, leaf, leaf pair, and access switch for example). If an incremental config was applied instead of a full config, more details are displayed about the changes.

  6. After you've reviewed the summary, enter q to exit the summary. The AOS Upgrade: Interactive Menu appears where you can access additional information, and continue or quit the upgrade.

    The Apstra Reference Design in the new Apstra release may have changed in a way that invalidates configlets. To avoid unexpected outcomes, verify that your configlets don’t conflict with the newly rendered config. If you need to update your configlets, quit the upgrade, update your configlets, then run the upgrade again.

  7. If you want to continue with the upgrade after reviewing pending changes, enter c. The older Apstra version is deleted and the new Apstra version is activated on the server. When the upgrade is complete, you can check the version by navigating to Platform > About in the Apstra GUI.

    Upgrading the Apstra server is a disruptive process. When you upgrade in-place (same VM) and continue with the upgrade from this point, you cannot roll back the upgrade. The only way to return to the previous version is to reinstall a new VM with the previous version and restore the database from the backup that you previously made.

  8. If you want to stop the upgrade, enter q to abort the process. If you quit at this point and later decide to upgrade, you must start the process from the beginning.
  9. If you're using an Apstra cluster, the worker nodes disconnect from the Apstra controller and change to the FAILED state. This state means that off-box agents and the IBA probe containers that are on the worker nodes are not available; devices that are managed by the off-box agents do remain in service though. After you upgrade the agents in a later step, you'll upgrade the worker nodes in your Apstra cluster and the agents and/or probes will become available.
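The installer filename in step 5 is composed from the release version string. The sketch below shows how the command is formed, using the 4.0.1-1045 example version from this guide (the .zip archive name is an assumption for illustration):

```shell
ver="4.0.1-1045"              # example version from this guide
pkg="aos_${ver}.run"
# unzip aos_${ver}.run.zip    # step 2: unzip the installer package (assumed name)
echo "sudo bash ${pkg}"       # step 5: enters upgrade mode if a previous
                              # Apstra version is detected on the server
```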

Change Operation Mode to Normal (In-Place)

When you initiate an Apstra server upgrade, the operation mode changes from Normal to Maintenance automatically. After you've completed the upgrade you must manually change the mode back to Normal.

  1. From the left navigation menu in the Apstra GUI, navigate to Platform > Apstra Cluster > Cluster Management.
  2. Click the Change Operation Mode button, select Normal, then click Update. When you change the mode to Normal, any configured off-box agents are activated, but you must initiate the upgrade of any on-box agents (in the next section).

    You can also access the Cluster Management page from the lower left section of any page. The colored dots there also give you continuous visibility into platform health.


    This feature has been classified as a Juniper Apstra Technology Preview feature. These features are provided "as is", and their use is voluntary. Juniper Support will attempt to resolve any issues that customers experience when using these features and will create bug reports on behalf of support cases. However, Juniper may not provide comprehensive support services for Technology Preview features.

    For additional information, refer to the Juniper Apstra Technology Previews page or contact Juniper Support.

    From the bottom of the left navigation menu, click one of the dots, then click Operation Mode to go to Cluster Management. Click the Change Operation Mode button, select Normal, then click Update.

    Because they're still in the process of upgrading, the agents won't be connected. When the upgrade has completed, the agents reconnect to the server and come back online. On the blueprint dashboard the Liveness anomalies for spine and leaf will also resolve.

Upgrade On-box Agents (In-Place)

Devices are still connected to the old Apstra server (see Devices > Managed Devices). When you upgrade the agents, the devices disconnect from the old Apstra server and connect to the new Apstra server.


When you initiate agent upgrade you cannot roll back to the previous version. The only way to return to the previous version is to reinstall a new VM with the previous version and restore the database from the backup that you previously made.

  1. Log into the Apstra GUI as user admin.
  2. From the left navigation menu, navigate to Devices > System Agents > Agents and select the device(s) to upgrade (up to 100 devices at a time as of Apstra version 4.0.1). You can upgrade multiple on-box agents at the same time, but the order of device upgrade is important.
    • Upgrade agents for superspines first.
    • Upgrade agents for spines second.
    • Upgrade agents for leafs third.
  3. Click the Install button to initiate the install process. The job state changes to IN PROGRESS. If agents are using a previous version of the Apstra software, they are automatically upgraded to the new version. Then they connect to the server and push any pending configuration changes to the devices. Telemetry also resumes, and the job states change to SUCCESS.
  4. In the Liveness section on the blueprint dashboard, confirm that you don't have any device anomalies.

    If you need to roll back to the previous Apstra version after initiating agent upgrade, you must build a new VM with the previous Apstra version and restore the configuration to that VM. For assistance, contact Juniper Support.

Upgrade Worker Nodes (Apstra Cluster Only)

If you're using an Apstra cluster (for off-box agents and/or IBA probes), you need to upgrade the worker nodes as well as the controller node that you have already upgraded.

  1. If you didn't download the Apstra installer package to the worker nodes when you downloaded it to the Apstra server, do that now.
  2. From each Apstra worker node, run the sudo bash aos_<aos_version>.run command, where <aos_version> is the version of the run file. For example, if the version is 4.0.1-1045 the command would be sudo bash aos_4.0.1-1045.run (no options). This is the same file you used to upgrade the controller. There are no prompts during the worker node upgrade.
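If you have several worker nodes, the copy-and-run sequence can be generated with a small loop. This is a sketch only; the worker hostnames below are hypothetical placeholders, and the echoed commands are what you would run against each node:

```shell
ver="4.0.1-1045"                  # example version from this guide
workers="worker1 worker2"         # hypothetical worker node hostnames
for node in $workers; do
  # step 1: copy the same installer used on the controller to the worker,
  # step 2: run it as admin (there are no prompts on worker nodes)
  echo "scp aos_${ver}.run admin@${node}:"
  echo "ssh admin@${node} sudo bash aos_${ver}.run"
done
```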