Upgrading Apstra Server¶
For new installations with new blueprints, only version 3.3.0 is available.
Note
Within the 3.3.0 release train, upgrading from 3.3.0.1 to 3.3.0.2 is supported.
Supported Upgrade Paths¶
Warning
“.0” major feature release versions (e.g. 3.3.0, 3.2.0, 3.1.0) are for new installations ONLY. You CANNOT UPGRADE to major feature release versions (starting with version 3.1.0). Supported upgrades include maintenance release versions “.1” and later (e.g. 3.1.1, 3.1.2, etc.). See below for supported upgrade path details.
To 3.2.4-65, the following upgrade paths are supported:
- 3.2.3-46 to 3.2.4-65 (All)
- 3.2.2.4-9 to 3.2.4-65 (All)
- 3.2.2.3-3 to 3.2.4-65 (All)
- 3.2.2.2-3 to 3.2.4-65 (All)
- 3.2.2.1-2 to 3.2.4-65 (All)
- 3.2.2-12 to 3.2.4-65 (All)
- 3.2.1-298 to 3.2.4-65 (All)
- 3.2.0-242 to 3.2.4-65 (All)
- 3.1.1-179 to 3.2.4-65 (All)
To 3.2.3-46, the following upgrade paths are supported:
- 3.2.2-12 to 3.2.3-46 (All)
- 3.2.1-298 to 3.2.3-46 (All)
- 3.2.0.2-4 to 3.2.3-46 (All)
- 3.2.0.1-2 to 3.2.3-46 (All)
- 3.2.0-242 to 3.2.3-46 (All)
- 3.1.1-179 to 3.2.3-46 (All)
To 3.2.2-12, the following upgrade paths are supported:
- 3.2.1-298 to 3.2.2-12 (All)
- 3.2.0.2-4 to 3.2.2-12 (All)
- 3.2.0.1-2 to 3.2.2-12 (All)
- 3.2.0-242 to 3.2.2-12 (All)
- 3.1.1-179 to 3.2.2-12 (All)
To 3.2.1-298, the following upgrade paths are supported:
- 3.2.0-242 to 3.2.1-298 (All)
- 3.2.0.1-2 to 3.2.1-298 (All)
- 3.1.1-179 to 3.2.1-298 (All)
To 3.1.1-179, the following upgrade paths are supported:
- 2.3.1-129 (or higher) to 3.1.1-179 (All)
- 3.0.0-151 (or higher) to 3.1.1-179 (All)
- 3.0.1-96 to 3.1.1-179 (All)
- 3.0.2-133 to 3.1.1-179 (All)
- 3.1.0-206 to 3.1.1-179 (All)
Note
A number of Linux kernel and Nginx security updates were added to version 3.1.1. To receive Ubuntu Linux OS and Nginx updates, you must perform a VM-to-VM upgrade.
To 3.0.1-96, the following upgrade paths are supported:
- 2.2.1-166 to 3.0.1-96 (VM to VM)
- 2.3.0-181 to 3.0.1-96 (All)
- 2.3.1-129 to 3.0.1-96 (All)
- 2.3.2-130 to 3.0.1-96 (All)
For information about additional upgrade paths that may be supported, contact Support.
Known Limitations and Issues¶
- In versions 3.2.x and earlier, due to the way the Apstra server VM is packaged, all Apstra server instances installed from the same OVA/QCOW image share the same SSH host keys. As a result, an attacker can more easily bypass remote host verification when a user connects by SSH to what is believed to be a previously used Apstra server host but is really the attacker’s host performing a man-in-the-middle attack. To update the SSH host keys, see Updating SSH Host Keys.
- To versions 3.2.x: NCLU syntax is no longer used in Cumulus Interface type configlets as of version 3.2.0. When upgrading from version 3.1.1 or earlier, Cumulus Interface type configlets must be rewritten in Linux command style to exactly match /etc/network/interfaces syntax (see the illustrative snippet after this list).
- To version 3.1.1: due to AOS bug (AOS-15213), the device system agent upgrade process for Cumulus Linux devices causes the device switchd process to restart, temporarily disrupting traffic on the device. Contact Support for a maintenance version of 3.1.1 that supports non-disruptive upgrades of networks with Cumulus Linux devices.
- To version 3.1.1: because of a known AOS bug (AOS-14681), the AOS health check on the Apstra server fails and indicates that RCI (Root Cause Identification)-related agents have terminated after the upgrade. To prevent this from happening, disable RCI before upgrading, and allow 10 minutes after disabling RCI before starting the upgrade. After the upgrade is complete, you can re-enable RCI.
- To version 3.1.1: a number of Linux kernel and security updates have been added to the Ubuntu 18.04 base OS in version 3.1.1. To receive Ubuntu Linux OS updates, you must perform a VM-to-VM upgrade.
- To version 3.1.1: AOS bug (AOS-14708), in which users on macOS Catalina using Google Chrome could not accept the default self-signed HTTPS/SSL certificate, is fixed. However, to receive the fix, upgrading users must perform a VM-to-VM upgrade or generate a new self-signed HTTPS/SSL certificate. See Replacing SSL Certificate for more information.
- To any version, saved show tech files are discarded (AOS-14416). To prevent file loss (in case they are subsequently needed) download show tech files before upgrading the Apstra server.
- To any version, configlets are not updated during upgrade. You must maintain configlets manually.
- To any version, built-in device profiles and interface maps are updated to the shipped built-in IMs and DPs of the target release without warning. Any changes in built-in DPs and IMs made by users are not reflected. This is known as AOS-13125.
- To any version, ensure there are no configuration deviation anomalies prior to starting the upgrade. If there are configuration deviation anomalies, the devices may restart processes which may cause traffic disruption on the device.
- To any version, ensure there are no devices in “Undeploy” deploy mode. Upgrade cannot proceed when some devices are set to Undeploy.
- To any version, you must delete Device AAA/TACACS configlets from the blueprint before upgrading the Apstra server, device agent, or NOS. You can re-apply the configlets after the upgrades.
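For illustration, here is the same interface setting expressed in both styles (the interface name and value are examples, not taken from a real blueprint):

# NCLU style (no longer accepted in Cumulus Interface type configlets as of 3.2.0):
net add interface swp5 mtu 9000

# /etc/network/interfaces style (required from 3.2.0 onward):
auto swp5
iface swp5
    mtu 9000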
Check release notes for known limitations and issues.
API Changes¶
All API changes introduced in version 2.3.1 are backward compatible with 2.3.0.
The following non-backward compatible API changes exist between 2.2.1 and 2.3.1:
Rack Types and Rack Based Templates¶
Advanced rack design, introduced in version 2.3, allows users to define rack types with non-MLAG leaf pairs, multiple MLAG leaf pairs, etc. The APIs for both rack types and rack-based templates have been modified in a breaking fashion. The breaking change is in the JSON payloads in the CRUD APIs.
POST/PUT/GET /api/design/rack-types
POST/PUT/GET /api/design/templates
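If you want to compare payloads across the upgrade, exporting the existing objects first can help. A minimal sketch using curl; the server name and credentials are placeholders, and the login endpoint and AuthToken header reflect common AOS API usage, so verify them against your API reference:

# Obtain an API token (placeholder credentials; response field name assumed to be "token").
TOKEN=$(curl -sk -X POST https://aos-server/api/user/login \
    -H "Content-Type: application/json" \
    -d '{"username": "admin", "password": "admin"}' \
    | python3 -c 'import sys, json; print(json.load(sys.stdin)["token"])')

# Save the pre-upgrade JSON so it can be diffed against the new schema later.
curl -sk -H "AuthToken: $TOKEN" https://aos-server/api/design/rack-types > rack-types-pre-upgrade.json
curl -sk -H "AuthToken: $TOKEN" https://aos-server/api/design/templates > templates-pre-upgrade.json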
Virtual Network Creation¶
Per-port virtual networks, introduced in version 2.3, allow users to create virtual network endpoints on individual interfaces of L2 servers that have multiple leaf-facing interfaces. For example, given an L2 server that has 1 link per leaf to a non-MLAG leaf pair, users can create virtual network endpoints on one or both of the leaf-facing server interfaces.
The virtual network facade API has been updated for this. The breaking change is in the JSON payload where the virtual network endpoints are specified. Instead of specifying a system node ID, API clients must now specify an interface node ID.
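The shape of the change looks roughly like the following; the payload structure and field names here are illustrative assumptions, not the actual schema:

# Pre-2.3 (illustrative): each virtual network endpoint referenced a system node ID.
#   { "bound_to": [ { "system_id": "<l2-server-system-node-id>" } ] }
# 2.3 and later (illustrative): each endpoint must reference an interface node ID instead.
#   { "bound_to": [ { "interface_id": "<leaf-facing-interface-node-id>" } ] }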
Upgrading Apstra server VM In-Place¶
Make sure you have the following Apstra server permissions:
- AOS Operating System admin user privileges.
- AOS admin user group permissions.
Apstra server In-Place VM upgrade consists of:
- Pre-Upgrade Validation
- Apstra server In-Place Upgrade To AOS 3.2
- Apstra server In-Place Upgrade To AOS 3.1
- Apstra server Agents In-Place Upgrade
- Apstra server Cluster In-Place VM Upgrade
Pre-Upgrade Validation (Same VM)¶
Verify that the upgrade is supported for your version. See Supported Upgrade Paths.
Verify that you have the required VM resources for upgrading. Run free -m and verify that memory utilization is below 50% before starting the upgrade.

admin@aos-server:~$ free -m
              total        used        free      shared  buff/cache   available
Mem:          64422        5663       54129           6        4628       58078
Swap:          4331           0        4331
admin@aos-server:~$

If utilization is above 50%, gracefully shut down the Apstra server, add resources, then restart the Apstra server.
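As a convenience check (not part of the product), a one-liner can compute the utilization figure from the same data; the output shown is derived from the example above:

admin@aos-server:~$ free -m | awk '/^Mem:/ { printf "Memory utilization: %.0f%%\n", $3/$2*100 }'
Memory utilization: 9%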
Validate the Apstra server health by logging in as admin and running the following command:
admin@aos-server:~$ service aos status
* aos.service - LSB: Start AOS management system
   Loaded: loaded (/etc/init.d/aos; generated)
   Active: active (exited) since Sun 2019-10-20 19:45:08 UTC; 6min ago
     Docs: man:systemd-sysv-generator(8)
    Tasks: 0 (limit: 4915)
   CGroup: /aos.service
admin@aos-server:~$
For each blueprint, review service and probe anomalies, resolve open anomalies as much as possible, and take notes of the remaining ones.
Perform a backup of the old Apstra server by running the sudo aos_backup command:

admin@aos-server:~$ sudo aos_backup
[sudo] password for admin:
====================================================================
Backup operation completed successfully.
====================================================================
New AOS snapshot: 2019-10-19_00-05-06
admin@aos-server:~$
Copy the backup files from /var/lib/aos/snapshot/<snapshot_name> to an external location.
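For example, using scp (the snapshot name matches the example above; the destination host and path are placeholders for your own external storage, and sudo is used because the snapshot directory is root-owned):

admin@aos-server:~$ sudo scp -r /var/lib/aos/snapshot/2019-10-19_00-05-06 backup-user@backup-host:/backups/aos/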
Apstra Server In-Place Upgrade To Version 3.2¶
After you’ve performed the pre-upgrade validation, download the software installer .run image (e.g. aos_3.2.1-298.run) and transfer it to the Apstra server.
admin@aos-server:~$ ls -l *run
-rw------- 1 root root 800916510 Apr  2 17:45 aos_3.2.1-298.run
Start the upgrade by running the installer .run (e.g. aos_3.2.1-298.run) image.
admin@aos-server:~$ sudo bash aos_3.2.1-298.run
[sudo] password for admin:
Verifying archive integrity... All good.
Uncompressing AOS installer  100%
=====================================================================
Backup operation completed successfully.
=====================================================================
AOS[2020-04-09_23:12:33]: Loading AOS 3.2.1-298 image
AOS[2020-04-09_23:13:32]: Initiating upgrade pre-checker
AOS[2020-04-09_23:13:33]: Initiating docker library import
DONE
AOS[2020-04-09_23:14:31]: Preparing to retrieve data from running AOS Server.
DONE
AOS[2020-04-09_23:14:52]: Retrieving data from running AOS Server. This step can take up to 10 minutes
DONE
AOS[2020-04-09_23:17:24]: Importing retrieved state to AOS pre-checker. This step can take up to 20 minutes
DONE
Waiting for blueprint <evpn-veos-virtual> processing to finish.. Done
Summary saved to /tmp/aos-upgrade-config-summary-2020.04.09-231738
(new in version 3.2) Review a summary of the configuration pushed to devices during this upgrade. Page through the output, then press q.
AOS Upgrade Summary
=======================================================================
This is a summary of configuration pushed to devices logically grouped
into sections. Use 'q' to exit this view. For more device specific
configurations, use the menu after quitting this view

BLUEPRINT: evpn-veos-virtual (test-evpn.veos.2485377892357-1139715540 - evpn-veos-virtual)

Section: UPGRADE_BGP_AF_ROUTE_MAPS
Systems: spine1 [5054003468A8, 172.20.98.12]
         leaf1 [505400DBF2ED, 172.20.98.11]
         spine2 [5054007AA372, 172.20.98.13]
         leaf3 [5054000F612A, 172.20.98.14]
         leaf2 [5054004ED91F, 172.20.98.15]

Move route-maps from 'router bgp' context into per-address-family and
per-neighbor route-maps to account for advanced external routing policy
config. Without this change, route-map policies are ambiguous between
ipv4, ipv6, and evpn address-families. This operation is performed
atomically using EOS 'configure session' feature to prevent traffic
loss during the change when the configuration is temporarily removed.

configure session bgp_af_route_map_upgrade
router bgp 64514
   default neighbor 172.16.0.13 route-map MlagPeer out
   default neighbor 198.51.100.2 route-map RoutesFromExt in
   default neighbor 198.51.100.2 route-map RoutesToExt out
   default neighbor l3clos-l route-map AdvLocal out
   address-family ipv4
      neighbor 172.16.0.13 route-map MlagPeer out
      neighbor 198.51.100.2 route-map RoutesFromExt in
      neighbor 198.51.100.2 route-map RoutesToExt out
      neighbor l3clos-l route-map AdvLocal out
   exit
   default neighbor l3clos-l route-map AdvLocal out
   address-family ipv6
      neighbor l3clos-l route-map AdvLocal out
   exit
   vrf blue
      default neighbor 172.16.0.15 route-map MlagL3PeerIn in
      default neighbor 172.16.0.15 route-map MlagL3PeerOut out
      default neighbor 198.51.100.2 route-map RoutesFromExt-blue in
      default neighbor 198.51.100.2 route-map RoutesToExt-blue out
      address-family ipv4
         neighbor 172.16.0.15 route-map MlagL3PeerIn in
         neighbor 172.16.0.15 route-map MlagL3PeerOut out
         neighbor 198.51.100.2 route-map RoutesFromExt-blue in
         neighbor 198.51.100.2 route-map RoutesToExt-blue out
      exit
   exit
   vrf blue
      default neighbor fc01:a05:198:51:100::2 route-map RoutesFromExt-blue in
      default neighbor fc01:a05:198:51:100::2 route-map RoutesToExt-blue out
      default neighbor fc01:a05:fab::7 route-map MlagL3PeerIn in
      default neighbor fc01:a05:fab::7 route-map MlagL3PeerOut out
      address-family ipv6
         neighbor fc01:a05:198:51:100::2 route-map RoutesFromExt-blue in
         neighbor fc01:a05:198:51:100::2 route-map RoutesToExt-blue out
         neighbor fc01:a05:fab::7 route-map MlagL3PeerIn in
         neighbor fc01:a05:fab::7 route-map MlagL3PeerOut out
      exit
   exit
   vrf red
      default neighbor 172.16.0.17 route-map MlagL3PeerIn in
      default neighbor 172.16.0.17 route-map MlagL3PeerOut out
      default neighbor 198.51.100.2 route-map RoutesFromExt-red in
      default neighbor 198.51.100.2 route-map RoutesToExt-red out
      address-family ipv4
         neighbor 172.16.0.17 route-map MlagL3PeerIn in
         neighbor 172.16.0.17 route-map MlagL3PeerOut out
         neighbor 198.51.100.2 route-map RoutesFromExt-red in
         neighbor 198.51.100.2 route-map RoutesToExt-red out
      exit
   exit
   vrf red
      default neighbor fc01:a05:198:51:100::2 route-map RoutesFromExt-red in
      default neighbor fc01:a05:198:51:100::2 route-map RoutesToExt-red out
      default neighbor fc01:a05:fab::d route-map MlagL3PeerIn in
      default neighbor fc01:a05:fab::d route-map MlagL3PeerOut out
      address-family ipv6
         neighbor fc01:a05:198:51:100::2 route-map RoutesFromExt-red in
         neighbor fc01:a05:198:51:100::2 route-map RoutesToExt-red out
         neighbor fc01:a05:fab::d route-map MlagL3PeerIn in
         neighbor fc01:a05:fab::d route-map MlagL3PeerOut out
      exit
   exit
exit
commit
no configure session bgp_af_route_map_upgrade
configure terminal

Section: MLAG_PEER_POLICY_COMMUNITIES
Systems: leaf1 [505400DBF2ED, 172.20.98.11]
         leaf2 [5054004ED91F, 172.20.98.15]

AOS 3.2.0 adds support for OSPF connectivity points. As part of
loop-prevention, BGP communities are used to satisfy loop prevention
for mutual redistribution between OSPF and BGP. When OSPF redistributed
routes are advertised across an MLAG Peer link (SVI or L3 Peer link),
these community values must be preserved to be used by table-map
filtering. This upgrade plugin ensures mlag leafs send BGP communities
between each other. Future AOS releases will also increase usage of
communities.

router bgp 64514
   neighbor mlag-peer send-community
   neighbor mlag-peer send-community extended
exit
(END)
(new in version 3.2) Use the upgrade interactive menu to display config change summary, list all devices with config changes, dump all config changes, and so on. You can continue with the upgrade, or quit the upgrade.
AOS Upgrade: Interactive Menu
==================================================
<Device SN> - display config changes using a specific device serial number
(s)ummary - display config change summary
(l)ist - list all devices with config changes
(d)ump - dump all config changes to a file
(c)ontinue - continue with AOS upgrade
(q)uit - quit AOS upgrade
aos-upgrade (h for help)# s
aos-upgrade (h for help)# l
Blueprint: evpn-veos-virtual (test-evpn.veos.2485377892357-1139715540 - evpn-veos-virtual)
spine1 [5054003468A8, 172.20.98.12]
leaf1 [505400DBF2ED, 172.20.98.11]
spine2 [5054007AA372, 172.20.98.13]
leaf3 [5054000F612A, 172.20.98.14]
leaf2 [5054004ED91F, 172.20.98.15]
aos-upgrade (h for help)# d
Dumping all configs to /tmp/aos-upgrade-configs-2020.04.09-232155
BLUEPRINT: evpn-veos-virtual (test--evpn.veos.2485377892357-1139715540 - evpn-veos-virtual)
[1/5] 23:21:55 system: 5054003468A8
[2/5] 23:22:05 system: 505400DBF2ED
[3/5] 23:22:07 system: 5054007AA372
[4/5] 23:22:08 system: 5054000F612A
[5/5] 23:22:10 system: 5054004ED91F
aos-upgrade (h for help)#
Note
(d)ump: dumps all config changes to a file. As of version 3.2.1, the first dump takes about 4-5 seconds per device (i.e. 4-5 minutes for 60 devices) due to AOS bug (AOS-16728); subsequent dumps are faster.
Warning
Upgrading the Apstra server is a disruptive process. When upgrading to the same VM, if you select c to continue, you cannot rollback the upgrade. The only way to return to the previous version is to reinstall a new VM with the previous version and restore the database from backup.
Enter c at the menu to continue. A successful upgrade is confirmed in the output. You can also check the current version in the web interface at Platform > About.
aos-upgrade (h for help)# h
AOS Upgrade: Interactive Menu
==================================================
<Device SN> - display config changes using a specific device serial number
(s)ummary - display config change summary
(l)ist - list all devices with config changes
(d)ump - dump all config changes to a file
(c)ontinue - continue with AOS upgrade
(q)uit - quit AOS upgrade
aos-upgrade (h for help)# c
AOS[2020-04-09_23:23:21]: Loading AOS Device Installer image
AOS[2020-04-09_23:24:40]: Stopping upgrade pre-checker
cc4590d6a718: Loading layer [==================================================>]  65.58MB/65.58MB
8c98131d2d1d: Loading layer [==================================================>]  991.2kB/991.2kB
03c9b9f537a4: Loading layer [==================================================>]  15.87kB/15.87kB
1852b2300972: Loading layer [==================================================>]  3.072kB/3.072kB
583f37d385c1: Loading layer [==================================================>]  85.08MB/85.08MB
fc0141fa6aa5: Loading layer [==================================================>]  3.584kB/3.584kB
Loaded image: nginx:1.14.2-upload-echo
AOS[2020-04-09_23:25:11]: Removing installed (3.1.1-179) AOS package
AOS[2020-04-09_23:25:12]: Installing AOS 3.2.1-298 package
=====================================================================
AOS upgrade successful.
Please find logs at: /var/tmp/aos_upgrade_logs_20200409_232331.tgz
=====================================================================
admin@aos-server:~$
admin@aos-server:~$ service aos show_version
3.2.1-298
Apstra Server In-Place Upgrade To Version 3.1¶
After you’ve performed the pre-upgrade validation, download the software installer .run image (e.g. aos_3.1.1-179.run) and transfer it to the Apstra server.
admin@aos-server:~$ ls -l
total 810076
-rw------- 1 admin admin 829510572 Oct 15 20:45 aos_3.1.1-179.run
admin@aos-server:~$
Start the upgrade by running the installer .run (e.g. aos_3.1.1-179.run) image.
admin@aos-server:~$ sudo bash aos_3.1.1-179.run
Verifying archive integrity... All good.
Uncompressing AOS installer  100%
=====================================================================
Backup operation completed successfully.
=====================================================================
Loading AOS 3.1.1-179 image
Initiating upgrade pre-checker
AOS[2019-10-21_04:16:09]: Initiating docker library import
DONE
AOS[2019-10-21_04:16:29]: Preparing to retrieve data from running AOS Server.
DONE
AOS[2019-10-21_04:20:19]: Retrieving data from running AOS Server. This step can take up to 10 minutes
DONE
AOS[2019-10-21_04:26:52]: Importing retrieved state to AOS pre-checker. This step can take up to 20 minutes
DONE
Review any device configuration changes planned to be pushed after the upgrade. Page through the output, then press q, and answer y or n to continue.
Warning
This is a disruptive process. When upgrading to the same VM, if you select y to continue, you cannot rollback the upgrade. The only way to return to the previous version is to reinstall a new VM with the previous version and restore the AOS database from backup.
Device configuration will be updated for the following device(s):
================================================================================
Device: 525400144A92 (FQDN: l2-virtual-ext-001-leaf1, Management IP Address: 172.20.51.9)
================================================================================
Additional configuration that would be pushed on device agent upgrade:
{
  "hostname": [{
      "filename": "/etc/hostname",
      "data": [
        "l2-virtual-ext-001-leaf1"
      ]
    },
    {
      "command": "/bin/hostname -F /etc/hostname"
    },
    {
      "command": "systemctl reset-failed lldpd.service"
    },
    {
      "command": "/usr/sbin/service lldpd restart"
    }
  ],
  "interfaces": [{
      "filename": "/etc/network/interfaces",
      "data": [
        "# This file was generated by AOS. Do not edit by hand.",
        "#",
        "# The loopback interface",
        "auto lo",
        "iface lo inet loopback",
        "    address 10.0.0.0/32",
        "",
        "# Fabric interfaces",
        "auto swp1",
        "iface swp1",
        "    address 10.0.0.7/31",
        "    alias facing_spine1:swp1",
        "",
        "auto swp2",
        "iface swp2",
        "    address 10.0.0.15/31",
        "    alias facing_spine2:swp1",
        "",
        "# L3Edge interfaces",
:
Do you want to continue?(y/n):
Successful upgrade is confirmed. You can also check the version in the web interface at Platform > About.
Loading AOS Device Installer image
a1aa3da2a80a: Loading layer [==================================================>]  65.56MB/65.56MB
ef1a1ec5bba9: Loading layer [==================================================>]  991.2kB/991.2kB
6c3332381368: Loading layer [==================================================>]  15.87kB/15.87kB
e80c789bc6ac: Loading layer [==================================================>]  3.072kB/3.072kB
21e2934acccd: Loading layer [==================================================>]  86.59MB/86.59MB
43fd92dffcd9: Loading layer [==================================================>]  3.584kB/3.584kB
Loaded image: nginx:1.14.2-upload
Removing installed (3.0.1-96) AOS package
Installing AOS 3.1.1-179 package
=====================================================================
AOS upgrade successful.
Please find logs at: /var/tmp/aos_upgrade_logs_20191021_043637.tgz
=====================================================================
admin@aos-server:~$ service aos show_version
3.1.1-179
admin@aos-server:~$
Apstra Server Agents In-Place Upgrade¶
When you upgrade the Apstra server to a new version, you must also upgrade the device agents to match the Apstra server version.
Version 3.2¶
Log in to the web interface as admin.
Navigate to Devices > System Agents > Agents, select the devices that require upgrading, and click the ‘Install’ button. This re-initiates the installation process, which includes a version check. If there is a version mismatch, the agent will automatically be upgraded.
On the Dashboard, under Liveness, verify there are no anomalies shown for the upgraded devices.
Versions 3.1 and Earlier¶
Log in to the web interface as admin.
Navigate to Devices > System Agents > Agents, and enable the system agents again, in a rolling fashion, to trigger the upgrade of the system agents.
Wait for the upgrade to complete.
Verify that the new version is running on the system agents.
When no liveness anomalies are present, you have successfully updated the Apstra server.
Note
If you need to roll back to the previous version, you must build a new VM with the previous version and restore the configuration to that VM. For assistance, please contact Support.
AOS Cluster In-Place VM Upgrade¶
The upgrade process steps described above can also be used for an in-place VM upgrade of an AOS cluster. The workflow to upgrade an AOS cluster is as follows:
- Pre-Upgrade Validation
- Download the aos.run file on the Apstra controller and worker nodes.
- Install the aos.run file on the Apstra controller node first. Once the installation is complete, the worker nodes disconnect from the Apstra controller, and the state of the worker nodes changes to FAILED.
- For on-box system agents, refer to AOS System Agents In-Place Upgrade
- Then install the aos.run file on the worker AOS nodes. Once the installation is complete, the worker nodes reconnect to the Apstra controller and the state shows ACTIVE in AOS Cluster. (A condensed sketch of this ordering follows this list.)
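For example, using the installer image name from the earlier examples (hostnames and the image filename are placeholders for your environment):

# Controller node first; worker nodes transition to FAILED while it upgrades.
admin@controller:~$ sudo bash aos_3.2.1-298.run
# Then each worker node; the cluster state returns to ACTIVE when all are done.
admin@worker1:~$ sudo bash aos_3.2.1-298.run
admin@worker2:~$ sudo bash aos_3.2.1-298.run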
Note
FAILED state means that off-box agents and IBA probe containers located on that worker node will be unavailable, but devices managed by the off-box agents will remain in service.
Note
For an in-place upgrade, once the upgrade finishes, any off-box system agents immediately apply any device configuration changes resulting from the upgrade.
Upgrading Apstra Server onto Different VM (VM-VM)¶
The upgrade operation requires the following AOS System permissions:
- AOS Operating System admin user privileges.
- AOS admin user group permissions.
Upgrading the Apstra server onto a different VM consists of:
- Pre-Upgrade Validation
- Deploy New Apstra server
- Import State
- Apstra server cluster Upgrade to a Different VM
- Apstra server Rollback (in case of failure)
- System Agents Upgrade
- Proxy and DNS updates, Shutdown of old AOS Server
Pre-Upgrade Validation (Different VM)¶
Verify whether the upgrade is supported for your version. See Supported Upgrade Paths.
Validate system health by logging in as admin and running the following command:
admin@aos-server:~$ service aos status
* aos.service - LSB: Start AOS management system
   Loaded: loaded (/etc/init.d/aos; generated)
   Active: active (exited) since Mon 2019-01-21 22:11:36 UTC; 22h ago
     Docs: man:systemd-sysv-generator(8)
  Process: 6465 ExecStop=/etc/init.d/aos stop (code=exited, status=0/SUCCESS)
  Process: 6828 ExecStart=/etc/init.d/aos start (code=exited, status=0/SUCCESS)
For each blueprint, review service and probe anomalies, resolve open anomalies as much as possible, and take notes of the remaining ones.
Log in as root and perform a backup of the old Apstra server by running the aos_backup command (use sudo if not logged in as root):

root@aos-server:/# aos_backup
====================================================================
Backup operation completed successfully.
====================================================================
New AOS snapshot: 2019-01-08_15-24-15
Copy the backup files from /var/lib/aos/snapshot/<snapshot_name> to an external location.
Deploy New Apstra Server (Different VM)¶
Note
The Apstra server upgrade procedure does not migrate any customization made in the /etc/aos/aos.conf file. If you customized this file on the current Apstra server and the changes need to be migrated (for example, an updated metadb field to use a different network interface), you must re-apply the changes yourself on the new Apstra server VM.
Download the Apstra server image from the portal.
Deploy a new Apstra server VM and configure it with a new IP address (the same or a new FQDN may be used). See Apstra Server Installation for instructions.
Note
Make sure your new VM has sufficient server resources. See Apstra Server VM Resources for more information.
Verify that the new Apstra server has SSH access to the old Apstra server.
Verify that the new Apstra server can reach the System Agents. See Network Security Protocols.
Verify that the new Apstra server can reach any external system in use (e.g. NTP, DNS, vSphere server, LDAP or TACACS+ server).
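A few illustrative spot-checks from the new VM; the hostnames and IPs are placeholders, so substitute your own:

ssh admin@<old-apstra-server-ip> 'service aos show_version'   # SSH access to the old server
nc -zv <device-mgmt-ip> 22                                    # SSH reachability toward a managed device
nslookup <apstra-server-fqdn>                                 # DNS resolution
ntpq -p                                                       # NTP peer reachability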
Import State (Different VM)¶
Note
Avoid API/GUI write operations on the old Apstra server during and after the upgrade step (see next steps) as these updates won’t be automatically copied over to the new Apstra server.
Log in to the new Apstra server VM as admin.
Start the "Import State" operation on the new Apstra server by running the sudo aos_import_state script, where --ip-address is the old Apstra server IP address. You will be prompted for the SSH admin password and for the root password of the old Apstra server.

root@aos-server:/home/admin# aos_import_state --ip-address 10.10.10.10 --username admin
SSH password for remote AOS VM:
Root password for remote AOS VM:
AOS[20190108_232845]: Preparing to retrieve data from remote AOS Server.
AOS[20190108_232858]: Retrieving data from remote AOS Server. This step can take upto 10 minutes
AOS[20190108_232958]: Successfully retrieved data from remote AOS Server.
AOS[20190108_232959]: Importing retrieved state to AOS. This step can take upto 20 minutes
Note
The size of the blueprint and the Apstra server VM resources determine how long the import takes. As of version 3.2.2-12, if the database import exceeds twenty minutes, the operation may fail with a "Timed Out" error. In that case, the timeout value (in seconds) can be extended as follows.
root@aos-server:/home/admin# AOS_UPGRADE_DOCKER_EXEC_TIMEOUT=3000 aos_import_state --ip-address 10.10.10.10 --username admin
Contact Apstra Global Support for more information.
You will be prompted to approve the device configuration changes. Review any device configuration changes planned to be pushed after the upgrade.
Note
To come out of the config review mode and continue, press q.
================================================================================
Device: 525400001DD0 (FQDN: spine-1, Management IP Address: 172.20.182.14)
================================================================================
Additional configuration that would be pushed on device agent upgrade:
interface Ethernet1
   description facing_rack-002-leaf1:Ethernet3
!
interface Ethernet2
   description facing_rack-001-leaf2:swp4
!
interface Ethernet3
   description facing_rack-001-leaf1:swp2
!
================================================================================
================================================================================
Device: 52540009BE9D (FQDN: spine-2, Management IP Address: 172.20.182.13)
================================================================================
Additional configuration that would be pushed on device agent upgrade:
interface Ethernet1
   description facing_rack-002-leaf1:Ethernet4
!
interface Ethernet2
   description facing_rack-001-leaf2:swp3
!
interface Ethernet3
   description facing_rack-001-leaf1:swp4
!
================================================================================
All existing onbox system agents will be disabled
AOS controller will not automatically upgrade AOS device agents.
Use system agents to upgrade AOS device agents.
The incremental configurations that will be pushed to the device is saved at /tmp/tmpJL92fp
(new in version 3.2) Use the Upgrade Interactive Menu to display config change summary, list all devices with config changes, dump all config changes, and so on. You can continue with the upgrade, or quit the upgrade from the menu.
AOS Upgrade: Interactive Menu
==================================================
<Device SN> - display config changes using a specific device serial number
(s)ummary - display config change summary
(l)ist - list all devices with config changes
(d)ump - dump all config changes to a file
(c)ontinue - continue with AOS upgrade
(q)uit - quit AOS upgrade
aos-upgrade (h for help)#
(Versions 3.1 and earlier) Approve the configuration changes and proceed by entering y at the prompt (enter n to interrupt the upgrade).
root@aos-server:/home/admin# aos_import_state --ip-address 172.20.145.3 --username admin
SSH password for remote AOS VM:
Root password for remote AOS VM:
AOS[20190109_214903]: Preparing to retrieve data from remote AOS Server.
AOS[20190109_214915]: Retrieving data from remote AOS Server. This step can take upto 10 minutes
AOS[20190109_214926]: Successfully retrieved data from remote AOS Server.
AOS[20190109_214927]: Importing retrieved state to AOS. This step can take upto 20 minutes
Do you want to continue?(y/n):
Verify that the prompt shows “Importing state successful” message.
AOS[20190109_221646]: Importing state successful.
AOS[20190109_221646]: Please find logs at /var/tmp/aos_upgrade_logs_20190109_221646.tgz
Log in as admin into the new Apstra server web interface and verify that all system agents are in “Disabled” mode.
Note
At this point, it is expected that liveness anomalies are raised against the system agents until the entire upgrade is complete.
Cluster Upgrade to Different VM (using Import State)¶
The aos_import_state script can be used for two scenarios when upgrading a cluster.
Scenario-1: New VM for the controller node with reuse of worker node VMs.
Scenario-2: New VM for the controller node with new worker node VMs.
For Scenario-1, when a new VM is used for the controller node and the worker VMs are reused, the workflow is:
- Pre-Upgrade Validation
- Deploy the new Apstra server VM only for the controller node with the required version and resources. See Deploy New Apstra Server
- Import the state from the running instance to the new instance by running the aos_import_state script on the new controller node. Once the import is complete, the upgraded instance shows the state of the worker nodes as INACTIVE. See Import State.
- Upgrade system agents. See System Agents Upgrade.
- Download the .run file for the new version in the worker VMs.
- Execute the .run file for the new version in the worker nodes. Once upgrade is complete for worker nodes, the cluster state changes to ACTIVE.
- Remove the old controller VM.
For Scenario-2, when new VMs are used for both the controller node and the worker nodes, the workflow is:
Create new Apstra server VMs for the controller and worker nodes with the required version. For example, if the existing deployment has 2 worker VMs and a controller, create 3 new VMs. See Deploy New Apstra Server.
Designate one of the new VMs to be the new controller.
Execute aos_import_state from the new controller VM. Specify the --cluster-node-address-mapping argument to map between the old cluster worker VMs and the new cluster worker VMs. See Import State. For example, if there were two worker nodes with IPs 1.1.1.1 and 1.1.1.2 in the old cluster and the new cluster worker node VMs have IPs 2.2.2.1 and 2.2.2.2, specify the arguments as:

aos_import_state --ip-address <old_aos_controller> --username admin --password admin --cluster-node-address-mapping 1.1.1.1 2.2.2.1 --cluster-node-address-mapping 1.1.1.2 2.2.2.2
Upgrade system agents. See System Agents Upgrade.
After the aos_import_state script and the system agents upgrade are complete, the new controller node shows the cluster with the new worker node IPs.
Remove the old controller and worker VM nodes.
Note
It is possible to specify only a subset of worker VMs in the aos_import_state step above. That is, in the above scenario, if the argument --cluster-node-address-mapping 1.1.1.2 2.2.2.2 is not specified, then the new Apstra controller VM comes up with worker nodes 2.2.2.1 and 1.1.1.2. The .run file can then be used to upgrade the worker node at 1.1.1.2.
Apstra Server Rollback (Different VM)¶
In the case of a failed Apstra server upgrade, or other malfunction, you may roll back to the old Apstra server VM as long as the system agents upgrade has not yet started. To do so:
- Gracefully shutdown the new Apstra server.
- Start the old Apstra server VM.
System Agents Upgrade (Different VM)¶
The agent upgrade process for a different VM is the same as for a same-VM upgrade. See System Agents Upgrade.
Proxy and DNS updates, Shutdown of old Apstra Server (Different VM)¶
- Update any DNS entries to include the new Apstra server IP/FQDN based on your configuration. (A quick verification is sketched after this list.)
- If a proxy is used for the Apstra server, make sure the proxy now points to the new Apstra server.
- Gracefully shutdown the old Apstra server VM. You have successfully upgraded the Apstra server.
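To confirm that the DNS update took effect, a quick lookup from a client helps; the hostname below is an example:

dig +short aos-server.example.com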
Updating SSH Host Keys¶
Warning
In versions 3.2.x and earlier, due to the way the Apstra server VM is packaged, all Apstra server instances installed from the same OVA/QCOW image have the same SSH host keys.
As a result, this issue allows an attacker to more easily bypass remote host verification when a user connects by SSH to what is believed to be a previously used host but is really the attacker’s host performing a man-in-the-middle attack.
To update the SSH host keys on an Apstra server, follow this procedure to generate new SSH host keys for a new or existing Apstra server VM:
- Remove the existing SSH host keys.
admin@aos-server:~$ sudo rm /etc/ssh/ssh_host*
- Configure new SSH host keys.
admin@aos-server:~$ sudo dpkg-reconfigure openssh-server
Creating SSH2 RSA key; this may take some time ...
2048 SHA256:EWRFcs4V6BmOILR3T2Psxng1uE0qXQ/z9IKkXrnLpJs root@aos-server (RSA)
Creating SSH2 ECDSA key; this may take some time ...
256 SHA256:THaXEia8VW6Jfw6OBXFegu1Cav0zcGSVOy9RkNOPxf4 root@aos-server (ECDSA)
Creating SSH2 ED25519 key; this may take some time ...
256 SHA256:0HOn0nnF+7oRaF5HggI4vWeyxT+UNsHcbvNpBJdaKhQ root@aos-server (ED25519)
- Restart the SSH server process.
admin@aos-server:~$ sudo systemctl restart ssh
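Optionally, confirm that fresh host keys were generated by listing their fingerprints:

for f in /etc/ssh/ssh_host_*_key.pub; do
    ssh-keygen -lf "$f"    # print the fingerprint of each new host key
done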
Q & A¶
- Why are System Agents temporarily in Disabled mode in the new Apstra server after the upgrade? This tells the Apstra server not to perform any commit onto the System Agents while they are still running an older version. When you enable the System Agents again in the new Apstra server, they are upgraded to match the Apstra server version.
- Will the device configurations change after the Apstra server upgrade? Device configuration changes may be suggested when running the upgrade script; manually review these changes and commit them if accepted.
- How does getting a new IP impact existing VMware vSphere integration? Make sure any firewall between the new Apstra server and the vSphere server allows the connection. There is no other impact, as the Apstra server initiates the connection with the vSphere server, not vice versa.