Upgrade Routing Director
The upgrade functionality provided by Deployment Shell enables you to upgrade your Routing Director installation and all the applications running on it to the latest release.
You can upgrade to the current Juniper Routing Director Release 2.6.0 from the following releases. The upgrade paths to Release 2.6.0 from your existing older release are listed below.
- 2.5.0 → 2.6.0
- 2.4.0 or 2.4.1 → 2.6.0
- 2.3.0 → 2.4.1 or 2.5.0 → 2.6.0
- 2.2.0 → 2.4.1 → 2.6.0
- 2.1.0 → 2.2.0 → 2.4.1 → 2.6.0
Here, → indicates a direct upgrade.
The upgrade process is automated by a set of Deployment Shell commands that carry out the required system checks, retrieve the upgrade package, and execute the upgrade on the cluster nodes. You can upgrade using a file that is either downloaded locally to your primary node or downloaded directly from a remote URL.
During an upgrade, do not perform any change activities in the system, such as onboarding devices, provisioning services, or changing other configurations. The upgrade automatically restarts all components, which causes a short period of unavailability. The upgrade does not affect traffic through the network, and devices and services are not reconfigured after the upgrade completes.
We recommend that you back up your configuration before upgrading. For information on backing up your current configuration, see Back Up Release 2.5.0 or Back Up Release 2.4.0.
You can upgrade to the current release using the upgrade_routing-director-release-build-ID.tgz compressed archive file. If your existing release is Release 2.5.0, you can also upgrade using the upgrade_routing-director-release-build-ID.img disk image file. If you are downloading the upgrade file locally, note that the .img file is larger than the .tgz file. Both the .img and .tgz files are deleted if the cluster upgrade is successful.
Perform the following steps to upgrade to Routing Director Release 2.6.0:
- Upgrade Prerequisites—Ensure that all upgrade prerequisites are met.
- Upgrade the Routing Director Deployment Cluster—Upgrade the cluster using either the Upgrade using the local filename Option or the Upgrade using the remote url Option.
- Upgrade Deployment Shell and the OVA System Files—Upgrade Deployment Shell and the OVA system files on all the cluster nodes.
- Post Cluster Upgrade Tasks—Perform all the post cluster upgrade tasks to complete the upgrade process.
Upgrade Prerequisites
Before you upgrade the Routing Director deployment cluster, ensure the following:
- Deployment Shell is accessible and operational.
- The VM disk size is increased to match the recommended system requirements. Perform the steps described in Increase VM Disk Size.
- The cluster nodes have the following free disk space available (see the sketch after this list for one way to check free space):
  - If you are using the .img image upgrade file to upgrade your cluster:
    - All the primary nodes must have 15% of the total disk space + the same amount of space as the upgrade file size free.
    - The worker node must have 15% of the total disk space free.
  - If you are using the .tgz compressed upgrade file to upgrade your cluster:
    - The primary node from which the cluster was deployed must have 15% of the total disk space + three times the upgrade file size free.
    - The other two primary nodes must have 15% of the total disk space + the same amount of space as the upgrade file size free.
    - The worker node must have 15% of the total disk space free.
- (Optional) Check the current build and OVA version of your existing release from Deployment Shell using the show deployment version command.
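The following is a minimal sketch (not part of the official procedure) of one way to check free disk space on a node from the Linux root shell. The 15% threshold, the three-times multiplier for the installer primary node with a .tgz file, and the /root/epic/temp location come from the requirements above; the upgrade file name is a placeholder, and you should adjust the multiplier for the other nodes and file types listed above.

# Sketch: check free disk space on a cluster node before upgrading (run as root).
# Example shown for the installer primary node using a .tgz upgrade file
# (requirement: 15% of the total disk space + three times the upgrade file size).
UPGRADE_FILE=/root/epic/temp/upgrade_routing-director-release-build-ID.tgz   # placeholder name

TOTAL_KB=$(df --output=size -k / | tail -1)     # total disk size, in KB
AVAIL_KB=$(df --output=avail -k / | tail -1)    # available disk space, in KB
FILE_KB=$(( $(stat -c %s "$UPGRADE_FILE") / 1024 ))

REQUIRED_KB=$(( TOTAL_KB * 15 / 100 + 3 * FILE_KB ))

if [ "$AVAIL_KB" -ge "$REQUIRED_KB" ]; then
    echo "OK: enough free disk space for the upgrade"
else
    echo "WARNING: free up disk space before upgrading"
fi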
Upgrade the Routing Director Deployment Cluster
Perform the following steps if you want to upgrade to the current 2.6.0 release from a supported older release.
You can upgrade your installation and all the applications running on it using any one of the local filename or remote url options:
Upgrade using the local filename Option
Use this option for air-gapped environments where your Routing Director installation does not have access to the Internet. However, you need to be able to copy the upgrade and upgrade signature files to your primary node, for example, by using scp as sketched below.
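The following is a sketch only; the node name primary1 and the exact file names are placeholders. From a host that can reach the primary node and that has the files, you could copy them with scp:

# Copy the upgrade file and, optionally, its signature file to the installer
# primary node. "primary1" and the file names below are placeholders.
scp upgrade_routing-director-release-build-ID.img \
    upgrade_routing-director-release-build-ID.img.psig \
    root@primary1:/root/epic/temp/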
To upgrade to release 2.6.0, perform the following steps.
1. Log in as the root user to the primary node from which the current cluster was installed. You are logged in to Deployment Shell.
2. Type exit to exit from Deployment Shell to the Linux root shell.
3. Copy either the upgrade_routing-director-release-build-ID.img file or the upgrade_routing-director-release-build-ID.tgz file to the /root/epic/temp location on the node.
4. (Optional) If you want to validate the signature of the upgrade file, copy the corresponding upgrade_routing-director-release-build-ID.img.psig signature file or the upgrade_routing-director-release-build-ID.tgz.psig signature file to the same /root/epic/temp folder.
5. (Optional) Use the gpg --verify img-or-tgz-psig-file img-or-tgz-file command to validate the digital signature of the upgrade file. For example, use the following command to validate the upgrade .img file:
root@primary1:~/epic/temp# gpg --verify routing-director-release-2.6.0.9889.g1ea61da822.img.psig routing-director-release-2.6.0.9889.g1ea61da822.img
gpg: Signature made Wed Sep 03 17:05:12 2025 UTC
gpg:                using RSA key 4B7B22C9C4FA32CF
gpg: Good signature from "Northstar Paragon Automation 2024 ca@juniper.net" [ultimate]
Here, primary1 is the installer primary node. Validation takes a couple of minutes to complete.
6. Type cli to enter Deployment Shell.
7. Use the following command to upgrade the Routing Director deployment cluster:
request paragon cluster upgrade local filename upgrade_routing-director-release-build-ID.img-or-tgz
For example, use the following command to upgrade using the .img file:
root@primary1> request paragon cluster upgrade local filename routing-director-release-2.6.0.9889.g1ea61da822.img
Checking paragon cluster system health before proceeding with cluster upgrade. This will take a minute...
Warning: 'psql' is not installed or not in your PATH. Skipping Postgres operations.
2025-09-03 17:07:39 Health status checking...
======================================================
Overall cluster status
======================================================
GREEN
2025-09-03 17:19:18 Health status checking completed!
=======================================================
Paragon cluster is healthy. Proceed with Paragon cluster upgrade.
Using local file /root/epic/temp/upgrade_routing-director-release-2.6.0.9889.g1ea61da822.img for upgrade
Upgrade is in progress ...
Updated to build: routing-director-release-2.6.0.9889.g1ea61da822
Paragon Cluster upgrade is successful!
Run 'request paragon health-check' command to check current system health with upgraded Paragon cluster.
Please continue to primary host node to upgrade Paragon-shell Shell and update OVA system files by:
/root/epic/upgrade_paragon-shell_ova-system.sh
Here, primary1 is the installer primary node. The upgrade command checks the health of the cluster before upgrading. If the cluster health check returns a GREEN status, the cluster is upgraded, requiring no further input. If the cluster health check returns a RED status, the cluster is not upgraded. If the cluster health check returns an AMBER status, you are prompted to choose whether to continue or stop the upgrade.
Additional upgrade command options:
You can also use any one or more of the following command options along with the upgrade command while upgrading:
- no-confirm—Usage example:
request paragon cluster upgrade local filename upgrade_routing-director-release-build-ID.img-or-tgz no-confirm
Use the no-confirm option to ignore the AMBER status and continue with the upgrade without being prompted. However, the no-confirm option does not ignore a RED status.
- detach-process—Usage example:
request paragon cluster upgrade local filename upgrade_routing-director-release-build-ID.img-or-tgz detach-process
As the upgrade process takes over an hour to complete, you can let the upgrade run in the background and free up the CLI screen for other tasks. The command runs the initial health checks and then proceeds with the upgrade. Once the upgrade process starts, the process is detached and moved into the background, and you are returned to the command prompt. The upgrade output is logged in the /epic/temp/upgrade.log file. To monitor the status of the upgrade process and print the output onscreen, use the monitor start /epic/temp/upgrade.log command. When the upgrade process completes, a success message similar to the following is displayed on all the cluster nodes:
Paragon Cluster upgrade is successful!
- Run 'request paragon health-check' command to check current system health with upgraded Paragon cluster.
- Please continue to primary host node to upgrade Paragon-shell and update OVA system files by: /root/epic/upgrade_paragon-shell_ova-system.sh
If you get disconnected from the VM during the upgrade process, you can periodically check the upgrade log file for the status of the upgrade.
- input—Usage example:
request paragon cluster upgrade local filename upgrade_routing-director-release-build-ID.img-or-tgz input input-string
Use the input option to pass additional Ansible input parameters to the upgrade command. For example, if you want to enable verbose logging during the upgrade, use the -v option:
request paragon cluster upgrade local filename upgrade_routing-director-release-build-ID.img-or-tgz input "-v"
Your Routing Director installation and all the applications running on it are upgraded.
Note that the upgrade process takes over an hour to complete. If you get disconnected from the VM during the upgrade process, you can periodically check the upgrade log file (see the log-monitoring sketch at the end of this procedure) until you see output similar to the following:
root@primary1:~# cat /root/upgrade/upgrade.log
<output snipped>
…
PLAY [Mark installation as complete] *******************************************
TASK 3420 [Record installation status] *****************************************
Wednesday 03 September 2025 19:35:31 +0000 (0:00:01.516) 2:15:11.072 ***
changed: [10.1.2.3]
PLAY RECAP *********************************************************************
10.1.2.3 : ok=2664 changed=672 unreachable=0 failed=0 rescued=0 ignored=2
10.1.2.4 : ok=221 changed=40 unreachable=0 failed=0 rescued=0 ignored=0
10.1.2.5 : ok=221 changed=40 unreachable=0 failed=0 rescued=0 ignored=0
10.1.2.6 : ok=206 changed=37 unreachable=0 failed=0 rescued=0 ignored=0
Wednesday 03 September 2025 19:35:33 +0000 (0:00:01.826) 2:15:12.899 ***
===============================================================================
user-registry : Push Docker Images from local registry to paragon registry - 627.52s
jcloud/airflow2 : Install Helm Chart ---------------------------------- 230.85s
Install Helm Chart ---------------------------------------------------- 196.00s
delete existing install config-map - if any --------------------------- 184.01s
Save installer config to configmap ------------------------------------ 173.67s
Create Kafka Topics --------------------------------------------------- 133.83s
user-registry : Push Helm Charts to paragon registry ------------------ 115.41s
jcloud/papi : Install Helm Chart -------------------------------------- 101.72s
paragon-shell-config : Load paragon-shell initial configs on master node -- 76.12s
kubernetes/multi-master-rke2 : restart rke2 server on 1st master ------- 76.07s
systemd ---------------------------------------------------------------- 75.46s
kubernetes/addons/helper-commands : Install Pathfinder Utility scripts -- 59.76s
Install Helm Chart ----------------------------------------------------- 54.64s
jcloud/common : Create jcloud namespaces ------------------------------- 50.59s
kubernetes/multi-master-rke2 : restart rke2 server on other master ----- 49.66s
kubernetes/addons/helper-commands : Copy profiler to /opt/paragon/bin -- 48.08s
kubernetes/addons/resource-reservation : Apply resource-reservation kube-cfg file -- 45.98s
Wait and verify readiness for each workload ---------------------------- 42.67s
kubernetes/addons/resource-reservation : patch default sa in all ns ---- 41.98s
kubernetes/addons/arangodb : Wait for Arango Operator fully terminated -- 35.63s
Playbook run took 0 days, 2 hours, 15 minutes, 12 seconds
Application Cluster upgraded to version build: routing-director-release-2.6.0.9889.g1ea61da822
Routing Director cluster upgrade is successful on host node!
8. Upgrade Deployment Shell and the OVA system files. See Upgrade Deployment Shell and the OVA System Files.
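If you used the detach-process option or were disconnected during the upgrade, the following is a minimal sketch of how you could follow the upgrade log from the Linux root shell (exit Deployment Shell first); /epic/temp/upgrade.log is the log file named in the detach-process option above:

# Follow the upgrade log from the Linux root shell.
tail -f /epic/temp/upgrade.log

# Or check periodically whether the completion message has been written:
grep -i "upgrade is successful" /epic/temp/upgrade.log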
Upgrade using the remote url Option
Use this option if your Routing Director installation has access to the Internet and the upgrade file is in a remote location.
To upgrade to Release 2.6.0, perform the following steps.
1. Log in as the root user to the primary node from which your existing cluster was installed. You are logged in to Deployment Shell.
2. Use the following command to upgrade the Routing Director deployment cluster:
request paragon cluster upgrade remote url "https://juniper.software.download.site/upgrade_routing-director-release-build-ID.img.or.tgz?query_string"
For example, use the following command to upgrade using the .img file:
root@primary1> request paragon cluster upgrade remote url "https://cdn.juniper.net/software/routing-director-images/routing-director-release-2.6.0.9889.g1ea61da822.img?query_string"
Checking paragon cluster system health before proceeding with cluster upgrade. This will take a minute...
Warning: 'psql' is not installed or not in your PATH. Skipping Postgres operations.
2025-08-28 19:07:39 Health status checking...
======================================================
Overall cluster status
======================================================
GREEN
2025-08-28 19:18:59 Health status checking completed!
=======================================================
Paragon cluster is healthy. Proceed with Paragon cluster upgrade.
Upgrading paragon cluster from https://cdn.juniper.net/software/routing-director-images/
Downloading upgrade file routing-director-release-2.6.0.9889.g1ea61da822.img
Download file size: 39,370,883,072 bytes
Current disk Usage:
Total: 422,144,110,592 bytes
Used: 170,895,372,288 bytes
Available: 232,979,087,360 bytes
Please wait for current download to finish... (File is large. It may take a while.)
Upgrade tarball file is downloaded.
Upgrade is in progress ...
Updated to build: routing-director-release-2.6.0.9889.g1ea61da822
Paragon Cluster upgrade is successful!
Run 'request paragon health-check' command to check current system health with upgraded Paragon cluster.
Please continue to primary host node to upgrade Paragon-shell and update OVA system files by:
/root/epic/upgrade_paragon-shell_ova-system.sh
Here, primary1 is the installer primary node. The upgrade command checks the health of the cluster before upgrading. If the cluster health check returns a GREEN status, the cluster is upgraded, requiring no further input. If the cluster health check returns a RED status, the cluster is not upgraded. If the cluster health check returns an AMBER status, you are prompted to choose whether to continue or stop the upgrade.
Additional upgrade command options:
You can also use any one or more of the following command options along with the upgrade command while upgrading:
- no-confirm—Usage example:
request paragon cluster upgrade remote url "https://juniper.software.download.site/upgrade_routing-director-release-build-ID.img.or.tgz?query_string" no-confirm
Use the no-confirm option to ignore the AMBER status and continue with the upgrade without being prompted. However, the no-confirm option does not ignore a RED status.
- detach-process—Usage example:
request paragon cluster upgrade remote url "https://juniper.software.download.site/upgrade_routing-director-release-build-ID.img.or.tgz?query_string" detach-process
As the upgrade process takes over an hour to complete, you can let the upgrade run in the background and free up the CLI screen for other tasks. The command runs the initial health checks and then proceeds with the upgrade. Once the upgrade process starts, the process is detached and moved into the background, and you are returned to the command prompt. The upgrade output is logged in the /epic/temp/upgrade.log file. To monitor the status of the upgrade process and print the output onscreen, use the monitor start /epic/temp/upgrade.log command. When the upgrade process completes, a success message similar to the following is displayed on all the cluster nodes:
Paragon Cluster upgrade is successful!
- Run 'request paragon health-check' command to check current system health with upgraded Paragon cluster.
- Please continue to primary host node to upgrade Paragon-shell and update OVA system files by: /root/epic/upgrade_paragon-shell_ova-system.sh
- disk-saving—Usage example:
request paragon cluster upgrade remote url "https://juniper.software.download.site/upgrade_routing-director-release-build-ID.img.or.tgz?query_string" disk-saving
Use this option to delete the downloaded upgrade_routing-director-release-build-ID.filetype file from the primary node as soon as it is extracted. The upgrade command downloads the upgrade file from the remote location and extracts its contents at the beginning of the upgrade process. This option deletes the downloaded file as soon as it is extracted, to free up space on the primary node. This option is applicable only to .tgz files.
The advantage of using this option is that you need less free space for the upgrade. By default, the minimum free space required is 15% of the total disk space + three times the upgrade file size. With this option, you need a minimum free space of 15% of the total disk space + two times the upgrade file size (see the worked example at the end of this procedure).
- input—Usage example:
request paragon cluster upgrade remote url "https://juniper.software.download.site/upgrade_routing-director-release-build-ID.img.or.tgz?query_string" input input-string
Use the input option to pass additional Ansible input parameters to the upgrade command. For example, if you want to enable verbose logging while upgrading, use the -v option:
request paragon cluster upgrade remote url "https://juniper.software.download.site/upgrade_routing-director-release-build-ID.img-or-tgz?query_string" input "-v"
Your Routing Director installation and all the applications running on it are upgraded.
Note that the upgrade process takes a little over an hour to complete. If you get disconnected from the VM during the upgrade process, you can periodically check the upgrade log file until you see output similar to the following:
root@primary1:~# cat /root/upgrade/upgrade.log
<output snipped>
…
PLAY [Mark installation as complete] *******************************************
TASK 3420 [Record installation status] *****************************************
Wednesday 03 September 2025 19:35:31 +0000 (0:00:01.516) 2:15:11.072 ***
changed: [10.1.2.3]
PLAY RECAP *********************************************************************
10.1.2.3 : ok=2664 changed=672 unreachable=0 failed=0 rescued=0 ignored=2
10.1.2.4 : ok=221 changed=40 unreachable=0 failed=0 rescued=0 ignored=0
10.1.2.5 : ok=221 changed=40 unreachable=0 failed=0 rescued=0 ignored=0
10.1.2.6 : ok=206 changed=37 unreachable=0 failed=0 rescued=0 ignored=0
Wednesday 03 September 2025 19:35:33 +0000 (0:00:01.826) 2:15:12.899 ***
===============================================================================
user-registry : Push Docker Images from local registry to paragon registry - 627.52s
jcloud/airflow2 : Install Helm Chart ---------------------------------- 230.85s
Install Helm Chart ---------------------------------------------------- 196.00s
delete existing install config-map - if any --------------------------- 184.01s
Save installer config to configmap ------------------------------------ 173.67s
Create Kafka Topics --------------------------------------------------- 133.83s
user-registry : Push Helm Charts to paragon registry ------------------ 115.41s
jcloud/papi : Install Helm Chart -------------------------------------- 101.72s
paragon-shell-config : Load paragon-shell initial configs on master node -- 76.12s
kubernetes/multi-master-rke2 : restart rke2 server on 1st master ------- 76.07s
systemd ---------------------------------------------------------------- 75.46s
kubernetes/addons/helper-commands : Install Pathfinder Utility scripts -- 59.76s
Install Helm Chart ----------------------------------------------------- 54.64s
jcloud/common : Create jcloud namespaces ------------------------------- 50.59s
kubernetes/multi-master-rke2 : restart rke2 server on other master ----- 49.66s
kubernetes/addons/helper-commands : Copy profiler to /opt/paragon/bin -- 48.08s
kubernetes/addons/resource-reservation : Apply resource-reservation kube-cfg file -- 45.98s
Wait and verify readiness for each workload ---------------------------- 42.67s
kubernetes/addons/resource-reservation : patch default sa in all ns ---- 41.98s
kubernetes/addons/arangodb : Wait for Arango Operator fully terminated -- 35.63s
Playbook run took 0 days, 2 hours, 15 minutes, 12 seconds
Application Cluster upgraded to version build: routing-director-release-2.6.0.9889.g1ea61da822
Routing Director cluster upgrade is successful on host node!
3. Upgrade Deployment Shell and the OVA system files. See Upgrade Deployment Shell and the OVA System Files.
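As a worked example of the free-space arithmetic for the disk-saving option (a sketch only; the total disk size is taken from the sample output above, and the .tgz file size is a hypothetical value):

# Minimum free space needed on the installer primary node for a remote .tgz upgrade.
TOTAL_BYTES=422144110592    # total disk size (example value from the sample output above)
FILE_BYTES=20000000000      # hypothetical .tgz upgrade file size (~20 GB)

# Default: 15% of the total disk space + three times the upgrade file size.
DEFAULT_REQUIRED=$(( TOTAL_BYTES * 15 / 100 + 3 * FILE_BYTES ))

# With the disk-saving option: 15% of the total disk space + two times the file size.
DISK_SAVING_REQUIRED=$(( TOTAL_BYTES * 15 / 100 + 2 * FILE_BYTES ))

echo "Default minimum free space:     $DEFAULT_REQUIRED bytes"
echo "disk-saving minimum free space: $DISK_SAVING_REQUIRED bytes"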
Upgrade Deployment Shell and the OVA System Files
When your Routing Director installation and all the applications running on it are successfully upgraded, you must upgrade Deployment Shell and the OVA system files.
1. Type exit to exit from Deployment Shell on the installer primary node to the Linux root shell.
2. Execute the Deployment Shell upgrade script:
root@primary1:~# bash /root/epic/upgrade_paragon-shell_ova-system.sh
Upgrading paragon-shell...
Updating paragon-shell for primary1......
Container paragon-shell  Stopping
Container paragon-shell  Stopped
Container paragon-shell  Removing
Container paragon-shell  Removed
paragon-shell Pulling
....
<output snipped>
....
primaryname    update-status
primary1       ok
primary3       ok
primary2       ok
primary4       ok
paragon-shell upgrade successful!
Updating OVA system files...
OVA system files update successful!
Deployment Shell and the OVA system files are upgraded.
3. (Optional) Check the build and OVA version of your upgraded cluster from Deployment Shell.
root@primary> show deployment version
ova: 20241017_1231
ova-patch: 20250903_0026_upgrade_version
build: routing-director-release-2.6.0.9889.g1ea61da822
Client Version: v1.29.6
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.33.2+rke2r1
4. Ensure that the upgraded cluster is healthy and operational. Execute the request deployment health-check command before you proceed. The Overall Cluster Status must be GREEN (see the sketch after these steps).
5. Proceed to perform the post cluster upgrade tasks. See Post Cluster Upgrade Tasks.
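As a final illustration (a sketch only; the exact health-check output is not reproduced here, and primary1 is a placeholder node name), run the health check from Deployment Shell on the installer primary node and confirm the overall cluster status before you continue:

# Run from Deployment Shell on the installer primary node.
root@primary1> request deployment health-check

# The health check prints an overall cluster status banner similar to the one shown
# in the upgrade output above; it must report GREEN before you proceed with the
# post cluster upgrade tasks.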