Restore Routing Director
This topic describes the restore functionality available for Routing Director.
You can use the restore functionality available in Deployment Shell to restore your backed-up Routing Director deployment cluster and application configuration data.
Restore Using Deployment Shell
Use the restore functionality in Deployment Shell to restore your Routing Director configuration from a specific backup configuration folder.
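A minimal sketch of the invocation, based on the `request deployment restore` command shown later in this topic. It is written as a dry run that prints the command rather than executing it; the backup ID is illustrative and must be replaced with an ID from your own system.

```shell
# Substitute a backup ID from your own deployment; this one is illustrative.
BACKUP_ID=20260317-147334
# Dry run: prints the Deployment Shell command to run on the primary node.
echo "request deployment restore backup-id ${BACKUP_ID}"
```

Run the printed command from Deployment Shell on the primary node during a maintenance window.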
Caveats of the Restore Process
- When you back up a cluster and restore it, use the same VIP addresses as the cluster that was backed up. If the restored cluster uses a new set of VIP addresses, you must change the VIP addresses configured on the devices managed by the Routing Director deployment cluster.
- The restore operation returns the network configuration to the state captured in the backup folder. If the network configuration has changed since the backup was taken, for example because new devices were onboarded or new service orders were executed, the network configuration in Routing Director might differ from the actual network state after the restore. To minimize this mismatch, we recommend that you take regular periodic backups, or take a backup after every network intent change.
- You cannot restore data from a release that differs from the currently installed release of Routing Director.
- Because a backup does not store certificates or infrastructure services configurations, you must keep that information unchanged between the backup and the restore.
- Resources allocated to the network are not preserved after a restore. You must release any resources that were allocated during the window between taking the backup and performing the restore.
- A restore operation requires a maintenance window. Expect all functionality, including access to the GUI, to be unavailable during this time.
Restore TimescaleDB Pod Failure Scenario
If the restore process fails because of a TimescaleDB pod error, you must manually copy the TimescaleDB backup file to the correct node VM.
The TimescaleDB backup file is stored on the node where the TimescaleDB pod is running. If that pod later moves to another node, the restore operation fails. For example:
root@Primary1> request deployment restore backup-id 20260317-147334
Current EOP version:routing-director-release-2.8.0.10975.ge4a31d62bf
Backup EOP version:routing-director-release-2.8.0.10975.ge4a31d62bf
Backup file not found in pod, searching cluster nodes...
error: Failed to restore the backup 20260317-147334
error: TimescaleDB backup file found on node 'vm1' but timescaledb pod is running on node 'vm2'. Please copy /export/timescaledb_backup/paa_metrics_db.dump from node 'vm1' to node 'vm2' and retry.
In such a case, you must manually copy the TimescaleDB backup file, /export/timescaledb_backup/paa_metrics_db.dump, from the node where the file is stored to the /export/paragon-shell/backup location on the node from which you want to restore Routing Director.
Note that the backup file needs to be copied only once. After the file is copied to the correct path, retry the restore operation; the issue does not recur.
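The copy step can be sketched as follows, assuming root SSH access between the cluster nodes. The node names vm1 and vm2 are taken from the example error message; the destination path mirrors the source path, as that message suggests. Substitute the node names and path reported by your own error message.

```shell
# Node names from the example error message; substitute your own.
SRC_NODE=vm1   # node that holds the TimescaleDB backup file
DST_NODE=vm2   # node where the TimescaleDB pod is running
DUMP=/export/timescaledb_backup/paa_metrics_db.dump
# Dry run: prints the copy command. Remove 'echo' to perform the copy.
echo scp "root@${SRC_NODE}:${DUMP}" "root@${DST_NODE}:${DUMP}"
```

After the file is in place on the correct node, retry the restore command.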