Backup and Restore Contrail Configuration Database
This document describes how to back up and restore the Contrail configuration databases (Cassandra and Zookeeper) for Contrail Networking deployed with Canonical OpenStack through Juju Charms.
The backup and restore procedure must be performed on nodes running the same Contrail Networking release. The procedure backs up the Contrail Networking databases only; it does not include instructions for backing up orchestration system databases.
Database backups must be consistent across all systems because the state of the Contrail database is associated with other system databases, such as the OpenStack databases. Database changes associated with northbound APIs must be stopped on all systems before performing any backup operation. For example, you might block the external VIP for northbound APIs at the load balancer level, such as HAProxy.
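As a minimal sketch only, assuming HAProxy is used with an admin-level stats socket (the frontend name contrail_api_frontend and the socket path /var/run/haproxy/admin.sock are placeholders for your environment), the northbound API frontend could be disabled before the backup and re-enabled afterwards through the HAProxy runtime API:
echo "disable frontend contrail_api_frontend" | socat stdio /var/run/haproxy/admin.sock
echo "enable frontend contrail_api_frontend" | socat stdio /var/run/haproxy/admin.sock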
The following procedure was tested with Juju versions 2.7 and 2.3.7 running on Ubuntu 16.04 LTS (Xenial Xerus).
Additionally, the examples in this procedure use Juju machine numbers 1, 2, and 3. You must replace them with your own Juju machine numbers. You can identify your Juju machine numbers by running the following command on the host:
juju status contrail-controller | grep "^contrail-controller\/" | awk '{print $4}'
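For example, in a deployment where the three contrail-controller units are placed on Juju machines 1, 2, and 3 (the machine numbers assumed throughout this document), the command prints:
1
2
3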
Back up the config database
Follow this procedure to back up the config database:
All commands are run on the host where the Juju client is installed, unless stated otherwise.
Note: The db_manage.py script is a disaster recovery script. If any errors occur after running this script, contact Juniper Networks support.
- Update the db_manage.py script.
for i in `juju status contrail-controller | grep "^contrail-controller\/" | awk '{print $4}'`; do juju ssh $i sudo docker exec contrail-controller curl -k https://raw.githubusercontent.com/tungstenfabric/tf-controller/master/src/config/api-server/vnc_cfg_api_server/db_manage.py --output /tmp/db_manage.py; done
- Update the db_json_exim.py script.
for i in `juju status contrail-controller | grep "^contrail-controller\/" | awk '{print $4}'`; do juju ssh $i sudo docker exec contrail-controller curl -k https://raw.githubusercontent.com/tungstenfabric/tf-controller/master/src/config/common/cfgm_common/db_json_exim.py --output /tmp/db_json_exim.py; done
The latest versions of the db_json_exim.py script require the Python future library.
for i in `juju status contrail-controller | grep "^contrail-controller\/" | awk '{print $4}'`; do juju ssh $i sudo docker exec contrail-controller pip install future; done
- Stop the Juju agents for the contrail-controller application.
for i in `juju status contrail-controller | grep '^contrail-controller\/' | awk '{print $1}' | sed -e 's/^contrail-controller\///'|sed -e s/\*//`; do juju ssh contrail-controller/$i sudo systemctl stop jujud-unit-contrail-controller-$i; done
Run the juju status command to confirm that the agents are in the lost state.
$ juju status contrail-controller
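Optionally, as an additional check (a sketch assuming the unit numbering shown above), you can confirm that the corresponding Juju agent unit is stopped on a controller machine; systemctl reports inactive when the agent is down:
juju ssh contrail-controller/0 sudo systemctl is-active jujud-unit-contrail-controller-0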
- Stop Contrail config services on all the nodes.
for i in contrail-svc-monitor contrail-dns contrail-device-manager contrail-schema contrail-api contrail-control; do for j in 1 2 3; do juju ssh $j sudo docker exec contrail-controller systemctl stop $i; done; done
- Verify the status of the contrail-controller nodes. The Contrail config services must be in the inactive state.
for i in `juju status contrail-controller | grep "^contrail-controller\/" | awk '{print $4}'`; do juju ssh $i sudo docker exec contrail-controller contrail-status; done
== Contrail Config ==
contrail-api: inactive
contrail-schema: inactive
contrail-svc-monitor: inactive
contrail-device-manager: inactive
- Check Contrail config DB for consistency on one of the
controller nodes.
juju ssh 1 sudo docker exec contrail-controller python /tmp/db_manage.py check
- Synchronize the data by running the nodetool repair command on the Contrail config DB.
juju ssh 1 sudo docker exec contrail-controller nodetool repair
- Save the database status. You may need it later to compare with the post-procedure database status.
for i in `juju status contrail-controller | grep "^contrail-controller\/" | awk '{print $4}'`; do juju ssh $i sudo docker exec contrail-controller nodetool status; done
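For example, to keep a copy for later comparison, you can redirect each node's output to a file on the Juju client host (the file path is only a suggestion):
for i in `juju status contrail-controller | grep "^contrail-controller\/" | awk '{print $4}'`; do juju ssh $i sudo docker exec contrail-controller nodetool status > /tmp/nodetool-status-before-machine-$i.txt; done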
- Log in to one of the controller nodes and take a backup of the Contrail config DB.
You can use either of the following methods:
Take the backup using the default db_json_exim.py script.
juju ssh 1 sudo docker exec contrail-controller python /usr/lib/python2.7/dist-packages/cfgm_common/db_json_exim.py --export-to /tmp/db-dump.json
Take the backup using the db_json_exim.py script that you downloaded in step 2.
juju ssh 1 sudo docker exec contrail-controller python /tmp/db_json_exim.py --export-to /tmp/db-dump.json
- Copy the database backup file from the container to the
host.
juju ssh 1 sudo docker cp contrail-controller:/tmp/db-dump.json .
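Optionally, you can also copy the dump from the controller machine to the Juju client host with juju scp (a sketch that assumes the file landed in the remote user's home directory and is readable by that user):
juju scp 1:db-dump.json ./db-dump.json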
- Restart the Contrail config services on all the controller
nodes.
for i in contrail-control contrail-svc-monitor contrail-dns contrail-device-manager contrail-schema contrail-api; do for j in 1 2 3; do juju ssh $j sudo docker exec contrail-controller systemctl start $i; done; done
Run the contrail-status command on each controller node to confirm that the services are in the active or backup state.
for i in `juju status | grep -e "^contrail-.[a-z]*\/" | awk '{print $1}' | sed -e 's/\*//'`; do echo "----- $i -----"; TMP=`echo $i | sed -e 's/\/.*//'`; juju ssh $i sudo docker exec ${TMP} contrail-status; done
- Restart the Juju agents for the contrail-controller application.
for i in `juju status contrail-controller | grep '^contrail-controller\/' | awk '{print $1}' | sed -e 's/^contrail-controller\///;s/\*//'`; do juju ssh contrail-controller/$i sudo systemctl start jujud-unit-contrail-controller-$i; done
Run the juju status command from the machine where the Juju client is configured. Confirm that the Juju agents are in the active state.
- Verify the db dump JSON file for logical structure. Make sure it is not empty.
In this example, node 1 contains the db dump.
juju ssh 1 sudo docker exec contrail-controller cat /tmp/db-dump.json | jq .
Verify that the db dump file contains the correct UUIDs and VM IP addresses for your environment (additional jq checks are shown after the note below).
juju ssh 1 sudo cat /tmp/db-dump.json | jq . | grep \"ref:virtual_machine:
juju ssh 1 sudo cat /tmp/db-dump.json | jq . | grep __FEW__OF_IPs__
Note: If there are no VMs loaded in the environment, the above commands will not show any output.
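In addition to the checks above, a quick structural check with jq can confirm that the dump is well formed. This is a sketch that assumes the export format produced by db_json_exim.py, which typically contains top-level cassandra and zookeeper sections:
juju ssh 1 sudo docker exec contrail-controller cat /tmp/db-dump.json | jq 'keys'
juju ssh 1 sudo docker exec contrail-controller cat /tmp/db-dump.json | jq '.cassandra | keys'
The first command should list the top-level sections of the dump; the second should list the Cassandra keyspaces captured in it.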
Restore the config database
Follow this procedure to restore the config database:
- Stop the Juju agents for the contrail-controller, contrail-analytics,
and contrail-analyticsdb applications.
for i in `juju status contrail-controller | grep '^contrail-controller\/' | awk '{print $1}' | sed -e 's/^contrail-controller\///;s/\*//'`; do juju ssh contrail-controller/$i sudo systemctl stop jujud-unit-contrail-controller-$i jujud-unit-contrail-analytics-$i jujud-unit-contrail-analyticsdb-$i; done
- Stop Contrail services on all the controller nodes.
for i in contrail-control contrail-svc-monitor contrail-dns contrail-device-manager contrail-schema contrail-api contrail-config-nodemgr contrail-control-nodemgr contrail-database; do for j in 1 2 3; do juju ssh $j sudo docker exec contrail-controller systemctl stop $i; done; done
for i in contrail-topology contrail-analytics-nodemgr contrail-snmp-collector contrail-alarm-gen; do for j in 1 2 3; do juju ssh $j sudo docker exec contrail-analytics systemctl stop $i; done; done
for i in datastax-agent confluent-kafka; do for j in 1 2 3; do juju ssh $j sudo docker exec contrail-analyticsdb systemctl stop $i; done; done
for i in contrail-query-engine contrail-collector contrail-analytics-api redis-server; do for j in 1 2 3; do juju ssh $j sudo docker exec contrail-analytics systemctl stop $i; done; done
- Run the contrail-status command on each controller node to confirm that the services are in the inactive state.
for i in `juju status | grep -e "^contrail-.[a-z]*\/" | awk '{print $1}' | sed -e 's/\*//'`; do echo "----- $i -----"; TMP=`echo $i | sed -e 's/\/.*//'`; juju ssh $i sudo docker exec ${TMP} contrail-status; done
- Take a backup of the Zookeeper data directory on all the controllers.
for j in 1 2 3; do juju ssh $j sudo docker exec contrail-controller systemctl stop zookeeper; done
for j in 1 2 3; do juju ssh $j sudo docker exec contrail-controller tar -cvzf /tmp/backup_configdatabase_config_zookeeper.tgz /var/lib/zookeeper/version-2; done
for j in 1 2 3; do juju ssh $j sudo docker exec contrail-controller systemctl start zookeeper; done
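Optionally, verify that the archive was created on every controller (a simple sanity check using the path from the previous command):
for j in 1 2 3; do juju ssh $j sudo docker exec contrail-controller ls -lh /tmp/backup_configdatabase_config_zookeeper.tgz; done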
- Clean the current data from one of the Zookeeper instances using the rmr command.
for i in `juju ssh 1 sudo docker exec contrail-controller /usr/share/zookeeper/bin/zkCli.sh ls / | grep "^\[" | sed -e 's/\[//;s/\]//;s/,//g;s/\r//'`; do juju ssh 1 sudo docker exec contrail-controller /usr/share/zookeeper/bin/zkCli.sh rmr /$i; done
- Stop Zookeeper services on all the controllers.
for j in 1 2 3; do juju ssh $j sudo docker exec contrail-controller systemctl stop zookeeper; done
- Clean the Zookeeper data directory contents from all the
controllers.
for j in 1 2 3; do juju ssh $j sudo docker exec contrail-controller "sh -c 'rm -rvf /var/lib/zookeeper/version-2/*'"; done
- Back up the Cassandra data directory from all the controllers.
for j in 1 2 3; do juju ssh $j sudo docker exec contrail-controller tar -cvzf /tmp/backup_configdatabase_config_cassandra.tgz /var/lib/cassandra; done
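Optionally, confirm that each archive is readable and non-empty by listing its contents (the file count is only an indicative check):
for j in 1 2 3; do juju ssh $j sudo docker exec contrail-controller tar -tzf /tmp/backup_configdatabase_config_cassandra.tgz | wc -l; done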
- Clean the Cassandra data directory
contents from all the controllers.
for j in 1 2 3; do juju ssh $j sudo docker exec contrail-controller "sh -c 'rm -rvf /var/lib/cassandra/data/*'"; done
for j in 1 2 3; do juju ssh $j sudo docker exec contrail-controller "sh -c 'rm -rvf /var/lib/cassandra/commitlog/*'"; done
After running the above commands, the old password is erased.
- Modify Cassandra configuration on each controller, one
at a time, to reset the password.
Edit the authenticator variable in the /etc/cassandra/cassandra.yaml file.
juju ssh <node> sudo docker exec -it contrail-controller vim /etc/cassandra/cassandra.yaml
Replace authenticator: PasswordAuthenticator with authenticator: AllowAllAuthenticator.
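If you prefer a non-interactive edit, a sed one-liner can make the same change (a sketch that assumes the authenticator key is defined at the start of a line in cassandra.yaml, as in the default layout):
juju ssh <node> "sudo docker exec contrail-controller sed -i 's/^authenticator:.*/authenticator: AllowAllAuthenticator/' /etc/cassandra/cassandra.yaml"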
- Verify that no old Contrail services, such as the db_* scripts, are running. If you find any old services, kill them. Run the following command on the Contrail nodes, outside the Docker containers.
root:~# ps -fe |grep -i contrail | grep -v docker
root 2305 2287 0 12:41 ? 00:00:11 python /tmp/db_json_exim.py --import-from /tmp/db-dump.json --api-conf /etc/contrail/contrail-api-dbrestore.conf
root 5168 5150 0 12:43 ? 00:00:11 python /tmp/db_json_exim.py --import-from /tmp/db-dump.json --api-conf /etc/contrail/contrail-api-dbrestore.conf
root 5871 5854 0 12:43 ? 00:00:11 python /tmp/db_json_exim.py --import-from /tmp/db-dump.json --api-conf /etc/contrail/contrail-api-dbrestore.conf
root 13528 13511 0 12:47 ? 00:00:11 python /tmp/db_json_exim.py --import-from /tmp/db-dump.json --api-conf /etc/contrail/contrail-api-dbrestore.conf
root 16826 16807 0 11:58 ? 00:00:13 python /tmp/db_json_exim.py --import-from /tmp/db-dump.json --api-conf /etc/contrail/contrail-api-dbrestore.conf
root 29493 29473 0 12:56 ? 00:00:10 python /tmp/db_json_exim.py --import-from /tmp/db-dump.json --api-conf /etc/contrail/contrail-api-dbrestore.conf
root 30810 30784 0 12:06 ? 00:00:12 python /tmp/db_json_exim.py --import-from /tmp/db-dump.json --api-conf /etc/contrail/contrail-api-dbrestore.conf --debug --verbose
root 32675 32658 0 12:07 ? 00:00:12 python /tmp/db_json_exim.py --import-from /tmp/db-dump.json --api-conf /etc/contrail/contrail-api-dbrestore.conf
root 49265 88219 0 17:48 pts/3 00:00:00 grep --color=auto -i contrail
root 60141 60124 0 12:23 ? 00:00:12 python /tmp/db_json_exim.py --import-from /tmp/db-dump.json --api-conf /etc/contrail/contrail-api-dbrestore.conf
root 60452 60435 0 12:23 ? 00:00:11 python /tmp/db_json_exim.py --import-from /tmp/db-dump.json --api-conf /etc/contrail/contrail-api-dbrestore.conf
root 61231 54435 0 15:48 pts/1 00:00:00 less /etc/contrail/contrail-api.conf
root 63507 63489 0 12:25 ? 00:00:11 python /usr/lib/python2.7/dist-packages/cfgm_common/db_json_exim.py --import-from /tmp/db-dump.json --api-conf /etc/contrail/contrail-api-dbrestore.conf
root 67126 67109 0 12:27 ? 00:00:11 python /tmp/db_json_exim.py --import-from /tmp/db-dump.json --api-conf /etc/contrail/contrail-api-dbrestore.conf
root 80449 80431 0 12:35 ? 00:00:12 python /tmp/db_json_exim.py --import-from /tmp/db-dump.json --api-conf /etc/contrail/contrail-api-dbrestore.conf
root 85457 7860 0 16:54 ? 00:00:00 /bin/sh -c contrail-api
root 85458 85457 0 16:54 ? 00:00:04 /usr/bin/python /usr/bin/contrail-api
root 86585 86567 0 12:38 ? 00:00:11 python /tmp/db_json_exim.py --import-from /tmp/db-dump.json --api-conf /etc/contrail/contrail-api-dbrestore.conf
- Restart the contrail-database and zookeeper services on all the controllers.
for i in contrail-database zookeeper; do for j in 1 2 3; do juju ssh $j sudo docker exec contrail-controller systemctl start $i; done; done
- Verify the status of the Zookeeper service.
juju ssh 1 sudo docker exec contrail-controller /usr/share/zookeeper/bin/zkCli.sh ls /
- Verify the status of Cassandra service.
for i in `juju status contrail-controller | grep "^contrail-controller\/" | awk '{print $4}'`; do juju ssh $i sudo docker exec contrail-controller nodetool status; done
root@(controller):/# nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address    Load      Tokens  Owns (effective)  Host ID                               Rack
UN  100.x.x.x  1.15 MiB  256     100.0%            eeb0f764-xxxx-4ca6-xxxx-84829624d588  rack1
UN  100.x.x.x  1.15 MiB  256     100.0%            d6cf381c-xxxx-4208-xxxx-5916f09da6a2  rack1
UN  100.x.x.x  1.15 MiB  256     100.0%            ffee7451-xxxx-4058-xxxx-5efe9f1286f1  rack1
For details on the nodetool status command, see https://docs.datastax.com/en/archived/cassandra/3.0/cassandra/tools/toolsStatus.html.
- Copy the config DB backup.
for j in 1 2 3; do juju ssh $j sudo docker cp contrail-controller:/tmp/backup_configdatabase_config_zookeeper.tgz .; done
for j in 1 2 3; do juju ssh $j sudo docker cp contrail-controller:/tmp/backup_configdatabase_config_cassandra.tgz .; done
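Optionally, pull the archives off the controller machines to the Juju client host with juju scp (a sketch that assumes the files landed in the remote user's home directory and are readable by that user):
for j in 1 2 3; do juju scp $j:backup_configdatabase_config_zookeeper.tgz ./zookeeper-backup-machine-$j.tgz; done
for j in 1 2 3; do juju scp $j:backup_configdatabase_config_cassandra.tgz ./cassandra-backup-machine-$j.tgz; done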
- Restore config DB.
- Prepare a temporary contrail-api.conf file for the DB restoration.
juju ssh 1 sudo docker exec contrail-controller cp /etc/contrail/contrail-api.conf /tmp/contrail-api-dbrestore.conf
- Modify cassandra_password and cassandra_user in the temporary /tmp/contrail-api-dbrestore.conf file.
juju ssh 1 sudo docker exec -it contrail-controller vim /tmp/contrail-api-dbrestore.conf
[CASSANDRA]
cassandra_password = cassandra
cassandra_user = cassandra
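Alternatively, as a non-interactive sketch that assumes both keys already exist in the copied file, you can set the values with sed:
juju ssh 1 "sudo docker exec contrail-controller sed -i -e 's/^cassandra_user.*/cassandra_user = cassandra/' -e 's/^cassandra_password.*/cassandra_password = cassandra/' /tmp/contrail-api-dbrestore.conf"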
- Import the database from the /tmp/db-dump.json file.
You can use either of the following methods:
Import the database using the default db_json_exim.py script.
juju ssh 1 sudo docker exec contrail-controller python /usr/lib/python2.7/dist-packages/cfgm_common/db_json_exim.py --import-from /tmp/db-dump.json --api-conf /tmp/contrail-api-dbrestore.conf
Import the database using the downloaded db_json_exim.py script.
juju ssh 1 sudo docker exec contrail-controller python /tmp/db_json_exim.py --import-from /tmp/db-dump.json --api-conf /tmp/contrail-api-dbrestore.conf
If any errors occur, repeat the procedure to restore the config database, starting from step 5.
- Synchronize the Cassandra data between nodes.
juju ssh 1 sudo docker exec contrail-controller nodetool status
juju ssh 1 sudo docker exec contrail-controller nodetool repair
- Modify the Cassandra configuration on each controller, one at a time, to re-enable password authentication.
Edit the authenticator variable in the /etc/cassandra/cassandra.yaml file.
juju ssh <node> sudo docker exec -it contrail-controller vim /etc/cassandra/cassandra.yaml
Replace authenticator: AllowAllAuthenticator with authenticator: PasswordAuthenticator.
Then restart the contrail-database service on that node.
juju ssh <node> sudo docker exec contrail-controller systemctl restart contrail-database
- Create the Contrail user on any of the controller nodes.
The following example shows the user being created interactively with cqlsh:
root@(controller):/tmp/taj# cqlsh 100.x.108.1 9041 -u cassandra -p cassandra
Connected to ContrailConfigDB at 100.x.108.1:9041.
[cqlsh 5.0.1 | Cassandra 3.11.2 | CQL spec 3.4.4 | Native protocol v4]
Use HELP for help.
cassandra@cqlsh> list roles;

 role      | super | login | options
-----------+-------+-------+---------
 cassandra |  True |  True |      {}

(1 rows)
cassandra@cqlsh> CREATE USER IF NOT EXISTS controller WITH PASSWORD '00108521aaa2410aaa44da5dd5a863a3' AND SUPERUSER = true;
cassandra@cqlsh> list roles;

 role       | super | login | options
------------+-------+-------+---------
 cassandra  |  True |  True |      {}
 controller |  True |  True |      {}

Alternatively, run the same query from the Juju client host:
CASSANDRA_ADDR=`juju ssh contrail-controller/0 sudo docker exec contrail-controller ss -l | grep 9041 | awk '{print $5}' | sed -e 's/:/ /'`
CASSANDRA_PASS=`juju ssh contrail-controller/0 sudo docker exec contrail-controller cat /etc/contrail/contrail-api.conf | grep cassandra_password | sed -e 's/^.*=[ ]*//'| sed -e 's/\r//g'`
CQL_QUERY="CREATE ROLE controller with SUPERUSER = true AND LOGIN = true and PASSWORD = '${CASSANDRA_PASS}';"
DB_QUERY="cqlsh ${CASSANDRA_ADDR} -u cassandra -p cassandra -e \"${CQL_QUERY}\""
juju ssh contrail-controller/0 "sudo docker exec contrail-controller ${DB_QUERY}"
- Verify that the Contrail user is available on the other controller nodes.
CASSANDRA_ADDR=`juju ssh contrail-controller/0 sudo docker exec contrail-controller ss -l | grep 9041 | awk '{print $5}' | sed -e 's/:/ /'`
CASSANDRA_PASS=`juju ssh contrail-controller/0 sudo docker exec contrail-controller cat /etc/contrail/contrail-api.conf | grep cassandra_password | sed -e 's/^.*=[ ]*//'| sed -e 's/\r//g'`
CQL_QUERY="list roles;"
DB_QUERY="cqlsh ${CASSANDRA_ADDR} -u cassandra -p cassandra -e '${CQL_QUERY}'"
juju ssh contrail-controller/0 "sudo docker exec contrail-controller ${DB_QUERY}"
If you don't see the Contrail user created on these nodes, check the replication factor for the system_auth keyspace on all the controller nodes.
- Check the replication factor using one of the following methods:
Using the nodetool command.
juju ssh 1 sudo docker exec contrail-controller nodetool status
The output must show that each node owns 100% of tokens and partitions.
Querying the Cassandra DB.
CASSANDRA_ADDR=`juju ssh contrail-controller/0 sudo docker exec contrail-controller ss -l | grep 9041 | awk '{print $5}' | sed -e 's/:/ /'`
CASSANDRA_PASS=`juju ssh contrail-controller/0 sudo docker exec contrail-controller cat /etc/contrail/contrail-api.conf | grep cassandra_password | sed -e 's/^.*=[ ]*//'| sed -e 's/\r//g'`
CQL_QUERY="select * from system_schema.keyspaces;"
DB_QUERY="cqlsh ${CASSANDRA_ADDR} -u controller -p ${CASSANDRA_PASS} -e '${CQL_QUERY}'"
juju ssh contrail-controller/0 "sudo docker exec contrail-controller ${DB_QUERY}"
deployer@infra1:~$ juju ssh contrail-controller/0 "sudo docker exec contrail-controller ${DB_QUERY}"

 keyspace_name        | durable_writes | replication
----------------------+----------------+--------------------------------------------------------------------------------------
 system_auth          |           True | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '3'}
 system_schema        |           True | {'class': 'org.apache.cassandra.locator.LocalStrategy'}
 svc_monitor_keyspace |           True | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '3'}
 to_bgp_keyspace      |           True | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '3'}
 system_distributed   |           True | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '3'}
 system               |           True | {'class': 'org.apache.cassandra.locator.LocalStrategy'}
 config_db_uuid       |           True | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '3'}
 dm_keyspace          |           True | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '3'}
 system_traces        |           True | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '2'}
 useragent            |           True | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '3'}

(10 rows)
The system_auth keyspace must have a replication_factor of 3.
If the replication_factor is not set to 3, run the following commands:
CQL_QUERY="ALTER KEYSPACE system_auth WITH replication = {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '3'};"
DB_QUERY="cqlsh ${CASSANDRA_ADDR} -u controller -p ${CASSANDRA_PASS} -e \"${CQL_QUERY}\""
juju ssh contrail-controller/0 "sudo docker exec contrail-controller ${DB_QUERY}"
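After changing the replication factor, it is good practice to propagate the authentication data; nodetool repair accepts a keyspace argument, so a targeted repair of system_auth is a reasonable follow-up (shown here for one node):
juju ssh 1 sudo docker exec contrail-controller nodetool repair system_auth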
- Restart Contrail services on all the controller nodes.
for i in contrail-svc-monitor contrail-dns contrail-device-manager contrail-schema contrail-api contrail-config-nodemgr contrail-control-nodemgr contrail-control; do for j in 1 2 3; do juju ssh $j sudo docker exec contrail-controller systemctl start $i; done; done
for i in contrail-topology contrail-analytics-nodemgr contrail-snmp-collector contrail-alarm-gen; do for j in 1 2 3; do juju ssh $j sudo docker exec contrail-analytics systemctl start $i; done; done
for i in datastax-agent confluent-kafka; do for j in 1 2 3; do juju ssh $j sudo docker exec contrail-analyticsdb systemctl start $i; done; done
for i in contrail-collector contrail-analytics-api redis-server contrail-query-engine; do for j in 1 2 3; do juju ssh $j sudo docker exec contrail-analytics systemctl start $i; done; done
- On each controller node, enter the contrail-status command to confirm that the services are in the active or backup state.
for i in `juju status | grep -e "^contrail-.[a-z]*\/" | awk '{print $1}' | sed -e 's/\*//'`; do echo "----- $i -----"; TMP=`echo $i | sed -e 's/\/.*//'`; juju ssh $i sudo docker exec ${TMP} contrail-status; done
- Restart Juju agents for contrail-controller, contrail-analytics
and contrail-analyticsdb applications.
for i in `juju status contrail-controller | grep '^contrail-controller\/' | awk '{print $1}' | sed -e 's/^contrail-controller\///;s/\*//'`; do juju ssh contrail-controller/$i sudo systemctl start jujud-unit-contrail-controller-$i jujud-unit-contrail-analytics-$i jujud-unit-contrail-analyticsdb-$i; done
- Check Zookeeper
status.
juju ssh 1 sudo docker exec contrail-controller /usr/share/zookeeper/bin/zkCli.sh ls /
- Check the log files on all the controller nodes for any errors.
- Check the database using the db_manage.py script.
juju ssh 1 sudo docker exec contrail-controller python /tmp/db_manage.py check
root(controller):~# python db_manage.py check
2020-06-15 20:25:29,714 INFO: (v1.31) Checker check_zk_mode_and_node_count: Success
2020-06-15 20:25:30,095 INFO: (v1.31) Checker check_cassandra_keyspace_replication: Success
2020-06-15 20:25:31,025 INFO: (v1.31) Checker check_obj_mandatory_fields: Success
2020-06-15 20:25:32,537 INFO: (v1.31) Checker check_orphan_resources: Success
2020-06-15 20:25:33,963 INFO: (v1.31) Checker check_fq_name_uuid_match: Success
2020-06-15 20:25:33,963 WARNING: Be careful, that check can return false positive errors if stale FQ names and stale resources were not cleaned before. Run at least commands 'clean_obj_missing_mandatory_fields', 'clean_orphan_resources' and 'clean_stale_fq_names' before.
2020-06-15 20:25:34,707 INFO: (v1.31) Checker check_duplicate_fq_name: Success
2020-06-15 20:25:34,776 INFO: (v1.31) Checker check_route_targets_routing_instance_backrefs: Success
2020-06-15 20:25:35,384 INFO: (v1.31) Checker check_subnet_uuid: Success
2020-06-15 20:25:35,830 INFO: (v1.31) Checker check_subnet_addr_alloc: Success
2020-06-15 20:25:36,216 INFO: (v1.31) Checker check_route_targets_id: Success
2020-06-15 20:25:36,261 INFO: (v1.31) Checker check_virtual_networks_id: Success
2020-06-15 20:25:36,315 INFO: (v1.31) Checker check_security_groups_id: Success