Back Up and Restore

SUMMARY Learn how to use etcdctl commands to back up and restore the etcd database.

We provide these example procedures for informational purposes only. For more information on backup and restore, see the official Kubernetes documentation (https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#backing-up-an-etcd-cluster and https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#restoring-an-etcd-cluster).

Back Up the Etcd Database

Use this example procedure to back up the etcd database.

  1. SSH into one of the control plane nodes in the cluster.
  2. Install etcdctl version 3.4.13 or later. Etcdctl is the command line tool for managing etcd.

    If the node already has etcdctl version 3.4.13 or later installed, then you can skip this step.

    Otherwise, install etcdctl:

    Download the etcd release archive. In this example, we place the file in the /tmp directory.
    Extract and copy the etcdctl executable to /usr/local/bin.
    Verify the installation by querying the version.
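    A minimal sketch of these steps, assuming a Linux amd64 host and etcd v3.4.13 (adjust the version and platform to your environment):

      # Download the etcd release archive to /tmp.
      ETCD_VER=v3.4.13
      curl -L https://github.com/etcd-io/etcd/releases/download/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz -o /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz

      # Extract the archive and copy the etcdctl executable to /usr/local/bin.
      tar xzvf /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz -C /tmp
      sudo cp /tmp/etcd-${ETCD_VER}-linux-amd64/etcdctl /usr/local/bin/

      # Verify by querying the version.
      etcdctl version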
  3. Set the required ETCDCTL environment variables.
    These variables are used implicitly by the etcdctl commands. The file paths shown below are the defaults; you can confirm the paths for your cluster by issuing the kubectl describe pod <etcd-pod> command.
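    For example, on a cluster with the default certificate locations (the paths are assumptions; confirm them as described above):

      export ETCDCTL_API=3
      export ETCDCTL_CACERT=/etc/kubernetes/pki/etcd/ca.crt
      export ETCDCTL_CERT=/etc/kubernetes/pki/etcd/server.crt
      export ETCDCTL_KEY=/etc/kubernetes/pki/etcd/server.key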
  4. Set permissions on the certificate files.
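    For example, to make the files readable by the user running etcdctl (a permissive sketch; apply whatever modes your security policy requires):

      sudo chmod 644 /etc/kubernetes/pki/etcd/ca.crt /etc/kubernetes/pki/etcd/server.crt /etc/kubernetes/pki/etcd/server.key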
  5. Repeat step 1 to step 4 on all the control plane nodes.
  6. Back up the etcd database.
    SSH back into one of the control plane nodes and take a snapshot of the etcd database. The snapshot is stored in /tmp/etcdBackup.db.
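    For example:

      # Save a snapshot of the running etcd database.
      etcdctl snapshot save /tmp/etcdBackup.db

      # Optionally, verify the snapshot.
      etcdctl snapshot status /tmp/etcdBackup.db --write-out=table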
  7. Copy the snapshot off the node and store it in a safe place.
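    For example, using scp (backup-user and backup-host are hypothetical placeholders):

      scp /tmp/etcdBackup.db backup-user@backup-host:/backups/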

Restore the Etcd Database

Use this example procedure to restore the etcd database from a snapshot.

  1. Restore the snapshot on all the control plane nodes.
    1. SSH into one of the control plane nodes.
    2. Copy the saved snapshot to /tmp/etcdBackup.db (for example).
    3. Restore the backup.
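      A minimal sketch, assuming the snapshot was copied to /tmp/etcdBackup.db and that etcd uses the default peer port 2380:

        etcdctl snapshot restore /tmp/etcdBackup.db \
          --name <cp1-etcd-pod> \
          --initial-advertise-peer-urls https://<cp1-etcd-pod-ip>:2380 \
          --initial-cluster <cp1-etcd-pod>=https://<cp1-etcd-pod-ip>:2380,<cp2-etcd-pod>=https://<cp2-etcd-pod-ip>:2380,<cp3-etcd-pod>=https://<cp3-etcd-pod-ip>:2380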
      where <cp1-etcd-pod> is the name of the contrail-etcd pod running on the node you're logged in to and <cp1-etcd-pod-ip> is the IP address of that pod. The <cp2-etcd-pod> and <cp3-etcd-pod> placeholders refer to the other contrail-etcd pods. This creates a <cp1-etcd-pod>.etcd directory in the current working directory on the node.
    4. Repeat for the other control plane nodes, substituting the --name and --initial-advertise-peer-urls parameters with the respective pod name and IP address.
  2. Stop the API server on all the control plane nodes.
    1. SSH into one of the control plane nodes.
    2. Stop the API server.
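      If the API server runs as a static pod (as on kubeadm-based clusters), you can stop it by moving its manifest out of the manifests directory; the path below is an assumption:

        sudo mv /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/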
    3. Repeat for the other control plane nodes.
  3. Move the restored etcd snapshot to /var/lib/etcd on all the control plane nodes.
    1. SSH into one of the control plane nodes.
    2. Move the restored etcd snapshot.
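      A minimal sketch, assuming the restore in step 1 was run from /tmp:

        # Preserve the old data directory, then move the restored data into place.
        sudo mv /var/lib/etcd /var/lib/etcd.bak
        sudo mv /tmp/<restored-etcd-directory> /var/lib/etcd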
      where <restored-etcd-directory> is the .etcd directory created in step 1.
    3. Repeat for the other control plane nodes.
  4. Restore the API server on all control plane nodes.
    1. SSH into one of the control plane nodes.
    2. Restore the API server.
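      Continuing the static-pod assumption from step 2, move the manifest back so that the kubelet restarts the API server:

        sudo mv /tmp/kube-apiserver.yaml /etc/kubernetes/manifests/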
    3. Repeat for the other control plane nodes.
  5. Restart the kubelet on all control plane nodes.
    1. SSH into one of the control plane nodes.
    2. Restart the kubelet.
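      For example, on a systemd-based host:

        sudo systemctl restart kubelet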
    3. Repeat for the other control plane nodes.
  6. Restart the kube-system apiserver and controller.
    Delete all the kube-apiserver and kube-controller-manager pods. These pods restart automatically.
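    A sketch using the component labels that kubeadm applies to its static pods (the labels are assumptions; verify with kubectl get pods -n kube-system --show-labels):

      kubectl -n kube-system delete pods -l component=kube-apiserver
      kubectl -n kube-system delete pods -l component=kube-controller-manager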
  7. Restart the contrail-system apiserver and controller.
    Delete all the contrail-k8s-apiserver and contrail-k8s-controller pods. These pods restart automatically.
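    One way to do this is to match the pods by name (the contrail-system namespace is an assumption; adjust to your installation):

      kubectl -n contrail-system get pods --no-headers | awk '/contrail-k8s-apiserver|contrail-k8s-controller/ {print $1}' | xargs kubectl -n contrail-system delete pods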
  8. Restart the vrouters.
    Delete all the contrail-vrouter-masters and contrail-vrouter-nodes pods. These pods restart automatically.
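    The same name-matching approach works here, where <contrail-namespace> is the namespace in which the vrouter pods run:

      kubectl -n <contrail-namespace> get pods --no-headers | awk '/contrail-vrouter-masters|contrail-vrouter-nodes/ {print $1}' | xargs kubectl -n <contrail-namespace> delete pods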
  9. Check that all pods are in the Running state.
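    For example:

      kubectl get pods --all-namespaces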