Back Up the Etcd Database

Use this example procedure to back up the etcd database.

This example procedure is provided for informational purposes only. See the Red Hat OpenShift documentation (https://docs.openshift.com/) for the official procedure.

The use of CN2 as a CNI plug-in does not affect how you back up and restore the etcd database. Use the tools that you're most familiar with to manage the database, such as etcdctl.

  1. Get a list of the running nodes.
  2. Log in to one of the control plane nodes as root.
    You cannot do this directly through SSH because root login is disabled by default. Instead, you launch a debug pod and chroot into the host filesystem.
    1. Launch a debug pod on one of the control plane nodes. When you do this, you're automatically placed into a root shell of the debug pod. This example launches a debug pod on ocp1.
      The debug pod mounts the host (node) filesystem at /host, as you can see here:
    2. To change to the host filesystem as root, use the chroot command.
      By doing this, you are effectively logged in to the host node as root.
      You can verify that you're in the host filesystem by searching for the device name that was mounted as /host previously.
  3. Back up the etcd database.
    The following command backs up the database to the /home/core/assets/backup directory, which is created by the cluster-backup.sh script. The script, provided as part of the etcd Cluster Operator, is a wrapper around the etcdctl snapshot save command. You don't need to install etcdctl; the script installs it automatically.
    Note:

    It is normal to see CNI errors in the output of this script.

    The script creates two files:
    • snapshot_<timestamp>.db - the etcd database snapshot
    • static_kuberesources_<timestamp>.tar.gz - the resources for the static pods
  4. Type exit to exit the shell and terminate the debug pod.
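The procedure above can be sketched as a shell session. This is a hedged illustration, not the official sequence: the node name ocp1 comes from the example in step 2, and the path to cluster-backup.sh is an assumption based on typical OpenShift installations.

```shell
# Step 1: list the running nodes and pick a control plane node.
oc get nodes

# Step 2: launch a debug pod on a control plane node. This drops you
# into a root shell in the debug pod, with the host filesystem
# mounted at /host. ("ocp1" is the example node name.)
oc debug node/ocp1

# Step 2.2: chroot into the host filesystem so that you are
# effectively logged in to the node as root.
chroot /host

# Step 3: back up the etcd database. The script path shown here is an
# assumption based on typical OpenShift installs; it creates the
# target directory and installs etcdctl as needed.
/usr/local/bin/cluster-backup.sh /home/core/assets/backup

# Confirm that the snapshot and the static-pod resource archive exist.
ls /home/core/assets/backup

# Step 4: exit the chroot, then exit the debug pod to terminate it.
exit
exit
```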