This is the documentation for the latest development version of Velero. Both code and docs may be unstable, and these docs are not guaranteed to be up to date or correct. See the latest version.


Restic Integration

Velero has support for backing up and restoring Kubernetes volumes using a free open-source backup tool called restic. This support is considered beta quality. Please see the list of limitations to understand if it currently fits your use case.

Velero has always allowed you to take snapshots of persistent volumes as part of your backups if you’re using one of the supported cloud providers’ block storage offerings (Amazon EBS Volumes, Azure Managed Disks, Google Persistent Disks). We also provide a plugin model that enables anyone to implement additional object and block storage backends, outside the main Velero repository.

We integrated restic with Velero so that users have an out-of-the-box solution for backing up and restoring almost any type of Kubernetes volume*. This is a new capability for Velero, not a replacement for existing functionality. If you’re running on AWS, and taking EBS snapshots as part of your regular Velero backups, there’s no need to switch to using restic. However, if you’ve been waiting for a snapshot plugin for your storage platform, or if you’re using EFS, AzureFile, NFS, emptyDir, local, or any other volume type that doesn’t have a native snapshot concept, restic might be for you.

Restic is not tied to a specific storage platform, which means that this integration also paves the way for future work to enable cross-volume-type data migrations. Stay tuned as this evolves!

* hostPath volumes are not supported, but the new local volume type is supported.

Setup

Prerequisites

Instructions

Ensure you’ve downloaded the latest release.

To install restic, use the --use-restic flag on the velero install command. See the install overview for more details.

Please note: in RancherOS, the path is /opt/rke/var/lib/kubelet/pods rather than /var/lib/kubelet/pods, so after installing you must modify the restic daemonset, changing:

  hostPath:
    path: /var/lib/kubelet/pods

to

  hostPath:
    path: /opt/rke/var/lib/kubelet/pods

You’re now ready to use Velero with restic.

Back up

  1. Run the following for each pod that contains a volume to back up:

     kubectl -n YOUR_POD_NAMESPACE annotate pod/YOUR_POD_NAME backup.velero.io/backup-volumes=YOUR_VOLUME_NAME_1,YOUR_VOLUME_NAME_2,...
    

    where the volume names are the names of the volumes in the pod spec.

    For example, for the following pod:

     apiVersion: v1
     kind: Pod
     metadata:
       name: sample
       namespace: foo
     spec:
       containers:
       - image: k8s.gcr.io/test-webserver
         name: test-webserver
         volumeMounts:
         - name: pvc-volume
           mountPath: /volume-1
         - name: emptydir-volume
           mountPath: /volume-2
       volumes:
       - name: pvc-volume
         persistentVolumeClaim: 
           claimName: test-volume-claim
       - name: emptydir-volume
         emptyDir: {}
    

    You’d run:

     kubectl -n foo annotate pod/sample backup.velero.io/backup-volumes=pvc-volume,emptydir-volume
    

    This annotation can also be provided in a pod template spec if you use a controller to manage your pods.
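For example, in a Deployment’s pod template the same annotation might look like this (a partial, hypothetical manifest; only the annotation key and volume names matter):

```yaml
# Setting the annotation on the pod template means every pod the
# controller creates is annotated automatically.
spec:
  template:
    metadata:
      annotations:
        backup.velero.io/backup-volumes: pvc-volume,emptydir-volume
```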

  2. Take a Velero backup:

     velero backup create NAME OPTIONS...
    
  3. When the backup completes, view information about the backups:

     velero backup describe YOUR_BACKUP_NAME
    
     kubectl -n velero get podvolumebackups -l velero.io/backup-name=YOUR_BACKUP_NAME -o yaml
    

Restore

  1. Restore from your Velero backup:

     velero restore create --from-backup BACKUP_NAME OPTIONS...
    
  2. When the restore completes, view information about your pod volume restores:

     velero restore describe YOUR_RESTORE_NAME
    
     kubectl -n velero get podvolumerestores -l velero.io/restore-name=YOUR_RESTORE_NAME -o yaml
    

Limitations

Customize Restore Helper Image

Velero uses a helper init container when performing a restic restore. By default, the image for this container is gcr.io/heptio-images/velero-restic-restore-helper:<VERSION>, where VERSION matches the version/tag of the main Velero image. You can customize the image that is used for this helper by creating a ConfigMap in the Velero namespace with the alternate image. The ConfigMap must look like the following:

apiVersion: v1
kind: ConfigMap
metadata:
  # any name can be used; Velero uses the labels (below)
  # to identify it rather than the name
  name: restic-restore-action-config
  # must be in the velero namespace
  namespace: velero
  # the below labels should be used verbatim in your
  # ConfigMap.
  labels:
    # this value-less label identifies the ConfigMap as
    # config for a plugin (i.e. the built-in restic restore
    # item action plugin)
    velero.io/plugin-config: ""
    # this label identifies the name and kind of plugin
    # that this ConfigMap is for.
    velero.io/restic: RestoreItemAction
data:
  # "image" is the only configurable key. The value can either
  # include a tag or not; if the tag is *not* included, the
  # tag from the main Velero image will automatically be used.
  image: myregistry.io/my-custom-helper-image[:OPTIONAL_TAG]

Troubleshooting

Run the following checks:

Are your Velero server and daemonset pods running?

kubectl get pods -n velero

Does your restic repository exist, and is it ready?

velero restic repo get

velero restic repo get REPO_NAME -o yaml

Are there any errors in your Velero backup/restore?

velero backup describe BACKUP_NAME
velero backup logs BACKUP_NAME

velero restore describe RESTORE_NAME
velero restore logs RESTORE_NAME

What is the status of your pod volume backups/restores?

kubectl -n velero get podvolumebackups -l velero.io/backup-name=BACKUP_NAME -o yaml

kubectl -n velero get podvolumerestores -l velero.io/restore-name=RESTORE_NAME -o yaml

Is there any useful information in the Velero server or daemon pod logs?

kubectl -n velero logs deploy/velero
kubectl -n velero logs DAEMON_POD_NAME

NOTE: You can increase the verbosity of the pod logs by adding --log-level=debug as an argument to the container command in the deployment/daemonset pod template spec.
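For example, the Velero deployment’s container spec might look like the following after the change (container name and existing args are assumptions; adjust to match your manifest):

```yaml
spec:
  containers:
  - name: velero
    command:
    - /velero
    args:
    - server
    - --log-level=debug   # added for verbose logging
```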

How backup and restore work with restic

We introduced three custom resource definitions and associated controllers: ResticRepository, PodVolumeBackup, and PodVolumeRestore.

Backup

  1. The main Velero backup process checks each pod that it’s backing up for the annotation specifying a restic backup should be taken (backup.velero.io/backup-volumes)
  2. When found, Velero first ensures a restic repository exists for the pod’s namespace, by:
    • checking if a ResticRepository custom resource already exists
    • if not, creating a new one, and waiting for the ResticRepository controller to init/check it
  3. Velero then creates a PodVolumeBackup custom resource per volume listed in the pod annotation
  4. The main Velero process now waits for the PodVolumeBackup resources to complete or fail
  5. Meanwhile, each PodVolumeBackup is handled by the controller on the appropriate node, which:
    • has a hostPath volume mount of /var/lib/kubelet/pods to access the pod volume data
    • finds the pod volume’s subdirectory within the above volume
    • runs restic backup
    • updates the status of the custom resource to Completed or Failed
  6. As each PodVolumeBackup finishes, the main Velero process captures its restic snapshot ID and adds it as an annotation to the copy of the pod JSON that’s stored in the Velero backup. This will be used for restores, as seen in the next section.
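Based on the annotation key described in the Restore section below, the copy of the pod stored in the backup might carry annotations like the following (the snapshot ID shown is illustrative, not a real restic ID):

```yaml
metadata:
  annotations:
    # one annotation per backed-up volume, recording its restic snapshot ID
    snapshot.velero.io/pvc-volume: a1b2c3d4
    snapshot.velero.io/emptydir-volume: e5f6a7b8
```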

Restore

  1. The main Velero restore process checks each pod that it’s restoring for annotations specifying a restic backup exists for a volume in the pod (snapshot.velero.io/<volume-name>)
  2. When found, Velero first ensures a restic repository exists for the pod’s namespace, by:
    • checking if a ResticRepository custom resource already exists
    • if not, creating a new one, and waiting for the ResticRepository controller to init/check it (note that in this case, the actual repository should already exist in object storage, so the Velero controller will simply check it for integrity)
  3. Velero adds an init container to the pod, whose job is to wait for all restic restores for the pod to complete (more on this shortly)
  4. Velero creates the pod, with the added init container, by submitting it to the Kubernetes API
  5. Velero creates a PodVolumeRestore custom resource for each volume to be restored in the pod
  6. The main Velero process now waits for each PodVolumeRestore resource to complete or fail
  7. Meanwhile, each PodVolumeRestore is handled by the controller on the appropriate node, which:
    • has a hostPath volume mount of /var/lib/kubelet/pods to access the pod volume data
    • waits for the pod to be running the init container
    • finds the pod volume’s subdirectory within the above volume
    • runs restic restore
    • on success, writes a file into the pod volume, in a .velero subdirectory, whose name is the UID of the Velero restore that this pod volume restore is for
    • updates the status of the custom resource to Completed or Failed
  8. The init container that was added to the pod is running a process that waits until it finds a file within each restored volume, under .velero, whose name is the UID of the Velero restore being run
  9. Once all such files are found, the init container’s process terminates successfully and the pod moves on to running other init containers/the main containers.
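The wait performed by the init container (steps 8–9) can be sketched as a shell loop. This is an illustration of the idea, not Velero’s actual restic-wait binary; the restore UID and volume paths are assumptions, and the marker files are pre-created here (standing in for the PodVolumeRestore controller) so the example terminates:

```shell
RESTORE_UID="restore-uid-1234"
VOLUMES="/tmp/demo-vol-1 /tmp/demo-vol-2"   # stand-ins for the pod's restored volume mounts

# In a real restore, the controller writes these markers after each
# successful restic restore; we create them up front for the demo.
for vol in $VOLUMES; do
  mkdir -p "$vol/.velero"
  touch "$vol/.velero/$RESTORE_UID"
done

# The init container's job: block until every volume contains a marker
# file named with the current restore's UID, then exit successfully.
for vol in $VOLUMES; do
  until [ -f "$vol/.velero/$RESTORE_UID" ]; do
    sleep 1
  done
done
echo "all restic restores complete"
```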