Volume Snapshots & Restore with the CSI NFS Driver in Kubernetes

Last updated on August 7, 2024

Managing persistent storage in Kubernetes is crucial for stateful applications. The CSI (Container Storage Interface) NFS driver allows Kubernetes to utilize NFS (Network File System) for persistent storage. This guide will walk you through installing the CSI NFS driver, setting up an NFS share, creating snapshots, and restoring volumes in Kubernetes.

Step 1: Install the CSI NFS Driver

First, install the CSI NFS driver using Helm.

helm repo add csi-driver-nfs https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts
helm repo update
helm install csi-driver-nfs csi-driver-nfs/csi-driver-nfs
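To confirm the installation, look for the controller and node pods the chart creates. If you passed a namespace to helm install they will live there; otherwise they land in your current namespace, so a namespace-wide search is the simplest check (pod names may vary slightly between chart versions):

kubectl get pods --all-namespaces | grep csi-nfs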

Step 2: Set Up NFS Share

If you don’t already have an NFS server set up, follow these steps to configure one.

Install NFS server packages:

sudo apt update 
sudo apt install nfs-kernel-server

Create the NFS export directory:

sudo mkdir -p /srv/nfs 
sudo chown nobody:nogroup /srv/nfs 
sudo chmod 777 /srv/nfs

Configure NFS exports: Edit the /etc/exports file and add the following line:

/srv/nfs *(rw,sync,no_subtree_check,no_root_squash)

Export the NFS shares:

sudo exportfs -a

Start and enable the NFS server:

sudo systemctl restart nfs-kernel-server 
sudo systemctl enable nfs-kernel-server
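To confirm the export is active, list it on the server with exportfs, or query it from a client node with showmount (typically provided by the nfs-common package on Debian/Ubuntu). Replace <nfs-server-ip> with your server's address:

sudo exportfs -v
showmount -e <nfs-server-ip>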

Step 3: Create a StorageClass

Create a StorageClass for the NFS CSI driver. This StorageClass defines how volumes are provisioned.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
  server: ip_address
  share: /srv/nfs
reclaimPolicy: Delete
volumeBindingMode: Immediate

Replace ip_address with the IP address or hostname of your NFS server, then save the manifest as nfs-storageclass.yaml and apply it with kubectl:

kubectl apply -f nfs-storageclass.yaml
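As a quick sanity check of dynamic provisioning, you can create a throwaway PVC against the new StorageClass and confirm it becomes Bound. The name test-nfs-pvc below is arbitrary, and the PVC is deleted again afterwards:

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nfs-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs-csi
EOF

kubectl get pvc test-nfs-pvc
kubectl delete pvc test-nfs-pvc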

Step 4: Install Snapshot CRDs

Install the necessary Custom Resource Definitions (CRDs) for snapshots.

kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v6.0.1/client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml

kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v6.0.1/client/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml

kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v6.0.1/client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml
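Verify that all three CRDs are registered:

kubectl get crd | grep snapshot.storage.k8s.io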

Step 5: Deploy Snapshot Controller

Deploy the snapshot controller and its RBAC configuration.

kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v6.0.1/deploy/kubernetes/snapshot-controller/rbac-snapshot-controller.yaml

kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v6.0.1/deploy/kubernetes/snapshot-controller/setup-snapshot-controller.yaml
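The upstream manifests deploy the controller into the kube-system namespace; confirm its pod is running:

kubectl get pods -n kube-system | grep snapshot-controller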

Step 6: Create VolumeSnapshotClass

Create a VolumeSnapshotClass to define how snapshots are created and managed.

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: nfs-snapshot-class
driver: nfs.csi.k8s.io
deletionPolicy: Delete

Apply the VolumeSnapshotClass:

kubectl apply -f nfs-volumesnapshotclass.yaml
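Before automating snapshots, it is worth creating one by hand and checking that it becomes ready. The example below assumes a PVC named data-volume-0 already exists in the default namespace; the test snapshot is removed again afterwards. The READYTOUSE column should report true once the snapshot has been taken:

kubectl apply -f - <<EOF
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: test-snapshot
spec:
  volumeSnapshotClassName: nfs-snapshot-class
  source:
    persistentVolumeClaimName: data-volume-0
EOF

kubectl get volumesnapshot test-snapshot
kubectl delete volumesnapshot test-snapshot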

Step 7: Set Up RBAC for Snapshots

Configure RBAC so that the default ServiceAccount in the default namespace, which the snapshot CronJob in the next step runs under, has permission to create and read VolumeSnapshots.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: snapshot-role
  namespace: default
rules:
- apiGroups: ["snapshot.storage.k8s.io"]
  resources: ["volumesnapshots"]
  verbs: ["create", "list", "get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: snapshot-rolebinding
  namespace: default
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  kind: Role
  name: snapshot-role
  apiGroup: rbac.authorization.k8s.io

Apply the RBAC configuration:

kubectl apply -f snapshot-rbac.yaml
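You can verify the binding with kubectl auth can-i, impersonating the default ServiceAccount:

kubectl auth can-i create volumesnapshots.snapshot.storage.k8s.io --as=system:serviceaccount:default:default -n default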

Step 8: Automate Snapshot Creation with CronJob

Create a CronJob that takes a snapshot of each application PVC daily at midnight. This example assumes PVCs named data-volume-0, data-volume-1, and data-volume-2 in the default namespace; adjust the names to match your workload.

apiVersion: batch/v1
kind: CronJob
metadata:
  name: snapshot-cronjob
spec:
  schedule: "0 0 * * *"  # This schedule runs the job daily at midnight.
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: create-snapshot
            image: bitnami/kubectl:latest
            command: ["/bin/sh", "-c"]
            args:
            - |
              TIMESTAMP=$(date "+%Y-%m-%d-%H-%M-%S")
              kubectl apply -f - <<EOF
              apiVersion: snapshot.storage.k8s.io/v1
              kind: VolumeSnapshot
              metadata:
                name: snapshot-data-volume-0-$TIMESTAMP
              spec:
                volumeSnapshotClassName: nfs-snapshot-class
                source:
                  persistentVolumeClaimName: data-volume-0
              ---
              apiVersion: snapshot.storage.k8s.io/v1
              kind: VolumeSnapshot
              metadata:
                name: snapshot-data-volume-1-$TIMESTAMP
              spec:
                volumeSnapshotClassName: nfs-snapshot-class
                source:
                  persistentVolumeClaimName: data-volume-1
              ---
              apiVersion: snapshot.storage.k8s.io/v1
              kind: VolumeSnapshot
              metadata:
                name: snapshot-data-volume-2-$TIMESTAMP
              spec:
                volumeSnapshotClassName: nfs-snapshot-class
                source:
                  persistentVolumeClaimName: data-volume-2
              EOF
          restartPolicy: OnFailure

Apply the CronJob:

kubectl apply -f snapshot-cronjob.yaml
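Rather than waiting for the schedule, you can trigger a one-off run from the CronJob and then list the snapshots it created (the job name snapshot-manual-run is arbitrary):

kubectl create job --from=cronjob/snapshot-cronjob snapshot-manual-run
kubectl get volumesnapshots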

Step 9: Restore Snapshots

Create new PVCs that reference the snapshots as their data source. Note that a PVC's spec is immutable, so if PVCs with these names still exist you must delete them (or choose new names) before applying the restore manifests.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-volume-0
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  storageClassName: nfs-csi
  dataSource:
    name: snapshot-data-volume-0-<TIMESTAMP>
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-volume-1
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  storageClassName: nfs-csi
  dataSource:
    name: snapshot-data-volume-1-<TIMESTAMP>
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-volume-2
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  storageClassName: nfs-csi
  dataSource:
    name: snapshot-data-volume-2-<TIMESTAMP>
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io

Replace <TIMESTAMP> with the actual timestamp of your snapshots.

Apply the PVCs:

kubectl apply -f restore-pvcs.yaml
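The source snapshots should report READYTOUSE as true, and once the restore manifests are applied the new PVCs should reach the Bound state:

kubectl get volumesnapshot
kubectl get pvc data-volume-0 data-volume-1 data-volume-2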

By following these steps, you can efficiently manage snapshots and restore volumes using the CSI NFS driver in your Kubernetes environment. This capability enhances your ability to handle backups and disaster recovery for stateful applications.
