This guide walks through setting up a three-node MicroCeph cluster, mounting CephFS shares, integrating with MicroK8s using the rook-ceph addon, and automating CephFS snapshot creation and cleanup with Kubernetes CronJobs.
Prerequisites
- A three-node MicroK8s cluster (used in the integration steps later in the guide)
- Three Ubuntu nodes (physical or virtual) with SSH access
- Each node requires:
- At least 2 CPU cores, 4 GB RAM, and 20 GB free disk space
- Ubuntu 20.04 LTS or later
- Network connectivity between nodes
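A quick way to confirm a node meets these requirements with standard Ubuntu tooling (adjust the df path to whichever disk MicroCeph will use):
nproc             # CPU cores: 2 or more
free -h           # total memory: 4 GB or more
df -h /           # free disk space: 20 GB or more on the target disk
lsb_release -ds   # Ubuntu release: 20.04 LTS or later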
Installing MicroCeph on a Three-Node Cluster
1. Prepare the Nodes
sudo apt update && sudo apt upgrade -y
sudo apt install snapd -y
2. Install MicroCeph
sudo snap install microceph
3. Bootstrap the First Node
sudo microceph cluster bootstrap
sudo microceph.ceph status
4. Add Additional Nodes
On the first node, generate a join token for the new node:
sudo microceph cluster add <node2>
Copy the token that the command prints. Then, on the joining node:
sudo microceph cluster join <token>
Repeat for each node.
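If you prefer to script the join, the token printed by cluster add can be captured and pushed to the new node over SSH. A minimal sketch, assuming the first node can reach the others by the hostnames node2 and node3 and that passwordless sudo is configured:
# Run on the first node; node2 and node3 are placeholder hostnames
for NODE in node2 node3; do
  TOKEN=$(sudo microceph cluster add "$NODE")
  ssh "$NODE" "sudo microceph cluster join $TOKEN"
done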
5. Configure Storage
Identify available disks:
sudo microceph disk list
Add disks as OSDs:
sudo microceph disk add /dev/sdX --wipe
Repeat on each node.
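To add several disks in one go, a simple loop over the device names works (here /dev/sdb and /dev/sdc are placeholders; --wipe destroys any existing data on the disk):
for DISK in /dev/sdb /dev/sdc; do
  sudo microceph disk add "$DISK" --wipe
done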
6. Verify the Cluster
sudo microceph.ceph status
Ensure all three nodes are listed and healthy.
Mounting MicroCeph-Backed CephFS Shares
1. Create Data and Metadata Pools
The ceph commands below assume the bare ceph client is on your PATH; if it is not, substitute microceph.ceph as in the earlier steps.
sudo ceph osd pool create cephfs_meta
sudo ceph osd pool create cephfs_data
2. Create CephFS Share
sudo ceph fs new newFs cephfs_meta cephfs_data
sudo ceph fs ls
Expected output:
name: newFs, metadata pool: cephfs_meta, data pools: [cephfs_data]
3. Link Configuration Files
Link the MicroCeph-generated configuration and keyring into /etc/ceph so that standard Ceph tooling can find them (create /etc/ceph first if it does not exist):
sudo mkdir -p /etc/ceph
cd /var/snap/microceph/current/conf
sudo ln -s $(pwd)/ceph.conf /etc/ceph/ceph.conf
sudo ln -s $(pwd)/ceph.keyring /etc/ceph/ceph.keyring
Verify:
ls -l /etc/ceph/
Expected symlinks:
ceph.conf -> /var/snap/microceph/current/conf/ceph.conf
ceph.keyring -> /var/snap/microceph/current/conf/ceph.keyring
4. Mount the Filesystem
The CephFS mount helper (mount.ceph) ships with the ceph-common package; install it if it is not already present:
sudo apt install ceph-common -y
sudo mkdir /mnt/mycephfs
sudo mount -t ceph :/ /mnt/mycephfs/ -o name=admin,fs=newFs
df -h /mnt/mycephfs
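As a quick sanity check, write a file through the mount and read it back (the filename is arbitrary):
echo "hello from cephfs" | sudo tee /mnt/mycephfs/test.txt
cat /mnt/mycephfs/test.txt
ls -l /mnt/mycephfs/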
Integrating MicroCeph with MicroK8s
1. Enable Rook-Ceph
microk8s enable rook-ceph
2. Connect to External MicroCeph Cluster
microk8s connect-external-ceph
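Before creating a PVC, it is worth confirming which storage classes the add-on created (names can vary between MicroK8s and rook-ceph versions; the claim below assumes one named cephfs):
kubectl get storageclass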
3. Create a PVC
cephfs-pvc.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: cephfs-pvc
namespace: default
spec:
storageClassName: cephfs
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
Apply it:
kubectl apply -f cephfs-pvc.yaml
kubectl get pvc
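To confirm the bound claim is actually usable, you can run a throwaway pod that mounts it and writes a file. This is a minimal sketch; the busybox image and the pod name cephfs-test-pod are arbitrary choices, not part of the add-on:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cephfs-test-pod
  namespace: default
spec:
  containers:
  - name: writer
    image: busybox:latest
    command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: cephfs-pvc
EOF
kubectl wait --for=condition=Ready pod/cephfs-test-pod --timeout=120s
kubectl exec cephfs-test-pod -- cat /data/hello.txt
kubectl delete pod cephfs-test-pod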
Managing CephFS Snapshots
1. Install Snapshot Controller
git clone https://github.com/kubernetes-csi/external-snapshotter.git
cd external-snapshotter
kubectl kustomize client/config/crd | kubectl create -f -
kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f -
kubectl kustomize deploy/kubernetes/csi-snapshotter | kubectl create -f -
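Before moving on, verify that the snapshot CRDs are registered and the controller pod is running (pod names assume the default manifests from the repository):
kubectl get crd | grep volumesnapshot
kubectl -n kube-system get pods | grep snapshot-controller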
2. Create VolumeSnapshotClass
snapshotclass.yaml:
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
name: csi-cephfsplugin-snapclass
driver: rook-ceph.cephfs.csi.ceph.com
parameters:
clusterID: rook-ceph-external
csi.storage.k8s.io/snapshotter-secret-name: rook-csi-cephfs-provisioner
csi.storage.k8s.io/snapshotter-secret-namespace: rook-ceph-external
deletionPolicy: Delete
kubectl apply -f snapshotclass.yaml
3. Take a Snapshot
snapshot.yaml:
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
name: cephfs-pvc-snapshot
namespace: default
spec:
volumeSnapshotClassName: csi-cephfsplugin-snapclass
source:
persistentVolumeClaimName: cephfs-pvc
kubectl apply -f snapshot.yaml
Verify CephFS Snapshot Creation
kubectl get volumesnapshotclass
kubectl get volumesnapshot
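A snapshot is complete once its readyToUse field reports true; you can query it directly:
kubectl get volumesnapshot cephfs-pvc-snapshot -n default -o jsonpath='{.status.readyToUse}{"\n"}'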
4. Restore a Snapshot
restore-pvc.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: rook-pvc-restore
spec:
storageClassName: cephfs
dataSource:
name: cephfs-pvc-snapshot
kind: VolumeSnapshot
apiGroup: snapshot.storage.k8s.io
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
kubectl apply -f restore-pvc.yaml
Verify CephFS Restore PVC Creation
kubectl get pvc
Automate Snapshots with a CronJob
1. RBAC Configuration
snapshot-rbac.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
name: snapshot-manager
namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: snapshot-role
namespace: default
rules:
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshots"]
verbs: ["create", "list", "get", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: snapshot-rolebinding
namespace: default
subjects:
- kind: ServiceAccount
name: snapshot-manager
namespace: default
roleRef:
kind: Role
name: snapshot-role
apiGroup: rbac.authorization.k8s.io
kubectl apply -f snapshot-rbac.yaml
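You can confirm the service account received the permissions the CronJobs need by impersonating it with kubectl auth can-i:
kubectl auth can-i create volumesnapshots.snapshot.storage.k8s.io \
  --as=system:serviceaccount:default:snapshot-manager -n default
kubectl auth can-i delete volumesnapshots.snapshot.storage.k8s.io \
  --as=system:serviceaccount:default:snapshot-manager -n default
Both commands should print yes.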
2. Snapshot CronJob
snapshot-cronjob.yaml:
apiVersion: batch/v1
kind: CronJob
metadata:
name: rook-pvc-snapshot-job
namespace: default
spec:
schedule: "0 1 * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: snapshot-creator
image: bitnami/kubectl:latest
command:
- /bin/sh
- -c
- |
SNAPSHOT_NAME="rook-pvc-snapshot-$(date +%Y%m%d%H%M%S)"
cat <<EOF | kubectl apply -f -
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
name: $SNAPSHOT_NAME
namespace: default
spec:
volumeSnapshotClassName: csi-cephfsplugin-snapclass
source:
persistentVolumeClaimName: cephfs-pvc
EOF
restartPolicy: OnFailure
serviceAccountName: snapshot-manager
kubectl apply -f snapshot-cronjob.yaml
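Rather than waiting for the 01:00 schedule, you can trigger a one-off run from the CronJob to test it (the job name manual-snapshot-test is arbitrary):
kubectl create job --from=cronjob/rook-pvc-snapshot-job manual-snapshot-test -n default
kubectl get jobs -n default
kubectl get volumesnapshot -n default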
3. Cleanup CronJob
cleanup-cronjob.yaml:
apiVersion: batch/v1
kind: CronJob
metadata:
name: cleanup-rook-pvc-snapshots
namespace: default
spec:
schedule: "30 1 * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: cleanup
image: bitnami/kubectl:latest
command:
- /bin/sh
- -c
- |
kubectl get volumesnapshot -n default -o json | \
jq '.items
| map(select(.metadata.name | test("^rook-pvc-snapshot-")))
| sort_by(.metadata.creationTimestamp)
| .[:-5]
| .[].metadata.name' -r | \
xargs -r -n1 kubectl delete volumesnapshot -n default
restartPolicy: OnFailure
serviceAccountName: snapshot-manager
kubectl apply -f cleanup-cronjob.yaml
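To preview what the cleanup job would delete without removing anything, run the same pipeline locally with the delete step replaced by echo (this assumes jq is installed on your workstation):
kubectl get volumesnapshot -n default -o json | \
  jq -r '.items
    | map(select(.metadata.name | test("^rook-pvc-snapshot-")))
    | sort_by(.metadata.creationTimestamp)
    | .[:-5]
    | .[].metadata.name' | \
  xargs -r -n1 echo "would delete:"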
Conclusion
You’ve now deployed a highly available MicroCeph cluster, mounted CephFS shares, integrated it with MicroK8s using Rook-Ceph, and automated snapshot management with Kubernetes CronJobs.