For a while now, I’ve been using `csi-proxmox` to provision Kubernetes PersistentVolumeClaims (PVCs) using virtual disks on my Proxmox nodes. It’s a solid setup for basic storage needs, especially if you're already deep in the Proxmox ecosystem, but I wanted more flexibility as my homelab grew.
Specifically, I was looking for a more flexible way to handle persistent storage for my workloads.
That’s when I started experimenting with CephFS. After getting it working inside my Proxmox cluster, I integrated it with Kubernetes using the `csi-cephfs` driver, and now I’m gradually migrating my workloads over to it. I’m starting off with a single-node cluster (I know, I know, this is not the best way to handle it), but it gets me started on the process, and I’ll be moving towards a real quorum soon!
Let me walk you through what that looks like, with a real example of how I migrated data from a `csi-proxmox` volume to a `csi-cephfs` volume using a simple pod-based approach.
In short, I’m moving my PVCs to CephFS for the flexibility: volumes that can be shared across workloads rather than living on a single Proxmox virtual disk.
That being said, most of my workloads (especially StatefulSets) still had PVCs tied to the old `csi-proxmox` StorageClass. I needed a way to safely migrate that data.
Here’s the overall flow I followed for each PVC migration:

1. Create a new CephFS-backed PVC
2. Spin up a temporary pod that mounts both the old and new PVCs
3. Use `rsync` inside the pod to copy the data
4. Point the workload at the new PVC
5. Clean up the migration pod and the old PVC

Let’s break that down with a real example.
I have the Tautulli app running in Kubernetes, and its persistent data was on a `csi-proxmox`-backed PVC. I wanted to move this to CephFS without losing anything or dealing with complicated downtime.
First, I created a new PVC using my CephFS-backed StorageClass. Here's a sample YAML:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tautulli-cephfs
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: csi-cephfs-sc
```
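Like the rest of my manifests, this went through Git and ArgoCD, but a direct apply works just as well. Either way, it's worth checking that the claim binds (depending on the StorageClass's volume binding mode, it may sit in Pending until a pod actually mounts it). The filename here is just illustrative:

```bash
kubectl apply -f tautulli-cephfs-pvc.yaml   # or let ArgoCD sync it
kubectl get pvc tautulli-cephfs             # STATUS should be Bound (or Pending until first use)
```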
To copy data between PVCs, I created a pod that mounts both volumes and gives me an interactive shell:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvc-migrator
spec:
  containers:
    - name: migrator
      image: ubuntu:22.04
      command: ["/bin/bash", "-c", "sleep infinity"]
      volumeMounts:
        - name: old-pvc
          mountPath: /mnt/old
        - name: new-pvc
          mountPath: /mnt/new
  volumes:
    - name: old-pvc
      persistentVolumeClaim:
        claimName: tautulli-data
    - name: new-pvc
      persistentVolumeClaim:
        claimName: tautulli-cephfs
  restartPolicy: Never
```
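Getting the pod running is a plain apply (in my case, a Git push that ArgoCD synced, as I note further down). A rough sketch, with an illustrative filename:

```bash
kubectl apply -f pvc-migrator.yaml
kubectl wait --for=condition=Ready pod/pvc-migrator --timeout=120s
```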
Then I connected to it:
```bash
kubectl exec -it pvc-migrator -- bash
```
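One heads-up: the stock `ubuntu:22.04` image doesn't ship with `rsync`, so you'll most likely need to install it inside the container first:

```bash
apt-get update && apt-get install -y rsync
```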
And inside the container, I ran:
```bash
rsync -a /mnt/old/ /mnt/new/
```
This preserved file permissions, symlinks, and ownership during the migration.
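Before switching anything over, it's worth a quick sanity check that the two trees actually match. Something along these lines, run inside the same pod, does the job:

```bash
# A second dry-run pass should report nothing left to transfer
rsync -a --dry-run --itemize-changes /mnt/old/ /mnt/new/

# Rough size comparison as a secondary check
du -sh /mnt/old /mnt/new
```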
After confirming the data was in place, I updated the Deployment to use `tautulli-cephfs` instead of `tautulli-data`. In my case, this meant updating `persistence.volumeClaimName` inside my Helm values and pushing that change through ArgoCD.
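For reference, the change in my values file looked roughly like this; the exact key is chart-specific (some charts call it `existingClaim` instead), so treat this as a sketch:

```yaml
# Helm values (chart-specific)
persistence:
  volumeClaimName: tautulli-cephfs   # previously: tautulli-data
```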
Once I verified everything was working as expected, I deleted the migration pod:
```bash
kubectl delete pod pvc-migrator
```
Then I cleaned up the old PVC and its corresponding Proxmox disk.
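The Kubernetes side of that cleanup is a single delete. Whether the underlying Proxmox disk disappears with it depends on the PV's reclaim policy, so it may also need to be removed manually on the Proxmox side:

```bash
kubectl delete pvc tautulli-data
# If the PV's reclaimPolicy is Retain, the PV (and its Proxmox disk) sticks around:
kubectl get pv | grep tautulli
```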
At one point, I accidentally deleted a PVC without deleting its associated PV. When I tried to recreate it, I got this error:
```
volume "pv-tautulli" already bound to a different claim
```
The fix: manually edit the PV and update the UID in its `claimRef` to match the new PVC's UID, like so:

```bash
kubectl edit pv pv-tautulli
```
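For reference, the piece to touch is the PV's `claimRef`. I looked up the UID of the freshly recreated PVC and pasted it in; the names and UID below are placeholders:

```yaml
# Get the PVC's UID first:
#   kubectl get pvc <recreated-pvc> -n <namespace> -o jsonpath='{.metadata.uid}'
# Then, inside `kubectl edit pv pv-tautulli`, point the claimRef at it:
spec:
  claimRef:
    name: <recreated-pvc>
    namespace: <namespace>
    uid: <uid-from-the-command-above>
```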
Then delete or reapply your PVC. Problem solved.
One small thing I'll note that was different for me: instead of applying and deleting the pods manually, I continued to push them through Git and used ArgoCD to sync the changes. I did this mostly for historical purposes, so I could remember how I did the full process and in what order, but either approach works great here.
Migrating from `csi-proxmox` to `csi-cephfs` has been a surprisingly smooth process once I nailed down this pattern. Using a simple data-copy pod made it safe and predictable, and CephFS has been a huge upgrade in flexibility.
I still have a few more workloads to migrate, but so far I’m really happy with how this is turning out.
If you’re managing your own Proxmox + Kubernetes setup and want more modern storage workflows, I definitely recommend giving CephFS a look.
You can follow my ongoing homelab experiments on GitHub, or keep an eye on my blog repo for future updates. I'm publishing all my blog posts in Markdown using Nuxt Content, so feel free to browse the source as well.