This feature has been supported in the Ceph CSI driver for a while now, and many folks are making use of it. It is a great feature and a useful one in many situations. For example, think about a situation where we created a PVC and attached it to an application pod, be it a database pod, a monitoring pod, or any other application pod, which eventually consumed the entire space of the attached PVC.
Deleting data from this volume is often NOT an option at all. If deletion is not an option, what other options do we have? Migrating the entire data set to a new, bigger PVC and attaching it to the same pod/workload, and repeating this every time we run out of space? That also requires bringing down the workload or pod to attach the new PVC. And planning upfront how much space the application is going to consume may not always work. But the expansion feature comes to the rescue! Without disturbing the workload or the attached PVC/share/volume, just expand the PVC with a single command and that's it! The new volume size is reflected in your pod without any delay, even while the pod is consuming the storage and writing data to it! Awesome, isn't it? That's what Ceph volumes provisioned by the Ceph CSI driver are capable of!
Before we start, I would like to mention some prerequisites.
1) You should have the csi-resizer sidecar running in your setup. (This is the default when you deploy via Rook or using the ceph-csi templates.)
2) The StorageClass you use to provision volumes should have a few parameters set in it. (These are also present if you deploy with Rook or the Ceph CSI templates.)
For example:
allowVolumeExpansion: true
csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
Note that allowVolumeExpansion is a top-level StorageClass field, while the csi.storage.k8s.io entries go under the StorageClass parameters.
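Putting these together, a minimal CephFS StorageClass might look like the following sketch. The secret name and namespace match the Rook defaults mentioned above; the clusterID, fsName, and metadata name are hypothetical and depend on your deployment:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs                          # hypothetical name
provisioner: rook-ceph.cephfs.csi.ceph.com   # CephFS CSI driver deployed by Rook
parameters:
  clusterID: rook-ceph                       # hypothetical; depends on your setup
  fsName: myfs                               # hypothetical filesystem name
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  # These two let the external-resizer authenticate with Ceph during expansion:
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
reclaimPolicy: Delete
allowVolumeExpansion: true                   # top-level field, not a parameter
```

The controller-expand secret entries are what the csi-resizer sidecar uses when it issues the expand request to the driver.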
Rather than writing out and explaining everything in this article, let me show you a demo of the volume expansion feature in Ceph CSI. We have two screencasts, one for CephFS and one for RBD.
CephFS volume expansion Demo:
RBD volume expansion Demo:
To summarize, both CephFS and RBD volumes can be expanded online, and the Ceph CSI driver is capable of doing it!
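As the demos show, the expansion itself is a one-liner: raise spec.resources.requests.storage on the existing PVC. A sketch, assuming a hypothetical PVC named data-pvc:

```shell
# Request a larger size on the live PVC (PVC name and size are hypothetical)
kubectl patch pvc data-pvc \
  -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'

# Watch the resize progress; status.capacity shows the new size once
# the controller (and, for filesystem volumes, the kubelet) finish the expand
kubectl get pvc data-pvc -w
```

You can also simply `kubectl edit pvc data-pvc` and change the storage request by hand; either way the workload keeps running throughout.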
At times, due to some reason, the resize can fail. Here is how to recover:
Recovering from Failure when Expanding Volumes
If expanding the underlying storage fails, the cluster administrator can manually recover the PersistentVolumeClaim (PVC) state and cancel the resize request. Otherwise, the resize request is continuously retried by the controller without administrator intervention.
1) Mark the PersistentVolume (PV) that is bound to the PersistentVolumeClaim (PVC) with the Retain reclaim policy.
2) Delete the PVC. Since the PV has the Retain reclaim policy, we will not lose any data when we recreate the PVC.
3) Delete the claimRef entry from the PV spec, so that a new PVC can bind to it. This should make the PV Available.
4) Re-create the PVC with a smaller size than the PV and set the volumeName field of the PVC to the name of the PV. This should bind the new PVC to the existing PV.
5) Don't forget to restore the reclaim policy of the PV.
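The recovery steps above can be sketched as kubectl commands. All PV/PVC names here are hypothetical placeholders for your own objects:

```shell
# 1) Protect the data: switch the PV's reclaim policy to Retain
kubectl patch pv pvc-aaaa-bbbb \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

# 2) Delete the stuck PVC; the PV (and the data) survive because of Retain
kubectl delete pvc data-pvc

# 3) Drop the claimRef so the PV becomes Available again
kubectl patch pv pvc-aaaa-bbbb --type json \
  -p '[{"op":"remove","path":"/spec/claimRef"}]'

# 4) Re-create the PVC with the original (smaller) size, pinning it to the PV
#    by setting spec.volumeName: pvc-aaaa-bbbb in the PVC manifest, then:
kubectl apply -f data-pvc.yaml

# 5) Restore the original reclaim policy on the PV
kubectl patch pv pvc-aaaa-bbbb \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
```

Step 4 assumes you re-apply a PVC manifest (data-pvc.yaml here) whose storage request matches the pre-resize size.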
I am preparing a demo of RBD volume expansion for the next article in this space, so watch out for it!
Copyright secured by Digiprove © 2020 Humble Chirammal