We are excited to announce the third major release of ceph-csi, v3.0.0!
The Ceph-CSI team has reached the next milestone with the release of v3.0.0 [1]. This release is not limited to features: many critical bug fixes and documentation updates are also included. It is another great release, with many improvements for running the Ceph and CSI integration in production on Kubernetes/OpenShift clusters. Of the many features and bug fixes, here are just a few highlights.
New Features:
Create/Delete snapshot for RBD
Create PVC from RBD snapshot
Create PVC from RBD PVC
Add support for multiple CephFS subvolume groups
Multi-architecture Docker images (amd64 and arm64)
Support ROX(ReadOnlyMany) PVC for RBD
Support ROX(ReadOnlyMany) PVC for CephFS
Enhancements:
Move from the RBD CLI to go-ceph bindings
Move from the RADOS CLI to go-ceph bindings
Add Upgrade E2E testing from 2.1.2 to 3.0.0
Update Sidecars to the latest version
Improve locking to create a parallel clone and snapshot restore
Simplify Error Handling
Update golangci-lint version in CI
Update gosec version in CI
Add support to track CephFS PVCs and subvolumes
Introduce build.env for configuration of the environment variables
Update go-ceph to v0.4.0
Update E2E testing to test with the latest Kubernetes versions
Split out CephFS and RBD E2E tests
Integration with CentOS CI to run containerized builds
Update Rook to 1.2.7 for E2E testing
Disable reflink when creating xfs filesystem for RBD
Replace klog with klog v2
Reduce RBAC for Kubernetes sidecar containers
Add option to compile E2E tests in a containerized environment
Add commitlint bot in CI
Add Stale bot to the repo
Add E2E and documentation for CephFS PVC
Update kubernetes dependency to v1.18.6
Bug Fixes:
Fix "volume not found" issue in CephFS
Breaking Changes
Remove support for v1.x.x PVC
Remove support for Mimic
Snapshot Alpha is no longer supported
Let's touch on some more of the very cool features introduced in this release:
Snapshot and clone functionality for RBD
Since the v1.0.0 release of Ceph-CSI we have had RBD snapshot support, but we marked it "Alpha" for a few reasons. One was that snapshot support in the upstream Kubernetes CSI layer was itself still evolving and stabilizing on the API side. Beyond that, we had an issue where images could lock up when trying to consume or untangle the parent source volume from the snapshot volume/object. We revamped this functionality in this version, and we are happy to say it is solved with this release. From now on, RBD snapshots should work smoothly.
It is capable of:
Creating a snapshot from an RBD volume
Restoring an existing snapshot to a new volume
Deleting snapshot and parent volume objects independently
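As a minimal sketch, taking a snapshot and restoring it to a new PVC looks like the following. Note that the `VolumeSnapshotClass` name (`csi-rbd-snapclass`), the `StorageClass` name (`csi-rbd-sc`), and the PVC names are assumptions for illustration, not fixed names; since Alpha snapshot support was removed, this uses the `v1beta1` snapshot API:

```yaml
# Take a snapshot of an existing RBD-backed PVC ("rbd-pvc" is assumed to exist).
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: rbd-pvc-snapshot
spec:
  volumeSnapshotClassName: csi-rbd-snapclass   # assumed snapshot class name
  source:
    persistentVolumeClaimName: rbd-pvc
---
# Restore the snapshot into a brand-new PVC via the dataSource field.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc-restore
spec:
  storageClassName: csi-rbd-sc                 # assumed RBD StorageClass name
  dataSource:
    name: rbd-pvc-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

Because snapshot and parent can now be deleted independently, removing `rbd-pvc` later does not invalidate `rbd-pvc-restore`.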
We also enabled another cool piece of functionality here with the volume cloning feature: the ability to provision a volume from another, existing PVC source.
Don't confuse this with restoring a snapshot.
The main difference lies in the new PVC's `dataSource`: when you restore from a snapshot, the referenced `dataSource` is a VolumeSnapshot, while in a clone operation the `dataSource` is an existing PVC.
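A sketch of a clone request, assuming an existing source PVC named `rbd-pvc` and an RBD `StorageClass` named `csi-rbd-sc` (both illustrative names):

```yaml
# Clone an existing PVC: the dataSource kind is PersistentVolumeClaim,
# not VolumeSnapshot, and no apiGroup is needed (it is a core resource).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc-clone
spec:
  storageClassName: csi-rbd-sc   # assumed RBD StorageClass name
  dataSource:
    name: rbd-pvc                # assumed existing source PVC
    kind: PersistentVolumeClaim
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```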
Multiple subvolumegroup support
When the CephFS CSI driver creates a volume, it defaults to a subvolume group called "csi" and places its subvolumes in this group, that is, the backend CephFS volumes that map to PVCs in a Kubernetes or OpenShift namespace. With this release, you are allowed to specify multiple subvolume groups! This comes in handy in setups where you want to segregate subvolumes for various purposes.
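The subvolume group is selected per cluster entry in the ceph-csi ConfigMap; a sketch, where the cluster ID, monitor address, and group name are all hypothetical placeholders:

```yaml
# Sketch of a ceph-csi ConfigMap with two cluster entries pointing at the
# same Ceph cluster but different CephFS subvolume groups. A StorageClass
# then picks a group by referencing the corresponding clusterID.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ceph-csi-config
data:
  config.json: |-
    [
      {
        "clusterID": "cluster-group-a",
        "monitors": ["192.168.1.1:6789"],
        "cephFS": { "subvolumeGroup": "group-a" }
      },
      {
        "clusterID": "cluster-group-b",
        "monitors": ["192.168.1.1:6789"],
        "cephFS": { "subvolumeGroup": "group-b" }
      }
    ]
```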
ROX support for both RBD and CephFS
ROX (ReadOnlyMany) is an access mode you can specify when requesting a PVC from Kubernetes/OpenShift. For an end user it means the workload gets a read-only share. In a plain dynamic-provisioning workflow this does not make much sense: you request a volume of size "X", the backend driver provisions it from the storage cluster, but it is an "empty" volume, and if you attach it to a workload as read-only it stays empty because you can never write data to it! So the question was: what is the use of ROX volumes?
BUT, with the snapshot and clone functionality there is a big use case behind it. Think about a scenario where you have a "VM template" that you want to consume as a read-only image! For such use cases ROX support adds a lot of value, and here we are with that support!
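Putting the two together, a hedged sketch of the VM-template scenario: clone a populated template PVC into a read-only PVC (the PVC and `StorageClass` names are assumptions for illustration):

```yaml
# Provision a read-only copy of a pre-populated "template" PVC.
# Because the data comes from the clone source, the ROX volume is not empty.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vm-template-rox
spec:
  storageClassName: csi-rbd-sc    # assumed RBD StorageClass name
  dataSource:
    name: vm-template-pvc         # assumed existing, populated template PVC
    kind: PersistentVolumeClaim
  accessModes:
    - ReadOnlyMany                # many pods may mount it, none may write
  resources:
    requests:
      storage: 10Gi
```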
Multi-arch image support for Ceph-CSI
Our community users want multi-arch images for `amd64`, `arm64`, etc. These were indeed available for the last few versions, but the manifest files did not reference them properly. This is corrected in this release, and the images are available on `quay.io`.
Updated sidecars & Kubernetes dependency bump to 1.18.6
We have upgraded the CSI community-provided sidecars to their latest versions and brought the Kubernetes dependency chain up to `1.18.6`. This alone brings many bug fixes, improvements, and features; far too many to list here!
Performance improvements, especially in the go-ceph bindings
We kept improving the performance of the CSI driver, especially its connection with the backend cluster, by making use of the latest go-ceph version (v0.4.0). We have seen great improvements in the backend connection workflow and in the overall life cycle of volume management, so it is worth a mention here!
Code cleanup, better E2E, and more
A great amount of code cleanup, E2E improvements, documentation updates, and more are also part of this release!
The container image is tagged `v3.0.0` and can be pulled with `docker pull quay.io/cephcsi/cephcsi:v3.0.0`.
Kudos to the Ceph CSI community for all the hard work to reach this critical milestone!
The Ceph-CSI project (https://github.com/ceph/ceph-csi/), as well as its thriving community, has continued to grow, and we are happy to share that this is our 11th release since Jul 12, 2019!
We are not stopping here; we are marching towards v3.1.0 (https://github.com/ceph/ceph-csi/issues/1272) with more feature enhancements, as tracked in the release issue.
One of the most important features we are targeting for the v3.1.0 release is CephFS snapshot and clone functionality. Watch this space for more updates!
Reach us on Slack (https://cephcsi.slack.com) or on GitHub: https://github.com/ceph/ceph-csi/
Happy Hacking!
[1]
Release Issue: https://github.com/ceph/ceph-csi/issues/865
ceph-csi v3.0.0 tag: https://github.com/ceph/ceph-csi/releases/tag/v3.0.0
Release Images: https://quay.io/repository/cephcsi/cephcsi?tab=tags
—
Copyright secured by Digiprove © 2020 Humble Chirammal