The Ceph CSI team is excited to announce that it has reached a huge milestone with the release of v1.1.0!
https://github.com/ceph/ceph-csi/releases/tag/v1.1.0
Kudos to the Ceph CSI community for all the hard work to reach this critical milestone. This is our first official release (tracked at https://github.com/ceph/ceph-csi/issues/353), published on 12-Jul-2019. It is a huge release with many improvements to CSI-based volume provisioning, making use of the latest Ceph release (Nautilus) for production use with Kubernetes clusters. One of the main highlights of this release is CephFS subvolume-based volume provisioning and deletion.
Highlights of this release:
*) CephFS subvolume/manager-based volume provisioning and deletion.
*) E2E test support for PVC creation, app pod mounting, etc.
*) CSI spec v1.1 support.
*) Added support for Kubernetes v1.15.
*) Configuration store moved from ConfigMap to RADOS omap.
*) Mount options support for CephFS and RBD volumes.
*) Moved to more granular locking instead of CPU-count-based locks.
*) RBD support for ReadWriteMany PVCs in block mode.
*) Unified plugin code for the CephFS and RBD drivers.
*) Driver names updated to the CSI spec standard.
*) Helm chart updates.
*) Sidecar containers updated to the latest available versions.
*) RBAC corrections and aggregated role addition.
*) Lock protection for create/delete volume and other operations.
*) Added support for the external snapshotter.
*) Added support for the CSIDriver CRD.
*) Support matrix table availability.
*) Many linter fixes and error code fixes.
*) Removal of dead code paths.
*) StripSecretInArgs in pkg/util.
*) Migration from glog to klog.
……….
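Two of the highlights above, mount options support and ReadWriteMany block-mode RBD volumes, can be illustrated with Kubernetes manifests. This is a minimal sketch only: the StorageClass name, parameter values, and clusterID are placeholders that will differ per deployment.

```yaml
# Illustrative StorageClass for an RBD-backed class with mountOptions.
# All parameter values below are placeholders, not a reference deployment.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <cluster-id>   # placeholder: your Ceph cluster ID
  pool: rbd                 # placeholder: your RBD pool
mountOptions:
  - discard                 # example filesystem mount option
---
# PVC requesting a raw block volume; in this release RBD supports
# ReadWriteMany access for volumeMode: Block.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-pvc
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Block
  storageClassName: csi-rbd
  resources:
    requests:
      storage: 1Gi
```

Note that ReadWriteMany is only safe here because the volume is consumed as a raw block device; filesystem-mode RBD volumes remain single-writer.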
Many other bug fixes, code improvements, and README updates are also part of this release. The container image is tagged with “v1.1.0” and can be pulled with: docker pull quay.io/cephcsi/cephcsi:v1.1.0
We have also updated the support matrix to better communicate the available CSI features and their upstream status.
https://github.com/ceph/ceph-csi#Support-Matrix
We would like to thank the Rook team for unblocking the CSI project at various stages!
We are not stopping here, but are moving forward at a good pace toward our next feature-rich release, tracked at https://github.com/ceph/ceph-csi/issues/393. If you would like to see certain features or bug fixes in the next release, please help us by mentioning them in that release tracker.
We are also kickstarting an upstream bug triage call next week, so please be part of it. More details about this call are available at https://github.com/ceph/ceph-csi/issues/463
Happy Hacking!
PS/NOTE: This release needs the latest Ceph Nautilus cluster to support CephFS subvolume provisioning; a cluster at this version is made available if you deploy CSI with Rook master.