[Alpha] GlusterFS CSI (Container Storage Interface) Driver for Container Orchestrators!

Every container or cloud storage vendor wants a standard interface that enables unbiased solution development without requiring a non-trivial testing matrix.

"Container Storage Interface" (CSI) is a proposed new industry standard for cluster-wide volume plugins. CSI will enable storage providers (SP) to develop a plugin once and have it work across a number of container orchestration (CO) systems.

The latest Kubernetes release, 1.9, has rolled out an alpha implementation of the Container Storage Interface (CSI), which makes installing new volume plugins as easy as deploying a pod. It also enables third-party storage providers to develop solutions without needing to add to the core Kubernetes codebase.

This blog is about the GlusterFS CSI driver, which can create and delete volumes dynamically and mount and unmount them whenever there is a request. I will explain the deployment parts later. For now, I have compiled the driver (https://github.com/humblec/drivers/commit/452e76c623c96b7222599ea94bb7e809f03b156c) and made my Kubernetes deployment ready for the GlusterFS CSI driver.

What I have:

*) A Kubernetes cluster with the required feature gates enabled
*) Running CSI helpers for Kubernetes
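As a sketch of the first prerequisite: in the Kubernetes 1.9 alpha, CSI support sat behind feature gates that had to be switched on explicitly. The gate names below are from that release and are an illustration only; verify them against your Kubernetes version before use.

```shell
# Enable alpha CSI support (Kubernetes 1.9); pass these flags to the
# kube-apiserver, kube-controller-manager, and kubelet as appropriate.
--feature-gates=CSIPersistentVolume=true,MountPropagation=true
```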

To demonstrate how the GlusterFS CSI driver works, let us follow the usual workflow of dynamically provisioned PVs, starting from the creation of a storage class. Please note the `provisioner` field in the storage class file below: it points to the GlusterFS CSI plugin.

[terminal]
[root@localhost cluster]# cat csi-sc.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfscsi
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: csi-glusterfsplugin
[root@localhost cluster]#
[/terminal]
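The storage class can then be created and checked like any other Kubernetes object (a sketch; adjust the `kubectl` invocation to your setup):

```shell
# Create the storage class and confirm it is registered
kubectl create -f csi-sc.yaml
kubectl get storageclass glusterfscsi
```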

Once the storage class is created, let us create a claim; this claim points to the storage class called `glusterfscsi`.

[terminal]
[root@localhost cluster]# cat glusterfs-pvc-claim12_fast.yaml
{
  "kind": "PersistentVolumeClaim",
  "apiVersion": "v1",
  "metadata": {
    "name": "claim12",
    "annotations": {
      "volume.beta.kubernetes.io/storage-class": "glusterfscsi"
    }
  },
  "spec": {
    "accessModes": [
      "ReadWriteMany"
    ],
    "resources": {
      "requests": {
        "storage": "4Gi"
      }
    }
  }
}
[/terminal]
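The claim is submitted like any other PVC (a sketch; the `-w` watch flag lets you see the claim move from `Pending` to `Bound` as the driver provisions it):

```shell
# Submit the claim and watch the CSI driver bind it
kubectl create -f glusterfs-pvc-claim12_fast.yaml
kubectl get pvc claim12 -w
```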

As soon as you make a request to create the claim, the GlusterFS CSI driver receives the request and creates a PV object, as you can see here:

[terminal]
[root@localhost cluster]# ./kubectl.sh get pvc
NAME      STATUS   VOLUME                                                        CAPACITY   ACCESS MODES   STORAGECLASS   AGE
claim12   Bound    kubernetes-dynamic-pvc-ad8014ec-febd-11e7-bf55-c85b7636c232   4Gi        RWX            glusterfscsi   35m
[root@localhost cluster]#
[/terminal]

As an excited user/admin, let us examine the details of the PVC and PV, as shown below:

[terminal]
[root@localhost kubernetes]# kubectl describe pvc
Name: claim12
Namespace: default
StorageClass: glusterfscsi
Status: Bound
Volume: kubernetes-dynamic-pvc-79eb02cd-fd17-11e7-ac3c-c85b7636c232
Labels:
Annotations: control-plane.alpha.kubernetes.io/leader={"holderIdentity":"79e518d1-fd17-11e7-ac3c-c85b7636c232","leaseDurationSeconds":15,"acquireTime":"2018-01-19T12:51:32Z","renewTime":"2018-01-19T12:51:34Z","lea...
pv.kubernetes.io/bind-completed=yes
pv.kubernetes.io/bound-by-controller=yes
volume.beta.kubernetes.io/storage-class=glusterfscsi
volume.beta.kubernetes.io/storage-provisioner=csi-glusterfsplugin
Finalizers: []
Capacity: 4Gi
Access Modes: RWX
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ExternalProvisioning 5m (x7 over 6m) persistentvolume-controller waiting for a volume to be created, either by external provisioner "csi-glusterfsplugin" or manually created by system administrator
Normal Provisioning 5m csi-glusterfsplugin localhost.localdomain 79e518d1-fd17-11e7-ac3c-c85b7636c232 External provisioner is provisioning volume for claim "default/claim12"
Normal ProvisioningSucceeded 5m csi-glusterfsplugin localhost.localdomain 79e518d1-fd17-11e7-ac3c-c85b7636c232 Successfully provisioned volume kubernetes-dynamic-pvc-79eb02cd-fd17-11e7-ac3c-c85b7636c232

[/terminal]
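If you only need the name of the PV bound to the claim (for example, to pass to `describe pv`), a jsonpath query is handy (a sketch):

```shell
# Print the name of the PV that claim12 is bound to
kubectl get pvc claim12 -o jsonpath='{.spec.volumeName}'
```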

PV:

[terminal]
[root@localhost cluster]# ./kubectl.sh describe pv
Name: kubernetes-dynamic-pvc-ad8014ec-febd-11e7-bf55-c85b7636c232
Labels:
Annotations: csi.volume.kubernetes.io/volume-attributes={"glusterserver":"172.18.0.3","glustervol":"vol_64d3ac458bc17bec44a919336656fbfb"}
csiProvisionerIdentity=1516547610828-8081-csi-glusterfsplugin
pv.kubernetes.io/provisioned-by=csi-glusterfsplugin
StorageClass: glusterfscsi
Status: Bound
Claim: default/claim12
Reclaim Policy: Delete
Access Modes: RWX
Capacity: 4Gi
Message:
Source:
Type: CSI (a Container Storage Interface (CSI) volume source)
Driver: csi-glusterfsplugin
VolumeHandle: ad817b9d-febd-11e7-96d9-c85b7636c232
ReadOnly: false
Events:
[root@localhost cluster]#
[/terminal]

The above outputs show that the PV object was created by the GlusterFS CSI driver!

Let us create a pod with this claim and see that the mount works.

[terminal]
[root@localhost cluster]# cat ../demo/fedora-pod.json
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "gluster",
    "labels": {
      "name": "gluster"
    }
  },
  "spec": {
    "containers": [{
      "name": "gluster",
      "image": "fedora",
      "imagePullPolicy": "IfNotPresent",
      "volumeMounts": [{
        "mountPath": "/mnt/gluster",
        "name": "gluster"
      }]
    }],
    "volumes": [{
      "name": "gluster",
      "persistentVolumeClaim": {
        "claimName": "claim12"
      }
    }]
  }
}
[/terminal]

Create the pod and check `mount`:

[terminal]
[root@localhost cluster]# kubectl create -f demo/fedora-pod.json
[/terminal]
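Before looking at the host's mount table, you can also verify the volume from inside the pod (a sketch; wait until the pod is `Running` first):

```shell
# Show the filesystem backing the mount path inside the pod
kubectl exec gluster -- df -h /mnt/gluster
# Write a test file to confirm the volume is read-write
kubectl exec gluster -- touch /mnt/gluster/hello
```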

[terminal]
[root@localhost cluster]# mount |grep gluster
172.18.0.3:vol_64d3ac458bc17bec44a919336656fbfb on /var/lib/kubelet/pods/e6476013-febd-11e7-bde6-c85b7636c232/volumes/kubernetes.io~csi/kubernetes-dynamic-pvc-ad8014ec-febd-11e7-bf55-c85b7636c232/mount type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
[/terminal]

Cool, isn't it? When you delete the pod, the volume is unmounted as expected.

[terminal]
[root@localhost cluster]# ./kubectl.sh delete pod gluster
pod "gluster" deleted
[root@localhost cluster]# mount |grep glusterfs
[root@localhost cluster]#
[/terminal]

PS: I will write about the deployment and other details in the next blog. Happy to receive your feedback!
