[Coming Soon] Dynamic Provisioning of GlusterFS volumes in Kubernetes/OpenShift!

In this post I am talking about the dynamic provisioning capability of the ‘glusterfs’ plugin in Kubernetes/OpenShift. I have submitted a Pull Request to Kubernetes to add this functionality for GlusterFS. At present there are no network storage provisioners in Kubernetes, even though the cloud providers have them. The idea here is to make the glusterfs plugin capable of provisioning volumes on demand from Kubernetes/OpenShift. Cool, isn’t it? Indeed, this is a nice feature to have. With this in place, an OSE user requests some space, for example 20G, and the glusterfs plugin takes this request, creates a 20G volume, and binds it to the claim. The plugin can use any REST service, but the example patch is based on ‘heketi’. Here is the workflow: Start your kubernetes controller manager with the highlighted options:

Create a file called gluster.json in the /tmp directory. The important fields in this config file are ‘endpoint’ and ‘resturl’. The endpoint has to be defined and must match your setup. The resturl should point to the REST service which can take the request and create a gluster volume in the backend. As mentioned earlier, I am using heketi for the same.
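A minimal gluster.json could look like the following. The endpoint name and the heketi URL here are placeholders; adjust both to your own setup:

```json
{
  "endpoint": "glusterfs-cluster",
  "resturl": "http://127.0.0.1:8081"
}
```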

We have to define an ENDPOINT and a SERVICE. Below are example configuration files.

ENDPOINT :
“ip” has to be filled with your gluster trusted pool IP.
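An Endpoints object along these lines should work; the name ‘glusterfs-cluster’ and the IP are placeholders for your environment:

```yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
  - addresses:
      - ip: 192.168.1.100   # replace with an IP of your gluster trusted pool
    ports:
      - port: 1             # the port value is not used by glusterfs; any legal port number works
```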

SERVICE:
Please note that the Service name must match the ENDPOINT name.
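A matching Service sketch; the name ‘glusterfs-cluster’ is a placeholder, but whatever you choose must equal the Endpoints name:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster   # must match the Endpoints object's name
spec:
  ports:
    - port: 1
```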

Finally we have a Persistent Volume Claim file as shown below:
NOTE: The size of the volume is mentioned as ’20G’:
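A claim along these lines is what I have in mind; the claim name and storage-class value are placeholders, and at the time of writing dynamic provisioning was requested through the alpha storage-class annotation:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterc
  annotations:
    volume.alpha.kubernetes.io/storage-class: glusterfs
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20G
```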

Let’s start defining the endpoint, service and PVC.
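Assuming the endpoint and service definitions above were saved to files with these (hypothetical) names, creating them is straightforward:

```shell
kubectl create -f gluster-endpoints.yaml
kubectl create -f gluster-service.yaml
```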

Now, let’s request a claim!
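Assuming the claim was saved to a file named gluster-pvc.yaml (a hypothetical name), we create it and check the result:

```shell
kubectl create -f gluster-pvc.yaml
kubectl get pv,pvc
```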

Awesome! Based on the request, it created a PV and bound it to the PV claim!

Verify the volume exists in the backend:
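With heketi as the REST service, something like the following should show the newly created volume (heketi-cli is heketi's command-line client; alternatively, gluster volume list can be run directly on a node of the trusted pool):

```shell
heketi-cli volume list
# or, on a trusted-pool node:
gluster volume list
```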

Let’s delete the PV claim —
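Deleting the claim is a single command; the claim name here is the placeholder used earlier and should match whatever name you gave your PVC:

```shell
kubectl delete pvc glusterc
```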

It got deleted!

Verify it from backend:

We can use the volume in app pods by referring to the claim name.
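A pod spec along these lines would consume the provisioned volume through the claim; the pod name, image, mount path, and claim name are all placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gluster-pod
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: glustervol
          mountPath: /mnt/gluster   # where the gluster volume appears inside the container
  volumes:
    - name: glustervol
      persistentVolumeClaim:
        claimName: glusterc          # must match the PVC's name
```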
I hope you find this feature useful!

Please let me know if you have any comments/suggestions.

Also, the patch – github.com/kubernetes/kubernetes/pull/30888 – is undergoing review upstream as mentioned earlier, and hopefully it will make it into a Kubernetes release soon. I will provide an update here as soon as it is available upstream.