“Retain” (PV Reclaim Policy) for GlusterFS Dynamically Provisioned Persistent Volumes in Kubernetes >= 1.8

Since its introduction, the dynamic provisioning feature of Kubernetes storage has defaulted to a reclaim policy of `Delete`. This is not specific to GlusterFS PVs; it applies to all dynamically provisioned PVs in Kubernetes.

However, from Kubernetes v1.8 onward, we can specify the `Retain` policy in a StorageClass, a much-needed capability for various use cases and setups!

This article is a how-to from the GlusterFS PV perspective.

Let's first create a StorageClass with `reclaimPolicy` set to `Retain`.

[terminal]

[root@localhost demo]# cat glusterfs-storageclass_slow.yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://127.0.0.1:8081"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
  volumeoptions: "features.shard enable"
reclaimPolicy: Retain

[/terminal]

Once we have the above StorageClass YAML, let's create the other prerequisites for provisioning GlusterFS dynamic PVs. These are simply a secret for the REST service (heketi) and a PersistentVolumeClaim called `claim11`, as we always do.
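For reference, these two manifests might look as follows. This is a minimal sketch: the secret value below is a placeholder (encode your own heketi admin key), while the requested size and access mode match the output shown later in this article.

[terminal]

# glusterfs-secret.yaml (sketch; the key below is a placeholder)
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: default
type: kubernetes.io/glusterfs
data:
  # base64-encoded heketi admin key, e.g. `echo -n "mypassword" | base64`
  key: bXlwYXNzd29yZA==

# glusterfs-pvc-claim11_slow.yaml (sketch)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim11
spec:
  storageClassName: slow
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi

[/terminal]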

[terminal]

[root@localhost demo]# kubectl create -f glusterfs-secret.yaml ; kubectl create -f glusterfs-storageclass_slow.yaml ; kubectl create -f glusterfs-pvc-claim11_slow.yaml
secret "heketi-secret" created
storageclass "slow" created
persistentvolumeclaim "claim11" created

[/terminal]

Check the PV and PVC status:

[terminal]

[root@localhost demo]# kubectl get pvc; kubectl get pv
NAME      STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
claim11   Pending                                       slow           4s
No resources found.
[root@localhost demo]# kubectl get pvc; kubectl get pv
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
claim11   Bound    pvc-65f16814-ce8c-11e7-8823-fcde56ff0106   3Gi        RWO            slow           6s
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
pvc-65f16814-ce8c-11e7-8823-fcde56ff0106   3Gi        RWO            Retain           Bound    default/claim11   slow                    2s
[root@localhost demo]#
[/terminal]

Awesome! The GlusterFS PV was created with reclaim policy “Retain”, as you can see in the above output!
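As an aside, if a PV was already provisioned with the default `Delete` policy, the reclaim policy of an existing PV can also be changed in place with `kubectl patch`. A sketch, substituting your own PV name:

[terminal]

# Sketch: change the reclaim policy of an already-provisioned PV
[root@localhost demo]# kubectl patch pv pvc-65f16814-ce8c-11e7-8823-fcde56ff0106 \
    -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

[/terminal]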

Let's double-confirm this from the details of the PV.

[terminal]

[root@localhost demo]# kubectl get pv -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: PersistentVolume
  metadata:
    annotations:
      Description: 'Gluster-Internal: Dynamically provisioned PV'
      gluster.org/type: file
      kubernetes.io/createdby: heketi-dynamic-provisioner
      pv.beta.kubernetes.io/gid: "2000"
      pv.kubernetes.io/bound-by-controller: "yes"
      pv.kubernetes.io/provisioned-by: kubernetes.io/glusterfs
      volume.beta.kubernetes.io/mount-options: auto_unmount
    creationTimestamp: 2017-11-21T07:20:09Z
    name: pvc-65f16814-ce8c-11e7-8823-fcde56ff0106
    namespace: ""
    resourceVersion: "299"
    selfLink: /api/v1/persistentvolumes/pvc-65f16814-ce8c-11e7-8823-fcde56ff0106
    uid: 685e6a6f-ce8c-11e7-8823-fcde56ff0106
  spec:
    accessModes:
    - ReadOnlyMany
    capacity:
      storage: 3Gi
    claimRef:
      apiVersion: v1
      kind: PersistentVolumeClaim
      name: claim11
      namespace: default
      resourceVersion: "291"
      uid: 65f16814-ce8c-11e7-8823-fcde56ff0106
    glusterfs:
      endpoints: glusterfs-dynamic-claim11
      path: vol_ce1937557d43fb481e39650137777312
    persistentVolumeReclaimPolicy: Retain
    storageClassName: slow
  status:
    phase: Bound
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

[/terminal]

Now delete the PVC (claim) and confirm that the PV still exists even after the PVC is deleted!

[terminal]
[root@localhost demo]# kubectl delete pvc claim11
persistentvolumeclaim “claim11” deleted
[root@localhost demo]# kubectl get pvc
No resources found.

[/terminal]
Let’s check whether we still have the PV or not.

[terminal]
[root@localhost demo]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM             STORAGECLASS   REASON   AGE
pvc-65f16814-ce8c-11e7-8823-fcde56ff0106   3Gi        RWO            Retain           Released   default/claim11   slow                    1m
[/terminal]

Yes, we have it!
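Note that the PV is now in the `Released` state. With the `Retain` policy, Kubernetes will not rebind it to a new claim automatically: an administrator must either delete the PV manually (and clean up the backing Gluster volume) or clear its `claimRef` to make it `Available` again, keeping the data on the underlying volume. A sketch of the latter, substituting your own PV name:

[terminal]

# Sketch: make a Released PV Available again by clearing its claimRef
[root@localhost demo]# kubectl patch pv pvc-65f16814-ce8c-11e7-8823-fcde56ff0106 \
    --type json -p '[{"op":"remove","path":"/spec/claimRef"}]'

[/terminal]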

Copyright secured by Digiprove © 2017-2020 Humble Chirammal