"dep: WARNING: Unknown field in manifest: prune"

Did you get the above error when you ran "dep ensure" or a similar command with the Go dependency tool "dep" (https://github.com/golang/dep/)? If yes, check the version of "dep" on your system. [root@localhost ]# dep version dep: version : v0.3.1 build date : 2017-09-19 git hash : 83789e2 go version : …

Read more

[GlusterFS dynamic Provisioner] Custom volume name support for dynamically provisioned GlusterFS PVs in Kubernetes/Openshift

Why a custom volume name for dynamically provisioned PVs? I asked the same question to the community users who requested this feature, and the answer was that it helps a lot to filter the Gluster volume names which are serving as persistent volumes. True, if there are a large number of volumes, say ~1000, in your cluster, figuring …

Read more

“Retain” ( PV claim policy) GlusterFS Dynamically Provisioned Persistent Volumes in Kubernetes >=1.8

Since its introduction, the dynamic provisioning feature of Kube storage has defaulted to the reclaim policy `Delete`. This was not specific to GlusterFS PVs, but common to all dynamically provisioned PVs in Kube storage. However, from kube >= v1.8, we can specify the `retain` policy in the storage class! A much-needed functionality in SC for different use …

Read more

[GlusterFS Dynamic Provisioner] Set GlusterFS Volume Options via StorageClass parameter

A much-awaited RFE/enhancement to the GlusterFS provisioner in Kubernetes!! For a long time, Kubernetes/OpenShift users have been looking for a feature to enable various GlusterFS volume options via the storage class. We discussed a few methods to implement this in Heketi. One thought was to wrap options under 'classified' strings, e.g. when someone asked for …

Read more

[2020 updated] ISCSI multipath support in Kubernetes (v1.6) or Openshift.

Multipathing is an important feature in most storage setups, and so it is for iSCSI-based storage systems.
In most iSCSI-based storage, multiple paths can be defined by sharing the same IQN through more than one portal IP.
Even if one of the network interfaces/portals is down, the share/target can be accessed via the other active interfaces/portals.
This is indeed a good feature considering I/O from the iSCSI initiator to the target and high availability of the data path.
However, the iSCSI plugin in Kubernetes was not capable of making use of the multipath feature; it
was always just one path configured by default in Kubernetes. If that path goes down, the target cannot be accessed.
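The failover behaviour described above can be sketched as a toy illustration. This is only a model of the idea; real multipathing is handled by the kernel device-mapper, not user code, and the portal addresses here are made up:

```python
# Toy sketch of multipath failover: try each portal in order and use
# the first one that responds. Real multipathing is done in the kernel;
# this only illustrates why multiple portals give high availability.

def pick_active_portal(portals, is_reachable):
    """Return the first reachable portal, or None if all paths are down."""
    for portal in portals:
        if is_reachable(portal):
            return portal
    return None

portals = ["10.0.2.15:3260", "10.0.2.16:3260", "10.0.2.17:3260"]

# Pretend the first portal is down:
down = {"10.0.2.15:3260"}
active = pick_active_portal(portals, lambda p: p not in down)
print(active)  # -> 10.0.2.16:3260
```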

Recently I added multipath support to ISCSI kubernetes plugin with this Pull Request.

With this functionality, a Kubernetes user/admin can specify the additional target portal IPs in a new field called `portals` in the iSCSI volume definition. That's the only change required from the admin side! If there are multiple portals, the admin can mention the additional target portals in the `portals` field as shown below.

The new structure will look like this.

[terminal]
iscsi:
  targetPortal: 10.0.2.15:3260
  portals: ['10.0.2.16:3260', '10.0.2.17:3260']
  iqn: iqn.2001-04.com.example:storage.kube.sys1.xyz
  lun: 0
  fsType: ext4
  readOnly: true
[/terminal]

If you are directly using the above volume definition in a pod spec, it may look like this.

[terminal]
apiVersion: v1
kind: Pod
metadata:
  name: iscsipd
spec:
  containers:
  - name: iscsipd-rw
    image: kubernetes/pause
    volumeMounts:
    - mountPath: "/mnt/iscsipd"
      name: iscsipd-rw
  volumes:
  - name: iscsipd-rw
    iscsi:
      targetPortal: 10.0.2.15:3260
      portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260']
      iqn: iqn.2016-04.test.com:storage.target00
      lun: 0
      fsType: ext4
      readOnly: true
[/terminal]

Once the pod is up and running, you can verify the following outputs on the Kubernetes host where the pod is running:

[terminal]
# multipath -ll
mpatha (360014059bafbe58ba644b2889c34903f) dm-2 LIO-ORG ,disk01
size=20G features='0' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=1 status=active
| `- 44:0:0:0 sdb 8:16 active ready running
|-+- policy='service-time 0' prio=1 status=enabled
| `- 42:0:0:0 sdc 8:32 active ready running
|-+- policy='service-time 0' prio=1 status=enabled
| `- 46:0:0:0 sdd 8:48 active ready running
`-+- policy='service-time 0' prio=1 status=enabled
  `- 48:0:0:0 sde 8:64 active ready running
[/terminal]

The iSCSI sessions look like this:

[terminal]
# iscsiadm -m session
tcp: [10] 10.0.2.15:3260,1 iqn.2016-04.test.com:storage.target00 (non-flash)
tcp: [12] 10.0.2.16:3260,1 iqn.2016-04.test.com:storage.target00 (non-flash)
tcp: [14] 10.0.2.17:3260,1 iqn.2016-04.test.com:storage.target00 (non-flash)
tcp: [16] 10.0.2.18:3260,1 iqn.2016-04.test.com:storage.target00 (non-flash)
[/terminal]
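If you want to check this programmatically, a small helper could parse the `iscsiadm -m session` output and group sessions per target IQN. This is purely an illustrative sketch, not part of the Kubernetes plugin:

```python
# Hypothetical helper: parse `iscsiadm -m session` output and collect
# the portals per target IQN, e.g. to confirm one session per portal.

import re

def sessions_per_target(iscsiadm_output):
    """Map each target IQN to the list of portals with an open session."""
    sessions = {}
    for line in iscsiadm_output.splitlines():
        # Lines look like: "tcp: [10] 10.0.2.15:3260,1 iqn.... (non-flash)"
        m = re.match(r"tcp: \[\d+\] (\S+?),\d+ (\S+)", line.strip())
        if m:
            portal, iqn = m.groups()
            sessions.setdefault(iqn, []).append(portal)
    return sessions

output = """tcp: [10] 10.0.2.15:3260,1 iqn.2016-04.test.com:storage.target00 (non-flash)
tcp: [12] 10.0.2.16:3260,1 iqn.2016-04.test.com:storage.target00 (non-flash)"""
print(sessions_per_target(output))
```

With the four-portal setup above, the IQN would map to four portals, one per path.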

The device paths:

[terminal]
# ll /dev/disk/by-path/
lrwxrwxrwx. 1 root root 9 Feb 16 15:58 ip-10.0.2.15:3260-iscsi-iqn.2016-04.test.com:storage.target00-lun-0 -> ../../sdb
lrwxrwxrwx. 1 root root 9 Feb 16 15:41 ip-10.0.2.16:3260-iscsi-iqn.2016-04.test.com:storage.target00-lun-0 -> ../../sdc
lrwxrwxrwx. 1 root root 9 Feb 16 15:58 ip-10.0.2.17:3260-iscsi-iqn.2016-04.test.com:storage.target00-lun-0 -> ../../sdd
lrwxrwxrwx. 1 root root 9 Feb 16 15:41 ip-10.0.2.18:3260-iscsi-iqn.2016-04.test.com:storage.target00-lun-0 -> ../../sde
[/terminal]

I believe this is a nice feature added to Kubernetes iSCSI storage. Please let me know your comments/feedback/suggestions on this.

[Gluster Dynamic Provisioner] "glusterfs: failed to get endpoints..", Do I need to create an endpoint/service?

Let me give more details about the endpoints and service with respect to the GlusterFS plugin in Kubernetes or OpenShift. The GlusterFS plugin is designed in such a way that it needs a mandatory parameter 'endpoint' in its spec. The endpoint is the same instance as api.endpoint in kube/OpenShift, i.e. it needs IP addresses. In the Gluster plugin, we carry the IP addresses of the Gluster nodes in the endpoint. When we manually create a PV, we also need to create an endpoint and a headless service for the PV. I could call this state 'static provisioning'. This is tedious, as the admin has to fetch the nodes, create the endpoints, and keep them ready for the developer/user. We also heard the same concern from community users about the difficulty of creating endpoints in each namespace where the app pods that use the Gluster PVs run. At the same time, there were some user reports that liked the isolation it brings.

We tried to avoid this dependency on endpoints and thought about different designs to overcome it. It was a bit difficult due to reasons like backward compatibility, security concerns, etc. But when we introduced dynamic provisioning of Gluster PVs, available from Kubernetes 1.4 or OpenShift 3.4, this situation changed. It is no longer a pain for the admin to create an endpoint/service based on the volume which was just created: the dynamic provisioner creates the endpoint and the service automatically, based on the cluster where the volume is created.

The entire workflow has been made easy from the user/admin point of view. The endpoint and service are created in the PVC namespace. If a user wants to make use of this PVC in an application pod, the endpoint/service has to be in that namespace, which dynamic provisioning gives you without any extra effort. In short, the user/developer doesn't have to worry about the PV's reference to an endpoint.

I hope this helps.

Please let me know if you need more details or have any comments on this.

Support for “Volume Types” option in Kubernetes GlusterFS dynamic provisioner.

Till now, there was no option to specify the volume type and specifications of dynamically provisioned volumes in Kubernetes or OpenShift. The main reason for that was that we always recommend a 'replica 3' volume whenever one is provisioned.

However, there were requests from users of the GlusterFS dynamic provisioner for this functionality for various use cases: for example, someone wants to set up a quick demo with a replica 1 or replica 2 volume in a small cluster, or wants to create an EC (erasure-coded) volume via the storage class, etc.

This functionality was added to upstream Kubernetes some time back and provides the choice of volume type in Kubernetes/OpenShift via a storage class parameter.

It has been added to the Gluster plugin's storage class in the format below.

[terminal]
`volumetype`: The volume type and its parameters can be configured with this optional value. If the volume type is not mentioned, it's up to the provisioner to decide the volume type.

For example:

Replica volume:
`volumetype: replicate:3` where '3' is the replica count.

Disperse/EC volume:
`volumetype: disperse:4:2` where '4' is the data count and '2' is the redundancy count.

Distribute volume:
`volumetype: none`
[/terminal]
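To make the format concrete, here is a small sketch of how a `volumetype` value breaks down. This is purely illustrative; the real parsing lives inside the provisioner, and the dict keys here are made up:

```python
# Illustrative parser for the `volumetype` storage class parameter.
# "replicate:3" -> replica 3; "disperse:4:2" -> 4 data + 2 redundancy;
# "none" -> plain distribute volume.

def parse_volumetype(value):
    parts = value.split(":")
    kind = parts[0]
    if kind == "replicate":
        return {"type": "replicate", "replica": int(parts[1])}
    if kind == "disperse":
        return {"type": "disperse", "data": int(parts[1]), "redundancy": int(parts[2])}
    if kind == "none":
        return {"type": "distribute"}
    raise ValueError("unknown volume type: " + value)

print(parse_volumetype("replicate:3"))   # {'type': 'replicate', 'replica': 3}
print(parse_volumetype("disperse:4:2"))  # {'type': 'disperse', 'data': 4, 'redundancy': 2}
```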

Based on the above, the volumes will be created in the trusted storage pool.

The storage class file will look like this:

[terminal]
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://127.0.0.1:8081"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
  volumetype: "replicate:2"
[/terminal]

Please let me know if you have any comments/suggestions/feedback about this functionality or if you would like to see any other enhancements to Gluster Dynamic Provisioner in Kubernetes/Openshift.

"libgfapi-python" is now available as a Fedora rpm (python-glusterfs-api).

I wanted to see this as a distro package for a long time, but it happened only recently. Even though there was a package review request, I couldn't follow up and get it completed. There were a few other considerations which also caused the delay. Anyway, with the help of Kaleb Keithley and Prasanth Pai, it is now available in Fedora!

Many users wanted this rpm/package in distributions like Fedora, to make use of the libgfapi Python bindings and become consumers of this API client.

[root@dhcp35-111 ]# yum install python-glusterfs-api
Redirecting to '/usr/bin/dnf install python-glusterfs-api' (see 'man yum2dnf')

Last metadata expiration check: 0:12:02 ago on Mon Jan 30 00:57:52 2017.
Dependencies resolved.
==============================================================================================================================================================================================
Package Arch Version Repository Size
==============================================================================================================================================================================================
Installing:
python2-glusterfs-api noarch 1.1-2.fc24 updates 48 k

Transaction Summary
==============================================================================================================================================================================================
Install 1 Package

Total download size: 48 k
Installed size: 182 k
Is this ok [y/N]: y
Downloading Packages:
python2-glusterfs-api-1.1-2.fc24.noarch.rpm 13 kB/s | 48 kB 00:03
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total 8.8 kB/s | 48 kB 00:05
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Installing : python2-glusterfs-api-1.1-2.fc24.noarch 1/1
Verifying : python2-glusterfs-api-1.1-2.fc24.noarch 1/1

Installed:
python2-glusterfs-api.noarch 1.1-2.fc24

Complete!
[root@dhcp35-111 ]# rpm -ql python-glusterfs-api
package python-glusterfs-api is not installed
[root@dhcp35-111 ]# rpm -ql python2-glusterfs-api
/usr/lib/python2.7/site-packages/gfapi-1.1-py2.7.egg-info
/usr/lib/python2.7/site-packages/gfapi-1.1-py2.7.egg-info/PKG-INFO
/usr/lib/python2.7/site-packages/gfapi-1.1-py2.7.egg-info/SOURCES.txt
/usr/lib/python2.7/site-packages/gfapi-1.1-py2.7.egg-info/dependency_links.txt
/usr/lib/python2.7/site-packages/gfapi-1.1-py2.7.egg-info/top_level.txt
/usr/lib/python2.7/site-packages/gluster
/usr/lib/python2.7/site-packages/gluster/gfapi
/usr/lib/python2.7/site-packages/gluster/gfapi/__init__.py
/usr/lib/python2.7/site-packages/gluster/gfapi/__init__.pyc
/usr/lib/python2.7/site-packages/gluster/gfapi/__init__.pyo
/usr/lib/python2.7/site-packages/gluster/gfapi/api.py
/usr/lib/python2.7/site-packages/gluster/gfapi/api.pyc
/usr/lib/python2.7/site-packages/gluster/gfapi/api.pyo
/usr/lib/python2.7/site-packages/gluster/gfapi/exceptions.py
/usr/lib/python2.7/site-packages/gluster/gfapi/exceptions.pyc
/usr/lib/python2.7/site-packages/gluster/gfapi/exceptions.pyo
/usr/lib/python2.7/site-packages/gluster/gfapi/gfapi.py
/usr/lib/python2.7/site-packages/gluster/gfapi/gfapi.pyc
/usr/lib/python2.7/site-packages/gluster/gfapi/gfapi.pyo
/usr/lib/python2.7/site-packages/gluster/gfapi/utils.py
/usr/lib/python2.7/site-packages/gluster/gfapi/utils.pyc
/usr/lib/python2.7/site-packages/gluster/gfapi/utils.pyo
/usr/share/doc/python2-glusterfs-api
/usr/share/doc/python2-glusterfs-api/README.rst
/usr/share/licenses/python2-glusterfs-api
/usr/share/licenses/python2-glusterfs-api/COPYING-GPLV2
/usr/share/licenses/python2-glusterfs-api/COPYING-LGPLV3

[root@dhcp35-111 ]# rpm -qi python2-glusterfs-api
Name : python2-glusterfs-api
Version : 1.1
Release : 2.fc24
Architecture: noarch
Install Date: Mon 30 Jan 2017 01:10:04 AM IST
Group : System Environment/Libraries
Size : 186261
License : GPLv2 or LGPLv3+
Signature : RSA/SHA256, Thu 19 Jan 2017 08:10:04 PM IST, Key ID 73bde98381b46521
Source RPM : python-glusterfs-api-1.1-2.fc24.src.rpm
Build Date : Thu 19 Jan 2017 08:03:32 PM IST
Build Host : buildvm-07.phx2.fedoraproject.org
Relocations : (not relocatable)
Packager : Fedora Project
Vendor : Fedora Project
URL : https://github.com/gluster/libgfapi-python
Summary : Python2 bindings for GlusterFS libgfapi
Description :
libgfapi is a library that allows applications to natively access
GlusterFS volumes. This package contains python bindings to libgfapi.

See http://libgfapi-python.rtfd.io/ for more details.

Please give it a try and let us know your feedback!

New features (GID support, cluster support) added to the GlusterFS dynamic provisioner in Kubernetes.

There are always asks for RFEs, and we are bringing them into the GlusterFS dynamic provisioner one by one!

I would like to introduce two new features, now available in the upstream Kubernetes tree, which enhance the functionality of the GlusterFS provisioner.

1) GID (Group ID) support for dynamically provisioned volumes.
2) Support for specifying the "cluster" from which an admin wants to provision the volume.

Let's look into these options in detail.

The GID support: Till now, the Gluster dynamic provisioner created volumes with user ID and group ID as root (UID 0 : GID 0), so access to the volume was restricted to the root user. With the addition of GID support, we now have a GID allocator internal to the provisioner. This allows a storage admin of an OpenShift/Kubernetes cluster to specify a pool of GIDs for a particular storage class. For that, the provisioner introduces two new optional parameters in the storage class, gidMin and gidMax.

[terminal]
# cat glusterfs-storageclass.yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: gluster-fast
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://127.0.0.1:8081"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
  gidMin: "2000"
  gidMax: "4000"
[/terminal]

With the above storage class configuration, when you provision PVs via PV claim requests, each PV is created and given access to a GID value, e.g. 2001, which lies between the mentioned gidMin and gidMax values. If gidMin and gidMax are not provided, dynamically provisioned volumes get a GID between 2000 and 2147483647.

If you attach this PV claim to a pod, the pod gets the new GID value in its supplemental groups and thus gets access to the volume. How this GID is passed to the supplemental group is taken care of internally by the provisioner. You can validate that the GID is reflected in the pod's supplemental group IDs using the `id` command inside the container.

For example, inside the container where this PVC is attached:

[terminal]
# id
uid=1000060000 gid=0(root) groups=0(root),2001
[/terminal]

Now you should be able to write to the volume with the access of this group. Once the claim is deleted, the previously allocated GID goes back to the GID pool.
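The pool behaviour can be sketched roughly like this: a minimal in-memory allocator that hands out unused GIDs from [gidMin, gidMax] and returns them on claim deletion. The provisioner's real allocator is more involved; the class and names here are illustrative only:

```python
# Minimal sketch of a GID pool allocator: hand out unused GIDs from
# [gid_min, gid_max] and return them to the pool when a claim is deleted.

class GidPool:
    def __init__(self, gid_min=2000, gid_max=2147483647):
        self.gid_min = gid_min
        self.gid_max = gid_max
        self.in_use = set()

    def allocate(self):
        """Return the lowest free GID in the range."""
        for gid in range(self.gid_min, self.gid_max + 1):
            if gid not in self.in_use:
                self.in_use.add(gid)
                return gid
        raise RuntimeError("GID pool exhausted")

    def release(self, gid):
        """Return a GID to the pool, e.g. when its claim is deleted."""
        self.in_use.discard(gid)

pool = GidPool(gid_min=2000, gid_max=4000)
a = pool.allocate()     # 2000
b = pool.allocate()     # 2001
pool.release(a)         # 2000 goes back to the pool
print(pool.allocate())  # -> 2000 again
```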

Nice feature, isn't it?
Let's look into the second feature (cluster support):

Until now, when you dynamically provisioned a volume, Heketi (the server that creates the volumes) picked any of the clusters according to certain criteria. However, a few users requested a way to specify a cluster or group of clusters for a storage class, so we added that support. In my view, this can help in scenarios where, for example, a storage admin wants to dedicate one cluster to Dev and another cluster to Test/QE departments, perhaps based on the hardware allocated for those departments. An admin can also group nodes with faster disks (SSDs) into one cluster behind a storage class called 'fast', and slower hard disks into another cluster behind a storage class 'slow'. The PVs will be created based on this classification if you use the 'clusterid' parameter in the storage class. With that, it is now possible in the dynamic provisioner to select the cluster from which you want to provision volumes. Cool feature, isn't it?

[terminal]
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://127.0.0.1:8081"
  clusterid: "630372ccdc720a92c681fb928f27b53f"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
  gidMin: "40000"
  gidMax: "50000"
[/terminal]

* `clusterid`: `630372ccdc720a92c681fb928f27b53f` is the ID of the cluster which will be used by Heketi when provisioning the volume. It can also be a list of cluster IDs, for example "8452344e2becec931ece4e33c4674e4e,42982310de6c63381718ccfa6d8cf397". This is an optional parameter.

Please feel free to share your thoughts on these new features, and let me know if you would like to see any further enhancements to the Gluster dynamic provisioner! I am watching this space for your comments.

[How to] GlusterFS Dynamic Volume Provisioner in Kubernetes (>= v1.4 ) / Openshift.

You may have seen or tried the method of using GlusterFS volumes in a Kubernetes/OpenShift cluster discussed in my previous blog post; however, that method involves more steps and is called `static provisioning`. In this article, I will discuss a new method called `dynamic volume provisioning`.

I am happy to share that the Gluster dynamic volume provisioner has been available in the Kubernetes tree since the 1.4 release!!

This is a feature which enables the user/developer in Kubernetes/OpenShift to have a Persistent Volume Claim (PVC) request satisfied dynamically, without admin intervention. IMHO it gives a nice user experience, and it's a good feature to have in container orchestrators. Until this feature came in, the persistent volumes in the store were created statically.

The static provisioning workflow looks like this:

Even though it was easy to perform static provisioning of volumes, it has a few limitations in my view.

*) The admin has to create the persistent volumes upfront and keep them in the persistent store.

*) When a claim request comes to the controller, it checks the size of the request against the available PVs in the pool, and if the available PV size >= the requested size, it binds the claim.

The latter can lead to wastage of storage in most cases.
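To see why, here is a toy sketch of the binding rule above, assuming a best-case "smallest PV that fits" policy. The function name and sizes are made up for illustration:

```python
# Sketch of static binding: a claim is bound to the smallest pre-created
# PV whose size >= the request. Even in the best case, a small claim can
# consume a much larger PV, wasting the difference.

def bind_claim(request_gi, available_pvs_gi):
    """Return the smallest PV (in Gi) that satisfies the request, or None."""
    candidates = sorted(pv for pv in available_pvs_gi if pv >= request_gi)
    return candidates[0] if candidates else None

pvs = [100, 50, 20]
print(bind_claim(5, pvs))    # -> 20 (15Gi effectively wasted)
print(bind_claim(60, pvs))   # -> 100
print(bind_claim(200, pvs))  # -> None (no PV large enough)
```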

These limitations are lifted with dynamic provisioning. Now the admin defines storage classes, and the user/developer requests persistent volumes by referencing a storage class in the PVC request. The storage class passes the plugin's parameters as key-value pairs.

I have created the following diagram to simplify the workflow of dynamic (Ex: GlusterFS) provisioning.

As you can see in the above diagram, the GlusterFS plugin in Kubernetes/OpenShift makes use of "Heketi" to provision GlusterFS volumes. If you want to know more about Heketi and how it can be used to manage Gluster clusters, please refer to its wiki. In short, Heketi is a volume manager for GlusterFS clusters: it manages the Gluster trusted pool and creates volumes on demand.

Let us start with the storage class, which is what allows us to do dynamic provisioning in Kubernetes.

Here is an example of the storage class parameters:

[terminal]
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://127.0.0.1:8081"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
[/terminal]

Where

resturl : The Gluster REST service/Heketi service URL which provisions Gluster volumes on demand. The general format should be IPaddress:Port, and this is a mandatory parameter for the GlusterFS dynamic provisioner. If the Heketi service is exposed as a routable service in an OpenShift/Kubernetes setup, it can have a format like http://heketi-storage-project.cloudapps.mystorage.com, where the FQDN is a resolvable Heketi service URL.

restuser : The Gluster REST service/Heketi user who has access to create volumes in the Gluster trusted pool.

secretNamespace + secretName : Identification of the Secret instance that contains the user password to use when talking to the Gluster REST service. These parameters are optional; an empty password is used when both secretNamespace and secretName are omitted.
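As an illustration only, a quick sanity check of the two `resturl` forms mentioned above could look like the sketch below. This is not the provisioner's actual validation; the function name and regexes are assumptions:

```python
# Illustrative check of the two resturl forms: "IPaddress:Port" (with or
# without an http:// scheme) or a resolvable hostname/FQDN service URL.

import re

def looks_like_resturl(url):
    u = url[len("http://"):] if url.startswith("http://") else url
    # host:port form, e.g. 127.0.0.1:8081
    if re.fullmatch(r"[\w.\-]+:\d+", u):
        return True
    # plain resolvable FQDN, e.g. heketi-storage-project.cloudapps.mystorage.com
    return re.fullmatch(r"[\w\-]+(\.[\w\-]+)+", u) is not None

print(looks_like_resturl("http://127.0.0.1:8081"))                                  # True
print(looks_like_resturl("http://heketi-storage-project.cloudapps.mystorage.com"))  # True
print(looks_like_resturl("not a url"))                                              # False
```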

For more details on these parameters, please refer to https://github.com/kubernetes/kubernetes/tree/master/examples/experimental/persistent-volume-provisioning

To summarize, the user/developer requests persistent storage using a claim and mentions the storage class to be used/mapped with the claim. As soon as the claim request comes in, the GlusterFS plugin in Kubernetes creates a volume with the requested size and binds the persistent volume to the claim. When there is a request to delete the claim, the corresponding volume is deleted from the backend Gluster trusted pool. The GlusterFS plugin in Kubernetes makes use of Heketi to provision the volume dynamically.

Here is the demo video of Dynamic GlusterFS provisioner in Kubernetes.

As always, comments/suggestions/questions are welcome!