Kubernetes 101 for Bangalore Kubernauts.

We regularly conduct Kubernetes and OpenShift meetups as part of www.meetup.com/kubernetes-openshift-India-Meetup/.

We see great momentum in this meetup group and a lot of enthusiasm around it. The last events were well received, and many requests came in for hands-on training on Kubernetes and OpenShift. If you are yet to catch up with these emerging technologies, don't delay; just join us at the next event, planned for 20-May-2017.

More details about this event can be found at www.meetup.com/kubernetes-openshift-India-Meetup/events/239381714/

This will be a beginner-oriented workshop. As a prerequisite, you just need to install Linux on your laptop; that's it.

We are also looking for volunteers to help and a venue to conduct this event.

Please RSVP and let us know if you would like to help us organize this event.

iSCSI multipath support in Kubernetes (v1.6) or OpenShift.

Multipathing is an important feature in most storage setups, and that holds for iSCSI-based storage systems as well.
In most iSCSI-based storage systems, multiple paths can be defined by sharing the same IQN through more than one portal IP.
Even if one of the network interfaces/portals goes down, the share/target can still be accessed via the other active interfaces/portals.
This is a valuable feature for keeping I/O flowing from the iSCSI initiator to the target.
However, the iSCSI plugin in Kubernetes was not capable of making use of the multipath feature: only a single path was configured by default, and if that path went down, the target could not be accessed.

Recently I added multipath support to the iSCSI Kubernetes plugin with this Pull Request.

With this functionality, a Kubernetes user/admin can specify the other target portal IPs in a new field called portals in the iSCSI volume definition. That is the only change required from the admin side. If there are multiple portals, the admin can list these additional target portals in the portals field as shown below.

The new structure will look like this.
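Here is a minimal sketch of such a volume definition; the portal addresses, IQN, and LUN values below are placeholders for illustration, not from a real setup:

iscsi:
  targetPortal: 10.0.2.15:3260                      # primary target portal
  portals: ['10.0.2.16:3260', '10.0.2.17:3260']     # additional portals used for multipath
  iqn: iqn.2001-04.com.example:storage.kube.sys1.xyz
  lun: 0
  fsType: ext4
  readOnly: true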

If you are directly using the above volume definition in a pod spec, your pod spec may look like this.
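The following is a rough sketch of such a pod spec; the pod name, image, and mount path are only examples:

apiVersion: v1
kind: Pod
metadata:
  name: iscsipd
spec:
  containers:
  - name: iscsipd-rw
    image: kubernetes/pause
    volumeMounts:
    - name: iscsivol
      mountPath: /mnt/iscsipd
  volumes:
  - name: iscsivol
    iscsi:
      targetPortal: 10.0.2.15:3260                  # primary target portal
      portals: ['10.0.2.16:3260', '10.0.2.17:3260'] # additional portals for multipath
      iqn: iqn.2001-04.com.example:storage.kube.sys1.xyz
      lun: 0
      fsType: ext4
      readOnly: true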

Once the pod is up and running, you can check and verify the following on the Kubernetes host where the pod is running:

The iSCSI session listing (for example, from iscsiadm -m session) should show a login to each of the configured portals, and the device paths (for example, under /dev/disk/by-path) should show a device for each portal.

I believe this is a nice feature addition to Kubernetes iSCSI storage. Please let me know your comments/feedback/suggestions on this.

[Gluster Dynamic Provisioner] “glusterfs: failed to get endpoints..”: Do I need to create an endpoint/service?

Let me give more details about the endpoints and service with respect to the GlusterFS plugin in Kubernetes or OpenShift. The GlusterFS plugin is designed in such a way that it needs a mandatory parameter, ‘endpoint’, in its spec. This endpoint is the same kind of object as api.Endpoints in Kube/OpenShift, i.e. it needs IP addresses; in the Gluster plugin, we carry the IP addresses of the Gluster nodes in the endpoint. When we manually create a PV, we also need to create an endpoint and a headless service for the PV. I would call this state ‘static provisioning’. This is tedious, as the admin has to fetch the node addresses, create the endpoints, and keep them available for the developer/user. We also heard the same concern from community users about the difficulty of creating endpoints in each namespace where the application pods that use the Gluster PVs run. At the same time, there were some user reports saying they liked the isolation this brings.
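For reference, a manually created endpoints object and its accompanying service for static provisioning look roughly like the following sketch (the object name and the Gluster node IPs are placeholders):

apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
- addresses:
  - ip: 192.168.122.221          # IP of a Gluster node
  ports:
  - port: 1                      # dummy port; only the addresses matter
- addresses:
  - ip: 192.168.122.222          # IP of another Gluster node
  ports:
  - port: 1
---
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster        # must match the endpoints name
spec:
  ports:
  - port: 1                      # keeps the endpoints persistent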

We tried to avoid this dependency on endpoints and thought about different designs to overcome it. It was a bit difficult due to reasons like backward compatibility, security concerns, etc. But when we introduced dynamic provisioning of Gluster PVs, available from Kubernetes 1.4 or OpenShift 3.4, the situation changed. It is no longer a pain for the admin to create an endpoint/service for the volume that has just been created: the dynamic provisioner creates the endpoint and service automatically based on the cluster where the volume is created.
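Dynamic provisioning is driven by a storage class. As a sketch, and assuming a heketi REST endpoint manages the Gluster cluster (the URL and credentials below are placeholders), a GlusterFS storage class can look like this:

apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: glusterfs-dynamic
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi.example.com:8081"   # heketi REST endpoint
  restauthenabled: "true"
  restuser: "admin"
  restuserkey: "adminkey"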

The entire workflow has been made easy from the user/admin point of view. The endpoint and service are created in the PVC namespace. If a user wants to make use of this PVC in an application pod, the endpoint/service has to exist in that namespace, and with dynamic provisioning it is available there without any extra effort. In short, the user/developer does not have to worry about the PV, which carries the reference to the endpoint.
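In other words, a user only has to create a PVC against that storage class in their own namespace, for example (names and size here are illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfs-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: glusterfs-dynamic
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi

The PV, the endpoints, and the service are then created automatically, with the endpoints and service landing in the namespace of this PVC.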

I hope this helps.

Please let me know if you need more details or have any comments on this.

Support for the “Volume Types” option in the Kubernetes GlusterFS dynamic provisioner.

Until now, there was no option to specify the volume type and related specifications of dynamically provisioned volumes in the GlusterFS provisioner in Kubernetes or OpenShift. This functionality was added to Kubernetes upstream some time back, and a Kubernetes/OpenShift admin can now choose the volume type and its specification through a storage class parameter.

This has been added in the format below to the Gluster plugin's storage class.
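For example, a storage class requesting a 3-way replicated volume can look like the sketch below; the resturl and credentials are placeholders, and the new part is the volumetype parameter:

apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: glusterfs-replicate
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi.example.com:8081"
  restuser: "admin"
  restuserkey: "adminkey"
  volumetype: "replicate:3"      # 3-way replicated volume

Other values such as "disperse:4:2" (dispersed volume) or "none" (plain distribute volume) follow the same pattern.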

Based on this setting, the volumes will be created accordingly in the trusted storage pool.

Please let me know if you have any comments/suggestions/feedback about this functionality, or if you would like to see any other enhancements to the Gluster dynamic provisioner in Kubernetes/OpenShift.

“libgfapi-python” is now available as a Fedora RPM (python-glusterfs-api).

I wanted to see this as a distro package for a long time, but it happened only very recently. Even though there had been a package review request, I couldn't follow up and get it completed. There were a few other considerations which also caused the delay. Anyway, with the help of Kaleb Keithley and Prashanth Pai, it is now available in Fedora!

Many users wanted this package in distributions like Fedora so they could make use of the libgfapi Python bindings and become consumers of this API client.

Please give it a try (dnf install python-glusterfs-api) and let us know your feedback!

Why is GlusterFS containerized? What are the advantages?

First of all, GlusterFS is a userspace file system, and containers are designed for userspace applications, aren't they? Once you containerize your userspace application, you get many advantages, and so it is with GlusterFS containers.

If I quote the advantages of containers (for example, Docker) from this link:

Docker brings in an API for container management, an image format and a possibility to use a remote registry for sharing containers. This scheme benefits both developers and system administrators with advantages such as:

Rapid application deployment – containers include the minimal runtime requirements of the application, reducing their size and allowing them to be deployed quickly.

Portability across machines – an application and all its dependencies can be bundled into a single container that is independent from the host version of Linux kernel, platform distribution, or deployment model. This container can be transferred to another machine that runs Docker, and executed there without compatibility issues.

Version control and component reuse – you can track successive versions of a container, inspect differences, or roll-back to previous versions. Containers reuse components from the preceding layers, which makes them noticeably lightweight.

Sharing – you can use a remote repository to share your container with others. Red Hat provides a registry for this purpose, and it is also possible to configure your own private repository.

Lightweight footprint and minimal overhead – Docker images are typically very small, which facilitates rapid delivery and reduces the time to deploy new application containers.

Simplified maintenance – Docker reduces effort and risk of problems with application dependencies.

Apart from the above, we work closely with container operating systems like “Atomic Host”, which are designed as container platforms/OSes to run your application containers, and GlusterFS should be able to run there too. It is not possible to use package managers like RPM and set up the system your own way on these stripped-down container OSes. If we want to take advantage of these OSes, we have to be containerized.

If the application is in a container, we get the many advantages that come with container orchestration software like Kubernetes/OpenShift, so let Gluster take that advantage as well. For example, if you deploy GlusterFS on a bare-metal system, there is no piece of code monitoring GlusterFS, and if something goes wrong, admin intervention is required to bring it back; but if Gluster is containerized, the orchestrator does that for you, as for any other application.

Let us look at the deployment part: if you have to deploy GlusterFS on a new Kubernetes/OpenShift node, you don't have to worry about the ‘preparation/setup’, i.e. setting up repositories, installing packages, etc. Instead, you just label the node (in case of the DaemonSet deployment model in Kube/OpenShift) and you get a new Gluster node within seconds, as sketched below.
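A minimal sketch of that DaemonSet model, assuming the gluster/gluster-centos image and a storagenode=glusterfs node label (the label, mounts, and other details here are illustrative and trimmed, not a complete production deployment):

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: glusterfs
spec:
  template:
    metadata:
      labels:
        name: glusterfs
    spec:
      nodeSelector:
        storagenode: glusterfs     # only labelled nodes run a Gluster pod
      hostNetwork: true            # Gluster peers communicate over the host network
      containers:
      - name: glusterfs
        image: gluster/gluster-centos
        securityContext:
          privileged: true         # required for mount and device access
        volumeMounts:
        - name: glusterfs-config
          mountPath: /etc/glusterfs
      volumes:
      - name: glusterfs-config
        hostPath:
          path: /etc/glusterfs

With this in place, labelling a node (kubectl label node <node-name> storagenode=glusterfs) is all it takes to get a Gluster pod running there.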

This list goes on, but I have to stop somewhere.

I would like to wrap up this article by saying that the effort of Gluster containerization has been justified by the massive number of downloads of GlusterFS containers from Docker Hub.

hub.docker.com/r/gluster/gluster-centos/