Kubernetes 101 for Bangalore Kubernauts.

We regularly conduct Kubernetes and OpenShift meetups as part of www.meetup.com/kubernetes-openshift-India-Meetup/.

We see great momentum in this meetup group and lots of enthusiasm around it. The last events were well received, and many requests came in for hands-on training on Kubernetes and OpenShift. If you are yet to catch up with these emerging technologies, don't delay, just join us at the next event planned for 20-May-2017.

More details about this event can be found at www.meetup.com/kubernetes-openshift-India-Meetup/events/239381714/

This will be a beginner-oriented workshop. As a prerequisite, you need to install Linux on your laptop, that's it.

We are also looking for volunteers to help and a venue to conduct this event.

Please RSVP and let us know if you would like to help us organize this event.

iSCSI multipath support in Kubernetes (v1.6) and OpenShift.

Multipathing is an important feature in most storage setups, and that holds for iSCSI-based storage systems as well.
In most iSCSI-based storage systems, multiple paths can be defined by exposing the same IQN through more than one portal IP.
Even if one of the network interfaces/portals goes down, the share/target can still be accessed via the other active interfaces/portals.
This is indeed a good feature to have for I/O between the iSCSI initiator and the target.
However, the iSCSI plugin in Kubernetes was not capable of making use of the multipath feature; only a single path was configured by default,
and if that path went down, the target could no longer be accessed.

Recently I added multipath support to the Kubernetes iSCSI plugin with this Pull Request.

With this functionality, a Kubernetes user/admin can specify the additional target portal IPs in a new field called portals in the iSCSI volume definition. That's the only change required from the admin side. If there are multiple portals, the admin can list these additional target portals in the portals field as shown below.

The new structure will look like this.
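
As a minimal sketch (the IQN, LUN, and portal IPs below are placeholders for illustration):

iscsi:
  targetPortal: 10.0.0.1:3260                      # primary portal
  portals: ['10.0.0.2:3260', '10.0.0.3:3260']      # additional portals used for multipath
  iqn: iqn.2016-04.test.com:storage.target00
  lun: 0
  fsType: ext4
  readOnly: false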

If you are using the above volume definition directly in a pod spec, your pod spec may look like this.
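
A hedged example along those lines; the pod and container names, the image, and the addresses are illustrative placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: iscsipd
spec:
  containers:
  - name: iscsipd-rw
    image: kubernetes/pause
    volumeMounts:
    - name: iscsivol
      mountPath: /mnt/iscsipd          # where the iSCSI LUN is mounted inside the container
  volumes:
  - name: iscsivol
    iscsi:
      targetPortal: 10.0.0.1:3260
      portals: ['10.0.0.2:3260', '10.0.0.3:3260']
      iqn: iqn.2016-04.test.com:storage.target00
      lun: 0
      fsType: ext4
      readOnly: false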

Once the pod is up and running, you can check and verify the outputs below on the Kubernetes host where the pod is running:

The iSCSI session looks like below:
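
For example, listing the sessions with the open-iscsi CLI; with the placeholder portals used above, the output would show one session per portal (exact format may vary with the open-iscsi version):

# iscsiadm -m session
tcp: [1] 10.0.0.1:3260,1 iqn.2016-04.test.com:storage.target00 (non-flash)
tcp: [2] 10.0.0.2:3260,1 iqn.2016-04.test.com:storage.target00 (non-flash)
tcp: [3] 10.0.0.3:3260,1 iqn.2016-04.test.com:storage.target00 (non-flash)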

The device paths:
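
One device path per portal shows up under /dev/disk/by-path on the host (again with the placeholder addresses and IQN):

# ls /dev/disk/by-path/ | grep iqn.2016-04.test.com
ip-10.0.0.1:3260-iscsi-iqn.2016-04.test.com:storage.target00-lun-0
ip-10.0.0.2:3260-iscsi-iqn.2016-04.test.com:storage.target00-lun-0
ip-10.0.0.3:3260-iscsi-iqn.2016-04.test.com:storage.target00-lun-0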

I believe this is a nice feature addition to Kubernetes iSCSI storage. Please let me know your comments, feedback, or suggestions on this.

Support for “Volume Types” option in Kubernetes GlusterFS dynamic provisioner.

Until now, there was no option to specify the volume type and related specifications for volumes dynamically provisioned by the GlusterFS provisioner in Kubernetes or OpenShift. This functionality was added to Kubernetes upstream some time back, and a Kubernetes/OpenShift admin can now choose the volume type and its specification through a StorageClass parameter.

This has been added to the Gluster plugin's StorageClass in the format below.
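
A hedged sketch of such a StorageClass; the class name and the heketi resturl are placeholders, and volumetype here requests a 3-way replicated volume:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-replicate-3
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi-service:8081"    # heketi REST endpoint (placeholder)
  volumetype: "replicate:3"                # e.g. replicate:3, disperse:4:2 or none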

Based on the above, volumes of the requested type will be created in the trusted storage pool.

Please let me know if you have any comments, suggestions, or feedback about this functionality, or if you would like to see any other enhancements to the Gluster dynamic provisioner in Kubernetes/OpenShift.

GlusterFS Containers with Docker, Kubernetes and OpenShift

I thought of sharing a consolidated update on the GlusterFS container efforts here. Below is a snip of the email I sent a few days back to the gluster-users and gluster-devel mailing lists. I hope it gives a good summary; if not, please let me know.


I would like to provide a status update on the developments around GlusterFS containers and their presence in projects like Docker, Kubernetes and OpenShift.

We have containerized GlusterFS with CentOS and Fedora base images, and the images are available on Docker Hub[1] to consume.

The Dockerfiles of the images can be found on GitHub[2].

You can pull the images with:


# docker pull gluster/gluster-centos
# docker pull gluster/gluster-fedora

The exact steps to follow to run the GlusterFS container are mentioned here[3].

We can deploy GlusterFS pods in a Kubernetes environment; an example blog about this setup can be found here[4].

There is a GlusterFS volume plugin available in Kubernetes and OpenShift v3 which provides persistent volumes
to the containers in the environment. How to use GlusterFS containers for a Persistent Volume and Persistent Volume Claim in OpenShift has been recorded at [5].

[1]https://hub.docker.com/r/gluster/
[2]https://github.com/gluster/docker/
[3]http://tinyurl.com/jupgene
[4]http://tinyurl.com/zsrz36y
[5]http://tinyurl.com/hne8g7o

Please let us know if you have any comments/suggestions/feedback.

Persistent Volume and Claim in OpenShift and Kubernetes using GlusterFS Volume Plugin

OpenShift is a platform as a service product from Red Hat. The software that runs the service is open-sourced under the name OpenShift Origin, and is available on GitHub.

OpenShift v3 is a layered system designed to expose underlying Docker and Kubernetes concepts as accurately as possible, with a focus on easy composition of applications by a developer. For example, install Ruby, push code, and add MySQL.

Docker is an open platform for developing, shipping, and running applications. With Docker you can separate your applications from your infrastructure and treat your infrastructure like a managed application. Docker does this by combining kernel containerization features with workflows and tooling that help you manage and deploy your applications. Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. Available on GitHub.

Kubernetes is an open-source system for automating deployment, operations, and scaling of containerized applications. It groups containers that make up an application into logical units for easy management and discovery. Kubernetes builds upon a decade and a half of experience of running production workloads at Google, combined with best-of-breed ideas and practices from the community. Available on GitHub.

GlusterFS is a scalable network filesystem. Using common off-the-shelf hardware, you can create large, distributed storage solutions for media streaming, data analysis, and other data- and bandwidth-intensive tasks. GlusterFS is free and open source software. Available on GitHub.

I hope you know a little bit about all of the above technologies; now we jump right into our topic, which is Persistent Volumes and Persistent Volume Claims in Kubernetes and OpenShift v3 using GlusterFS volumes. So what is a Persistent Volume? Why do we need it? How does it work with the GlusterFS volume plugin?

In Kubernetes, managing storage is a distinct problem from managing compute. The PersistentVolume subsystem provides an API for users and administrators that abstracts the details of how storage is provided from how it is consumed. To do this, Kubernetes introduces two API resources: PersistentVolume and PersistentVolumeClaim.

A PersistentVolume (PV) is a piece of networked storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.

A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and memory). Claims can request specific sizes and access modes (e.g., mounted once read/write or many times read-only).

In simple words, containers in a Kubernetes cluster need storage that persists even if the container goes down or is no longer needed. So the Kubernetes administrator creates the storage (GlusterFS storage, in this case) and creates a PV for that storage. When a developer (Kubernetes cluster user) needs a persistent volume in a container, they create a Persistent Volume Claim. The claim contains the options the developer needs for the pods. From the list of Persistent Volumes, the best match is selected and bound to the claim. Now the developer can use the claim in the pods.


Prerequisites:

1) A Kubernetes or OpenShift cluster. My setup is one master and three nodes.

Note: you can use kubectl in place of oc; oc is the OpenShift client, essentially a wrapper around kubectl with OpenShift-specific additions.

2) A GlusterFS cluster. Create a GlusterFS volume and start it.

3) All nodes in the Kubernetes cluster must have the glusterfs-client package installed.

Now we have the prerequisites \o/ …

On the Kubernetes master, the administrator has to write the required YAML files, which will be given as input to the cluster.

There are three files to be written by the administrator and one by the developer.

Service
A service keeps the endpoint persistent/active.
Endpoint
The endpoint file points to the GlusterFS cluster locations.
PV
The PV (Persistent Volume) is where the administrator defines the Gluster volume name, the capacity of the volume, and the access mode.
PVC
The PVC (Persistent Volume Claim) is where the developer defines the type of storage needed.

STEP 1: Create a service for the gluster volume.
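
A minimal sketch of the service definition; the name glusterfs-cluster and the file name are placeholders, and the port value is arbitrary since the service only keeps the endpoint alive:

apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster
spec:
  ports:
  - port: 1

# oc create -f gluster-service.yaml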

Verify:
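
Something along these lines (with the placeholder service name used above):

# oc get service glusterfs-cluster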

STEP 2: Create an Endpoint for the gluster service.
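
A hedged example of the endpoints definition; it must have the same name as the service, and the IPs and file name are placeholders for the Gluster server nodes:

apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
- addresses:
  - ip: 192.168.1.100      # Gluster server node (placeholder)
  ports:
  - port: 1
- addresses:
  - ip: 192.168.1.101      # Gluster server node (placeholder)
  ports:
  - port: 1

# oc create -f gluster-endpoints.yaml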

The IPs here are the GlusterFS cluster node IPs.

STEP 3: Create a PV for the gluster volume.
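
A sketch of the PV definition, assuming a Gluster volume named MyVolume of 8 GiB; the PV name, volume name, size, and file name are placeholders:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-default-volume
spec:
  capacity:
    storage: 8Gi                   # size of the GlusterFS volume
  accessModes:
  - ReadWriteMany                  # RWX: can be mounted read/write by many nodes
  glusterfs:
    endpoints: glusterfs-cluster   # the endpoints created above
    path: MyVolume                 # the Gluster volume name
    readOnly: false

# oc create -f gluster-pv.yaml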

Note: path here is the Gluster volume name. The access mode specifies the way the volume can be accessed. Capacity holds the storage size of the GlusterFS volume.

STEP 4: Create a PVC for the gluster PV.
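
A sketch of the claim; the claim name and size are placeholders, chosen to match the PV above:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfs-claim
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 8Gi

# oc create -f gluster-pvc.yaml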

Note: the developer requests 8 GiB of storage with access mode RWX (read-write-many).

Here the PVC is bound as soon as it is created, because a PV that satisfies the requirement was found. Now let's go and check the PV status.
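
For example (with the placeholder names used above):

# oc get pvc glusterfs-claim
# oc get pv gluster-default-volume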

See, now the PV has been bound to "default/glusterfs-claim". At this point the developer has the Persistent Volume Claim bound successfully and can use the claim in a pod as shown below.

STEP 5: Use the Persistent Volume Claim in a pod defined by the developer.
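
A hedged reconstruction of such a pod definition; the pod, container, and volume names and the init command are assumptions, while the image and the /home mount path follow the description below:

apiVersion: v1
kind: Pod
metadata:
  name: gluster-pod                      # placeholder name
spec:
  containers:
  - name: gluster-client
    image: humble/gluster-client         # image mentioned in this post (a private image)
    command: ["/usr/sbin/init"]          # assumed entry point that starts the init script
    volumeMounts:
    - name: gluster-vol
      mountPath: /home                   # the claim is bind mounted here
  volumes:
  - name: gluster-vol
    persistentVolumeClaim:
      claimName: glusterfs-claim         # the PVC created in STEP 4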

The above pod definition will pull the humble/gluster-client image (a private image) and start its init script. The Gluster volume will be mounted on the host machine by the GlusterFS volume plugin available in Kubernetes and then bind mounted into the container's /home. That is why all the Kubernetes cluster nodes must have the glusterfs-client package installed.

Let's try running it.
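
Assuming the pod definition above is saved as gluster-pod.yaml (placeholder file name):

# oc create -f gluster-pod.yaml
# oc get pods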

Wow, it's running… let's go and check where it is running.
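
The wide output of oc get pods includes the node each pod is scheduled on:

# oc get pods -o wide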

The pod is running successfully on one of the Kubernetes nodes.

On the host:
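
On that node, the kubelet's FUSE mount of the Gluster volume should show up in the mount table; a quick way to check:

# mount | grep glusterfs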

I can see the Gluster volume mounted on the host \o/. Let's check inside the container. Note: the random number is the container ID from the docker ps command.
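
A hedged check; <container-id> is a placeholder for the ID reported by docker ps:

# docker ps
# docker exec -it <container-id> mount | grep /home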

Yippee, the GlusterFS volume has been mounted inside the container on /home as mentioned in the pod definition. Let's try writing something to it.
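
For example (the file name is arbitrary):

# docker exec -it <container-id> touch /home/hello-from-the-container
# docker exec -it <container-id> ls -l /home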

Since the access mode is RWX, I am able to write to the mount point.

That’s all Folks.