Persistent Volume and Claim in OpenShift and Kubernetes using GlusterFS Volume Plugin

OpenShift is a platform as a service product from Red Hat. The software that runs the service is open-sourced under the name OpenShift Origin, and is available on GitHub.

OpenShift v3 is a layered system designed to expose underlying Docker and Kubernetes concepts as accurately as possible, with a focus on easy composition of applications by a developer. For example, install Ruby, push code, and add MySQL.

Docker is an open platform for developing, shipping, and running applications. With Docker you can separate your applications from your infrastructure and treat your infrastructure like a managed application. Docker does this by combining kernel containerization features with workflows and tooling that help you manage and deploy your applications. Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. Available on GitHub.

Kubernetes is an open-source system for automating deployment, operations, and scaling of containerized applications. It groups containers that make up an application into logical units for easy management and discovery. Kubernetes builds upon a decade and a half of experience of running production workloads at Google, combined with best-of-breed ideas and practices from the community. Available on GitHub.

GlusterFS is a scalable network filesystem. Using common off-the-shelf hardware, you can create large, distributed storage solutions for media streaming, data analysis, and other data- and bandwidth-intensive tasks. GlusterFS is free and open source software. Available on GitHub.

I hope you know a little bit about all of the above technologies. Now we jump right into our topic, which is Persistent Volumes and Persistent Volume Claims in Kubernetes and OpenShift v3 using the GlusterFS volume plugin. So what is a Persistent Volume? Why do we need it? How does it work with the GlusterFS volume plugin?

In Kubernetes, managing storage is a distinct problem from managing compute. The PersistentVolume subsystem provides an API for users and administrators that abstracts details of how storage is provided from how it is consumed. To do this, Kubernetes introduces two API resources: PersistentVolume and PersistentVolumeClaim.

A PersistentVolume (PV) is a piece of networked storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.

A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a pod: pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and memory); claims can request a specific size and access modes (e.g., can be mounted once read/write or many times read-only).

In simple words: containers in a Kubernetes cluster need storage which persists even if the container goes down or is no longer needed. So the Kubernetes administrator creates the storage (GlusterFS storage, in this case) and creates a PV for that storage. When a developer (a Kubernetes cluster user) needs persistent storage in a container, they create a Persistent Volume Claim. The claim contains the options the developer needs for the pods. From the list of Persistent Volumes, the best match is selected and bound to the claim. Now the developer can use the claim in the pods.


Prerequisites:

1) A Kubernetes or OpenShift cluster. My setup is one master and three nodes.

Note: you can use kubectl in place of oc; oc is the OpenShift client, essentially a wrapper around kubectl with OpenShift-specific additions.

2) A GlusterFS cluster set up, with a GlusterFS volume created and started.

3) All nodes in the Kubernetes cluster must have the glusterfs-client package installed.

Now we have the prerequisites \o/ …

On the Kubernetes master, the administrator has to write the required YAML files, which are then given as input to the cluster.

There are three files to be written by the administrator and one by the developer.

Service
The Service keeps the endpoint persistent (active), so the storage can always be reached.
Endpoint
The Endpoints object points to the GlusterFS cluster location, i.e. the GlusterFS server IPs.
PV
The Persistent Volume, where the administrator defines the Gluster volume name, the capacity of the volume and the access mode.
PVC
The Persistent Volume Claim, where the developer defines the type and amount of storage needed.

STEP 1: Create a service for the gluster volume.
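A minimal service sketch; the name glusterfs-cluster and the file name gluster-service.yaml are my assumptions, and the port value is just a dummy (it only has to be a valid port number):

apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster      # assumed name; the Endpoints object must use the same name
spec:
  ports:
  - port: 1                    # dummy port

oc create -f gluster-service.yaml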

Verify:
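For example (kubectl works the same way):

oc get services glusterfs-cluster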

STEP 2: Create an Endpoint for the gluster service
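A minimal Endpoints sketch, assuming a single GlusterFS server reachable at 192.168.1.100 (replace with your own server IPs and add one address entry per server):

apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster      # must match the service name from STEP 1
subsets:
- addresses:
  - ip: 192.168.1.100          # assumed GlusterFS server IP
  ports:
  - port: 1                    # dummy port matching the service

oc create -f gluster-endpoints.yaml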

The IP here is the GlusterFS cluster (server) IP.

STEP 3: Create a PV for the gluster volume.
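A PV sketch under a couple of assumptions: the Gluster volume is named glusterfs_vol and is 8Gi in size, and the PV is called gluster-default-volume:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-default-volume    # assumed PV name
spec:
  capacity:
    storage: 8Gi                  # size of the backing GlusterFS volume
  accessModes:
  - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster  # the Endpoints object from STEP 2
    path: glusterfs_vol           # assumed Gluster volume name
    readOnly: false

oc create -f gluster-pv.yaml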

Note: path here is the Gluster volume name, accessModes specifies the ways the volume can be accessed, and capacity holds the storage size of the GlusterFS volume.

STEP 4: Create a PVC for the gluster PV.
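A matching claim sketch; the name glusterfs-claim is taken from the binding shown later, and the size and access mode follow the note below:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfs-claim
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 8Gi

oc create -f gluster-pvc.yaml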

Note: the developer requests 8 GB of storage with access mode RWX (ReadWriteMany).

Here the PVC is bound as soon as it is created, because it found a PV that satisfies the requirement. Now let's go and check the PV status.
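For example:

oc get pvc glusterfs-claim
oc get pv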

See, now the PV has been bound to “default/glusterfs-claim”. At this point the developer has the Persistent Volume Claim bound successfully and can use the claim in a pod as shown below.

STEP 5: Use the persistent Volume Claim in a Pod defined by the Developer.
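A pod sketch along those lines; the pod name and the init command are my assumptions, while the humble/gluster-client image and the /home mount point come from the description below:

apiVersion: v1
kind: Pod
metadata:
  name: gluster-pod              # assumed pod name
spec:
  containers:
  - name: gluster-client
    image: humble/gluster-client # private image mentioned below
    command: ["/usr/sbin/init"]  # assumed init entry point
    volumeMounts:
    - name: gluster-home
      mountPath: /home           # the claim is mounted at /home inside the container
  volumes:
  - name: gluster-home
    persistentVolumeClaim:
      claimName: glusterfs-claim # the PVC created in STEP 4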

The above pod definition will pull the humble/gluster-client image (a private image) and start its init script. The Gluster volume will be mounted on the host machine by the GlusterFS volume plugin available in Kubernetes and then bind-mounted to the container's /home. That is why all the Kubernetes cluster nodes must have the glusterfs-client package installed.

Let's try running it.
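Assuming the pod definition above was saved as gluster-pod.yaml:

oc create -f gluster-pod.yaml
oc get pods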

Wow, it's running… let's go and check where it is running.
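For example:

oc describe pod gluster-pod | grep -i node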

Found the pod running successfully on one of the Kubernetes nodes.

On the host:
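On the node running the pod, something like this shows the fuse mount created by the plugin (the exact path under /var/lib/kubelet depends on the pod UID):

mount | grep glusterfs
df -h | grep glusterfs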

I can see the Gluster volume being mounted on the host \o/. Let's check inside the container. Note that the random number is the container ID from the docker ps command.
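For example, picking the container ID from docker ps and checking the mount inside it (<container-id> is a placeholder):

docker ps | grep gluster-client
docker exec -it <container-id> df -h /home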

Yippee, the GlusterFS volume has been mounted inside the container on /home as mentioned in the pod definition. Let's try writing something to it.
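For example:

docker exec -it <container-id> touch /home/hello-from-container
docker exec -it <container-id> ls -l /home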

Since the AccessMode is RWX I am able to write to the mount point.

That’s all Folks.

ovirt – docker integration !!

Previously, Oved had published a blog post about the same topic, and today I noticed this presentation from Federico. I was surprised to see the current status of this attempt!! Things are moving really fast and it looks promising!

[gview file=”http://www.ovirt.org/images/d/dd/2014-ovirt-docker-integration.pdf”]

Configure/install Swift and Gluster-Swift on a RHEL/CentOS system

Here is a step-by-step guide which helps you configure/install Swift and Gluster-Swift on a RHEL system.

 

Install EPEL repo:
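On a RHEL/CentOS 6 box, something like this works; the epel-release version in the URL changes over time, so adjust accordingly:

rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm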

Install Openstack Repo:
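I would use the RDO repository; something along these lines, where the release RPM targets grizzly to match the gluster-swift RPMs mentioned later (the URL is an assumption and may have moved):

yum install -y http://rdo.fedorapeople.org/openstack-grizzly/rdo-release-grizzly.rpm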

Install glusterfs Repo:
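A sketch, assuming the upstream repo file layout on download.gluster.org (adjust the path to the GlusterFS version you want):

wget -O /etc/yum.repos.d/glusterfs-epel.repo http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo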

Check if EPEL, glusterfs, and Openstack repos have been installed correctly:
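For example:

yum repolist | grep -iE 'epel|gluster|openstack'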

Install Openstack Swift
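With the repos in place, the Swift packages (plus memcached, which the proxy server uses) can be pulled in like this; the package set reflects the RDO packaging:

yum install -y openstack-swift openstack-swift-proxy openstack-swift-account openstack-swift-container openstack-swift-object memcached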

Install gluster-swift from the latest builds here: build.gluster.org/. As of now, RHEL6 RPMs are available only for the grizzly version, so we can just go ahead and use the Fedora builds for now. Alternatively, you can get the gluster-swift source code and run makerpm.sh on the RHEL box.
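Once the RPMs are downloaded from build.gluster.org, install them with rpm; the exact file names depend on the build you picked:

rpm -ivh --nodeps gluster-swift-*.rpm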

The --nodeps option is needed because gluster-swift has a hardcoded dependency on swift 1.9.1.

Installing glusterfs
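For example:

yum install -y glusterfs glusterfs-server glusterfs-fuse
service glusterd start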

To perform the following tasks:

- Create volumes and start them
- Copy and rename the conf files in /etc/swift
- Generate the ring files

follow the Quick Start Guide here: github.com/gluster/gluster-swift/blob/master/doc/markdown/quick_start_guide.md

Troubleshooting

1) If python-jinja2 shows up as a missing dependency, you can install python-jinja2 from here: rpm.pbone.net/index.php3/stat/4/idpl/18007532/dir/redhat_el_6/com/python-jinja2-2.2.1-1.el6.rf.x86_64.rpm.html

yum install ftp.univie.ac.at/systems/linux/dag/redhat/el6/en/x86_64/dag/RPMS/python-jinja2-2.2.1-1.el6.rf.x86_64.rpm

2) If you are unable to find the mkfs.xfs command, you'll have to install the xfsprogs package from here: rpmfind.net/linux/rpm2html/search.php?query=xfsprogs

3) If the logs in /var/log/httpd contain [Errno 111] ECONNREFUSED, you need to set SELinux to permissive or disabled by editing /etc/sysconfig/selinux, and verify by running: getenforce

OSv (“Best OS for Cloud Workloads”) has a release announcement from Cloudius Systems!!

When ‘Dor Laor’ and ‘Avi Kivity’ stepped out of Red Hat, I was thinking, what is next? 🙂 The answer now stands as OSv, announced in this mailing thread: www.mail-archive.com/kvm@vger.kernel.org/msg95768.html

It claims: “OSv, probably the best OS for cloud workloads! OSv is designed from the ground up to execute a single application on top of a hypervisor, resulting in superior performance and effortless management.”

For those who haven't heard of it: “OSv reduces the memory and CPU overhead imposed by a traditional OS. Scheduling is lightweight, the application and the kernel cooperate, and memory pools are shared. It provides unparalleled short latencies and constant predictable performance, translated directly to capex savings by reducing the number and size of OS instances.”

The project comes from Cloudius Systems.

Obviously, running OSv under KVM was my first try..

Let me show how it went! The run was done on a ‘Fedora 19’ system. There is not much to do to launch OSv beyond what Pekka pointed out in his GitHub repo.

1) First of all, let's download the OS image:
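Something like the following; the download URL is a placeholder, use the link from the OSv announcement or Pekka's repo:

wget <osv-image-download-url>/osv-v0.01.qcow2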


2) Then you need to create a ‘qemu-ifup.sh’ script with below:
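A minimal sketch of the script, assuming a pre-existing bridge named br0 (QEMU invokes it with the tap device name as $1); do not forget to chmod +x it:

#!/bin/sh
# attach the tap device created by QEMU to the bridge and bring it up
brctl addif br0 $1
ip link set $1 up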

[Humble@localhost osv]$ ls
osv-v0.01.qcow2

3) Make sure you have loaded the KVM modules on your Fedora 19 system:
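For example:

lsmod | grep kvm
# if the modules are not listed, load the one matching your CPU vendor
sudo modprobe kvm_intel    # or kvm_amd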

4) Now run the VM with the command below:
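Something along these lines; the memory and vCPU counts are my choices, and the networking part uses the qemu-ifup.sh script from step 2:

sudo qemu-system-x86_64 -enable-kvm -cpu host -m 1G -smp 2 \
    -drive file=osv-v0.01.qcow2,if=virtio \
    -netdev tap,id=hn0,script=./qemu-ifup.sh,downscript=no \
    -device virtio-net-pci,netdev=hn0 \
    -nographic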

As soon as you run the above command, you can see the guest booting:

Wow! You have a shell in half a second…

If I treat the above messages as ‘dmesg’ output in the traditional way, did you notice this?


VFS: mounting ramfs at /
VFS: mounting devfs at /dev

VFS: mounting zfs at /usr
zfs: mounting osv/usr from device /dev/vblk0.1

Like you, I didn't know which commands could be tried… so I gave ‘help’ a try first, and it offered help 🙂

Meanwhile I discussed a usability bug [1] with Pekka and Or Cohen and got it fixed very quickly.. so the team is active and on it!!

[1] groups.google.com/forum/#!topic/osv-dev/t2I07p5kvO0

References:

1) OSv Project Home Page
2) OSv GitHub
3) OSv presentation

[update]

Very recently ‘Glauber Costa’ came up with a write-up on “Comparison between OSv and Containers”.. it should be well worth a read..

Containers Vs Osv