One more ceph-csi release? Yeah, v2.1.0 is here!

The Ceph-CSI team is excited to have reached the next milestone with the release of v2.1.0! [1] This is another great release with many improvements for Ceph and CSI integration, ready for production use with Kubernetes/OpenShift clusters. Of the many features and bug fixes, here are just a few of the highlights.

Release Issue # https://github.com/ceph/ceph-csi/issues/816


# Changelog or Highlights:

## Features:
Added support for RBD static PVCs (see the sketch after this list).
Moved CephFS subvolume support from `Alpha` to `Beta`.
Added support for RBD topology-based provisioning.
Added support for an externally managed ConfigMap.
Updated the base image to Ceph Octopus.
Added csiImageKey to keep track of the image name in the RADOS omap.
Added E2E tests for Helm charts.
...
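
For the RBD static PVC feature, here is a rough, illustrative sketch of what a statically provisioned RBD PersistentVolume can look like. The image, pool, clusterID and secret names below are placeholders, and the exact attribute names should be verified against the ceph-csi static PVC documentation.

[terminal]
apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-rbd-pv                  # placeholder PV name
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi                       # size of the pre-created RBD image
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: rbd.csi.ceph.com
    volumeHandle: existing-rbd-image   # name of the pre-created RBD image
    volumeAttributes:
      clusterID: "<cluster-id>"        # placeholder Ceph cluster ID
      pool: "<pool-name>"              # placeholder pool holding the image
      staticVolume: "true"
      imageFeatures: "layering"
    nodeStageSecretRef:
      name: csi-rbd-secret             # placeholder secret with Ceph credentials
      namespace: default
[/terminal]

A PVC requesting the same size and access mode, with its volumeName pointed at this PV, can then consume the image.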

## Enhancements:
Implemented CreateVolume with go-ceph, which boosts performance.
Migrated from `dep` to `go modules`.
Updated the Kubernetes version to v1.18.0.
Updated the golang version to 1.13.9.
Updated the Kubernetes sidecar containers to the latest versions.
E2E: added the ability to test with a non-root user.
...

## Bug Fixes:
Log an error message if the CephFS mounter fails during init.
Aligned with klog standards for logging.
Added support to run E2E in a different namespace.
Removed cache functionality for CephFS plugin restart.
rbd: fall back to inline image deletion if adding it as a task fails.
Code cleanup for static errors and unwanted code blocks.
Fixed a mount option issue in rbd.
travis: re-enabled running on arm64.
...

## Deprecated:
GRPC metrics in cephcsi

## Documentation:
Added a document on cleaning up stale resources.
Updated the Ceph-CSI support matrix.
dev-guide: added a reference to the required go-ceph dependencies.
Updated the upgrade doc for the node hang issue.
....

Many other bug fixes, code improvements, and README updates are also part of this release. The container image is tagged “v2.1.0” and is downloadable from quay.io.
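
It can be pulled directly:

[terminal]
# docker pull quay.io/cephcsi/cephcsi:v2.1.0
[/terminal]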

Kudos to the Ceph-CSI community for all the hard work to reach this critical milestone!
The Ceph-CSI project (https://github.com/ceph/ceph-csi/), as well as its thriving community, has continued to grow, and we are happy to share that this is our 7th release since Jul 12, 2019!

Happy Hacking!

Release Issue #
https://github.com/ceph/ceph-csi/issues/806
[1] https://github.com/ceph/ceph-csi/releases/tag/v2.1.0

Deploy a ceph cluster using Rook (rook.io) in kubernetes

[Updated on 20-Jun-2020: There have been many changes in Rook Ceph over the previous releases, so I am revisiting this blog article to accommodate those changes, based on a ping in Slack 🙂 ] In this article we will talk about how to deploy a Ceph (software-defined storage) cluster using a Kubernetes operator called ‘rook’. Before we get into …

Read more

Gluster CSI driver 1.0.0 (pre) release is out!!

We are pleased to announce the v1.0.0 (pre) release of the GlusterFS CSI driver. The release source code can be downloaded from github.com/gluster/gluster-csi-driver/archive/1.0.0-pre.0.tar.gz. Compared to the previous beta version of the driver, this release makes the Gluster CSI driver fully compatible with CSI spec v1.0.0 and Kubernetes release 1.13 ( kubernetes.io/blog/2018/12/03/kubernetes-1-13-release-announcement/ ). The CSI driver deployment can be …

Read more

CRDs, Operators/Controllers, Operator-sdk – Write your own controller for your kubernetes cluster – [Part 1]

You are here, so there are two possibilities: either you already know about the terms below, or you want to know more about them. In any case, I have to touch upon these terms before we proceed further. Custom Resource Definitions (CRD) Custom Resources (CR) Operators/Controllers Operator SDK Custom Resource Definitions/CRDs: In the …

Read more

Persistent Volume and Claim in OpenShift and Kubernetes using GlusterFS Volume Plugin

[Update] There is a more advanced method of gluster volume provisioning available for Kubernetes/OpenShift called `dynamic volume provisioning`, as discussed here: https://www.humblec.com/how-to-glusterfs-dynamic-volume-provisioner-in-kubernetes-v1-4-openshift/. However, reading through this article is still encouraged, to understand what we had before dynamic volume provisioning and to learn the method called `static provisioning` of volumes.


OpenShift is a platform as a service product from Red Hat. The software that runs the service is open-sourced under the name OpenShift Origin, and is available on GitHub.

OpenShift v3 is a layered system designed to expose underlying Docker and Kubernetes concepts as accurately as possible, with a focus on easy composition of applications by a developer. For example, install Ruby, push code, and add MySQL.

Docker is an open platform for developing, shipping, and running applications. With Docker you can separate your applications from your infrastructure and treat your infrastructure like a managed application. Docker does this by combining kernel containerization features with workflows and tooling that help you manage and deploy your applications. Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. Available on GitHub.

Kubernetes is an open-source system for automating deployment, operations, and scaling of containerized applications. It groups containers that make up an application into logical units for easy management and discovery. Kubernetes builds upon a decade and a half of experience of running production workloads at Google, combined with best-of-breed ideas and practices from the community. Available on GitHub.

GlusterFS is a scalable network filesystem. Using common off-the-shelf hardware, you can create large, distributed storage solutions for media streaming, data analysis, and other data- and bandwidth-intensive tasks. GlusterFS is free and open source software. Available on GitHub.

I hope you know a little bit about all of the above technologies; now we jump right into our topic, which is Persistent Volumes and Persistent Volume Claims in Kubernetes and OpenShift v3 using the GlusterFS volume plugin. So what is a Persistent Volume? Why do we need it? How does it work with the GlusterFS volume plugin?

In Kubernetes, managing storage is a distinct problem from managing compute. The PersistentVolume subsystem provides an API for users and administrators that abstracts the details of how storage is provided from how it is consumed. To do this, Kubernetes introduces two API resources: PersistentVolume and PersistentVolumeClaim.

A PersistentVolume (PV) is a piece of networked storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.

A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a pod: pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and memory); claims can request a specific size and access modes (e.g., a volume can be mounted once read/write or many times read-only).

In simple words: containers in a Kubernetes cluster need storage that persists even if the container goes down or is no longer needed. So the Kubernetes administrator creates the storage (GlusterFS storage, in this case) and creates a PV for that storage. When a developer (Kubernetes cluster user) needs a Persistent Volume in a container, he or she creates a Persistent Volume Claim. The Persistent Volume Claim contains the options the developer needs for the pods. From the list of Persistent Volumes, the best match is selected and bound to the claim. Now the developer can use the claim in the pods.


Prerequisites:

1) You need a Kubernetes or OpenShift cluster. My setup is one master and three nodes.

Note: you can use kubectl in place of oc; oc is the OpenShift command-line client, which wraps kubectl functionality and adds OpenShift-specific commands.
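
For example, the node listing used below can be fetched with either client:

[terminal]
# oc get nodes
# kubectl get nodes
[/terminal]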

[terminal]
#oc get nodes
NAME LABELS STATUS AGE
dhcp42-144.example.com kubernetes.io/hostname=dhcp42-144.example.com,name=node3 Ready 15d
dhcp42-235.example.com kubernetes.io/hostname=dhcp42-235.example.com,name=node1 Ready 15d
dhcp43-174.example.com kubernetes.io/hostname=dhcp43-174.example.com,name=node2 Ready 15d
dhcp43-183.example.com kubernetes.io/hostname=dhcp43-183.example.com,name=master Ready,SchedulingDisabled 15d
[/terminal]

2) You need a GlusterFS cluster set up. Create a GlusterFS volume and start it.
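
If you do not already have a volume, a minimal sketch of creating and starting one is shown below; the replica-2 layout and the brick paths are only assumptions for illustration, taken from the two bricks shown in the status output that follows.

[terminal]
# gluster volume create gluster_vol replica 2 170.22.42.84:/gluster_brick 170.22.43.77:/gluster_brick
# gluster volume start gluster_vol
[/terminal]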

[terminal]
# gluster v status
Status of volume: gluster_vol
Gluster process TCP Port RDMA Port Online Pid
——————————————————————————
Brick 170.22.42.84:/gluster_brick 49152 0 Y 8771
Brick 170.22.43.77:/gluster_brick 49152 0 Y 7443
NFS Server on localhost 2049 0 Y 7463
NFS Server on 170.22.42.84 2049 0 Y 8792
Task Status of Volume gluster_vol
——————————————————————————
There are no active volume tasks
[/terminal]

3) All nodes in the Kubernetes cluster must have the GlusterFS client package installed.
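
On RHEL/CentOS-style nodes this typically means installing the GlusterFS client bits, roughly as below; package names can differ per distribution.

[terminal]
# yum install -y glusterfs glusterfs-fuse
[/terminal]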

Now we have the prerequisites \o/ …

On the Kubernetes master, the administrator has to write the required YAML files, which will be given as input to the cluster.

There are three files to be written by the administrator and one by the developer.

Service
The Service keeps the Endpoints persistent or active.
Endpoint
The Endpoints object points to the GlusterFS cluster location.
PV
The PV (Persistent Volume) is where the administrator defines the Gluster volume name, the capacity of the volume and the access mode.
PVC
The PVC (Persistent Volume Claim) is where the developer defines the type of storage needed.

STEP 1: Create a service for the gluster volume.

[terminal]
# cat gluster_pod/gluster-service.yaml
apiVersion: "v1"
kind: "Service"
metadata:
  name: "glusterfs-cluster"
spec:
  ports:
  - port: 1
# oc create -f gluster_pod/gluster-service.yaml
service "glusterfs-cluster" created
[/terminal]

Verify:

[terminal]
# oc get service
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
glusterfs-cluster 172.30.251.13 1/TCP 9m
kubernetes 172.30.0.1 443/TCP,53/UDP,53/TCP 16d
[/terminal]

STEP 2: Create an Endpoint for the gluster service

[terminal]
# cat gluster_pod/gluster-endpoints.yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
- addresses:
  - ip: 170.22.43.77
  ports:
  - port: 1
[/terminal]

The IP here is the GlusterFS cluster (server) IP.
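
As a sketch, more than one address can be listed in the same Endpoints object, so that the mount does not depend on a single server address; here both servers from the gluster volume status above are listed.

[terminal]
subsets:
- addresses:
  - ip: 170.22.42.84
  - ip: 170.22.43.77
  ports:
  - port: 1
[/terminal]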

[terminal]
# oc create -f gluster_pod/gluster-endpoints.yaml
endpoints "glusterfs-cluster" created
# oc get endpoints
NAME ENDPOINTS AGE
glusterfs-cluster 170.22.43.77:1 3m
kubernetes 170.22.43.183:8053,170.22.43.183:8443,170.22.43.183:8053 16d
[/terminal]

STEP 3: Create a PV for the gluster volume.

[terminal]
# cat gluster_pod/gluster-pv.yaml
apiVersion: "v1"
kind: "PersistentVolume"
metadata:
  name: "gluster-default-volume"
spec:
  capacity:
    storage: "8Gi"
  accessModes:
  - "ReadWriteMany"
  glusterfs:
    endpoints: "glusterfs-cluster"
    path: "gluster_vol"
    readOnly: false
  persistentVolumeReclaimPolicy: "Recycle"
[/terminal]

Note: path here is the Gluster volume name. The access mode specifies the way the volume can be accessed. Capacity is the storage size of the GlusterFS volume.

[terminal]
# oc create -f gluster_pod/gluster-pv.yaml
persistentvolume "gluster-default-volume" created
# oc get pv
NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE
gluster-default-volume 8Gi RWX Available 36s
[/terminal]

STEP 4: Create a PVC for the gluster PV.

[terminal]
# cat gluster_pod/gluster-pvc.yaml
apiVersion: "v1"
kind: "PersistentVolumeClaim"
metadata:
  name: "glusterfs-claim"
spec:
  accessModes:
  - "ReadWriteMany"
  resources:
    requests:
      storage: "8Gi"
[/terminal]

Note: the developer requests 8 GiB of storage with access mode RWX (ReadWriteMany).

[terminal]
# oc create -f gluster_pod/gluster-pvc.yaml
persistentvolumeclaim "glusterfs-claim" created
# oc get pvc
NAME LABELS STATUS VOLUME CAPACITY ACCESSMODES AGE
glusterfs-claim Bound gluster-default-volume 8Gi RWX 14s
[/terminal]

Here the PVC is bound as soon as it is created, because a PV that satisfies the request was found. Now let’s go and check the PV status.

[terminal]
# oc get pv
NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE
gluster-default-volume 8Gi RWX Bound default/glusterfs-claim 5m
[/terminal]

See, the PV has now been bound to “default/glusterfs-claim”. At this point the developer has the Persistent Volume Claim bound successfully, and can use the claim in a pod as shown below.

STEP 5: Use the Persistent Volume Claim in a pod defined by the developer.

[terminal]
# cat gluster_pod/gluster_pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
  - name: mygluster
    image: humble/gluster-client
    command: ["/usr/sbin/init"]
    volumeMounts:
    - mountPath: "/home"
      name: gluster-default-volume
  volumes:
  - name: gluster-default-volume
    persistentVolumeClaim:
      claimName: glusterfs-claim
[/terminal]

The above pod definition will pull the humble/gluster-client image (a private image) and start its init script. The Gluster volume will be mounted on the host machine by the GlusterFS volume plugin available in Kubernetes and then bind-mounted into the container’s /home. That is why all the Kubernetes cluster nodes must have the glusterfs-client packages installed.

Let’s try running it.

[terminal]
# oc create -f gluster_pod/gluster_pod.yaml
pod "mypod" created
# oc get pods
NAME READY STATUS RESTARTS AGE
mypod 1/1 Running 0 1m
[/terminal]

Wow, it’s running… let’s go and check where it is running.

[terminal]
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ec57d62e3837 humble/gluster-client “/usr/sbin/init” 4 minutes ago Up 4 minutes k8s_myfedora.dc1f7d7a_mypod_default_5d301443-ec20-11e5-9076-5254002e937b_ed2eb8e5
1439dd72fb1d openshift3/ose-pod:v3.1.1.6 “/pod” 4 minutes ago Up 4 minutes k8s_POD.e071dbf6_mypod_default_5d301443-ec20-11e5-9076-5254002e937b_4d6a7afb
[/terminal]

Found the pod running successfully on one of the Kubernetes nodes.

On the host:

[terminal]
# df -h | grep gluster_vol
170.22.43.77:gluster_vol 35G 4.0G 31G 12% /var/lib/origin/openshift.local.volumes/pods/5d301443-ec20-11e5-9076-5254002e937b/volumes/kubernetes.io~glusterfs/gluster-default-volume
[/terminal]

I can see the Gluster volume mounted on the host \o/. Let’s check inside the container. Note: the random-looking string is the container ID from the docker ps output.

[terminal]
# docker exec -it ec57d62e3837 /bin/bash
[root@mypod /]# df -h | grep gluster_vol
170.22.43.77:gluster_vol 35G 4.0G 31G 12% /home
[/terminal]

Yippee, the GlusterFS volume has been mounted inside the container at /home, as mentioned in the pod definition. Let’s try writing something to it.

[terminal]
[root@mypod /]# mkdir /home/demo
[root@mypod /]# ls /home/
demo
[/terminal]

Since the access mode is RWX (ReadWriteMany), I am able to write to the mount point; a second pod could even mount the same claim, as sketched below.
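
As a quick, hypothetical sketch (the pod and container names mypod2/mygluster2 are made up), a second pod can reference the same claim because the access mode is ReadWriteMany:

[terminal]
kind: Pod
apiVersion: v1
metadata:
  name: mypod2
spec:
  containers:
  - name: mygluster2
    image: humble/gluster-client
    command: ["/usr/sbin/init"]
    volumeMounts:
    - mountPath: "/home"
      name: gluster-default-volume
  volumes:
  - name: gluster-default-volume
    persistentVolumeClaim:
      claimName: glusterfs-claim   # same claim as mypod
[/terminal]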

That’s all Folks.

Configure/install Swift and Gluster-Swift on a RHEL/CentOS system

Here is a step-by-step guide which helps you configure/install Swift and Gluster-Swift on a RHEL system. Install the EPEL repo: wget http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm rpm -ivh epel-release-6-8.noarch.rpm Install the OpenStack repo: cd /etc/yum.repos.d wget http://repos.fedorapeople.org/repos/openstack/openstack-trunk/el6-openstack-trunk.repo Install the GlusterFS repo: cd /etc/yum.repos.d wget http://download.gluster.org/pub/gluster/glusterfs/LATEST/RHEL/glusterfs-epel.repo Check if the EPEL, GlusterFS, and OpenStack repos have been installed correctly: yum repolist repo …

Read more

OSv (Best OS for Cloud Workloads) has a release announcement from Cloudius Systems!!

When ‘Dor Laor’ & ‘Avi Kivity’ stepped out of Red Hat, I was thinking, what is next ? 🙂 and it stands now on OSv with this mailing thread.. http://www.mail-archive.com/kvm@vger.kernel.org/msg95768.html It claims “OSv, probably the best OS for cloud workloads! and OSv is designed from the ground up to execute a single application on top …

Read more