[Alpha] GlusterFS CSI (Container Storage Interface) Driver for Container Orchestrators!

Every container and cloud storage vendor wants a standard interface that enables unbiased solution development without requiring a non-trivial testing matrix. “Container Storage Interface” (CSI) is a proposed new industry standard for cluster-wide volume plugins. CSI will enable storage providers (SP) to develop a plugin once and …

Read more

GlusterFS Containers with Docker, Kubernetes and Openshift

Thought of sharing consolidated news on the GlusterFS container efforts here. Below is a snippet of the email I sent a few days back to the gluster-users and gluster-devel mailing lists. I hope it gives a good summary; if not, please let me know.


I would like to provide a status update on the developments with GlusterFS containers and their presence in projects like Docker, Kubernetes, and OpenShift.

We have containerized GlusterFS with CentOS and Fedora base images, and the images are available on Docker Hub[1] to consume.

The Dockerfiles for the images can be found on GitHub[2].

You can pull the image with

[terminal]
# docker pull gluster/gluster-centos
# docker pull gluster/gluster-fedora
[/terminal]

The exact steps to run a GlusterFS container are documented here[3].

We can deploy GlusterFS pods in a Kubernetes environment; an example blog about this setup can be found here[4].

There is a GlusterFS volume plugin available in Kubernetes and OpenShift v3 which provides Persistent Volumes to the containers in the environment. How to use GlusterFS containers for a Persistent Volume and Persistent Volume Claim in OpenShift has been recorded at [5].

[1]https://hub.docker.com/r/gluster/
[2]https://github.com/gluster/docker/
[3]http://tinyurl.com/jupgene
[4]http://tinyurl.com/zsrz36y
[5]http://tinyurl.com/hne8g7o

Please let us know if you have any comments/suggestions/feedback.

Persistent Volume and Claim in OpenShift and Kubernetes using GlusterFS Volume Plugin

[Update] There is a more advanced method of Gluster volume provisioning available for Kubernetes/OpenShift called `dynamic volume provisioning`, as discussed here: https://www.humblec.com/how-to-glusterfs-dynamic-volume-provisioner-in-kubernetes-v1-4-openshift/. However, reading through this article is still encouraged, both to see what we had before dynamic volume provisioning and to learn the method called `static provisioning` of volumes.

OpenShift is a platform as a service product from Red Hat. The software that runs the service is open-sourced under the name OpenShift Origin, and is available on GitHub.

OpenShift v3 is a layered system designed to expose underlying Docker and Kubernetes concepts as accurately as possible, with a focus on easy composition of applications by a developer. For example, install Ruby, push code, and add MySQL.

Docker is an open platform for developing, shipping, and running applications. With Docker you can separate your applications from your infrastructure and treat your infrastructure like a managed application. Docker does this by combining kernel containerization features with workflows and tooling that help you manage and deploy your applications. Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. Available on GitHub.

Kubernetes is an open-source system for automating deployment, operations, and scaling of containerized applications. It groups containers that make up an application into logical units for easy management and discovery. Kubernetes builds upon a decade and a half of experience of running production workloads at Google, combined with best-of-breed ideas and practices from the community. Available on GitHub.

GlusterFS is a scalable network filesystem. Using common off-the-shelf hardware, you can create large, distributed storage solutions for media streaming, data analysis, and other data- and bandwidth-intensive tasks. GlusterFS is free and open source software. Available on GitHub.

Hopefully you know a little bit about all of the above technologies. Now we jump right into our topic: Persistent Volumes and Persistent Volume Claims in Kubernetes and OpenShift v3 using the GlusterFS volume plugin. So what is a Persistent Volume? Why do we need it? How does it work with the GlusterFS volume plugin?

In Kubernetes, managing storage is a distinct problem from managing compute. The PersistentVolume subsystem provides an API for users and administrators that abstracts details of how storage is provided from how it is consumed. To do this, Kubernetes introduces two API resources: PersistentVolume and PersistentVolumeClaim.

A PersistentVolume (PV) is a piece of networked storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.

A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a pod: pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and memory); claims can request a specific size and access modes (e.g., can be mounted once read/write or many times read-only).

In simple words: containers in a Kubernetes cluster need storage that persists even if the container goes down or is no longer needed. So the Kubernetes administrator creates the storage (GlusterFS storage, in this case) and creates a PV for it. When a developer (a Kubernetes cluster user) needs a persistent volume in a container, they create a Persistent Volume Claim. The claim contains the options the developer needs for the pods. From the list of Persistent Volumes, the best match is selected and bound to the claim. The developer can then use the claim in the pods.


Prerequisites:

1) A Kubernetes or OpenShift cluster. My setup has one master and three nodes.

Note: you can use kubectl in place of oc; oc is the OpenShift client, a wrapper around kubectl that adds OpenShift-specific commands.

[terminal]
# oc get nodes
NAME LABELS STATUS AGE
dhcp42-144.example.com kubernetes.io/hostname=dhcp42-144.example.com,name=node3 Ready 15d
dhcp42-235.example.com kubernetes.io/hostname=dhcp42-235.example.com,name=node1 Ready 15d
dhcp43-174.example.com kubernetes.io/hostname=dhcp43-174.example.com,name=node2 Ready 15d
dhcp43-183.example.com kubernetes.io/hostname=dhcp43-183.example.com,name=master Ready,SchedulingDisabled 15d
[/terminal]
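For scripting against that listing, the node name and its `name=` label can be extracted; a small sketch (the function name is mine, not an oc feature):

```shell
# Sketch: read `oc get nodes` output on stdin and print "<node> <name-label>".
node_labels() {
  awk 'NR > 1 { n = $2; sub(/.*name=/, "", n); print $1, n }'
}
# usage: oc get nodes | node_labels
```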

2) A GlusterFS cluster. Create a GlusterFS volume and start it.

[terminal]
# gluster v status
Status of volume: gluster_vol
Gluster process TCP Port RDMA Port Online Pid
——————————————————————————
Brick 170.22.42.84:/gluster_brick 49152 0 Y 8771
Brick 170.22.43.77:/gluster_brick 49152 0 Y 7443
NFS Server on localhost 2049 0 Y 7463
NFS Server on 170.22.42.84 2049 0 Y 8792
Task Status of Volume gluster_vol
——————————————————————————
There are no active volume tasks
[/terminal]

3) All nodes in the Kubernetes cluster must have the glusterfs-client package installed.

Now we have the prerequisites \o/ …

On the kube-master, the administrator has to write the required YAML files, which will be given as input to the kube cluster.

There are three files to be written by the administrator and one by the developer:

Service
The service keeps the endpoint persistent (active).
Endpoint
The endpoint file points to the GlusterFS cluster location.
PV
The Persistent Volume, where the administrator defines the Gluster volume name, the capacity of the volume, and the access mode.
PVC
The Persistent Volume Claim, where the developer defines the type of storage needed.
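The administrator-side files can also be generated from one small script; a minimal sketch, where `GIP` (Gluster server IP), `GVOL` (volume name), and `OUT` (output directory) are placeholders you should adjust for your environment:

```shell
#!/bin/sh
# Sketch: emit the Service, Endpoints, and PV manifests for a Gluster volume.
# GIP, GVOL, and OUT are placeholders; the defaults mirror this walkthrough.
GIP=${GIP:-170.22.43.77}
GVOL=${GVOL:-gluster_vol}
OUT=${OUT:-gluster_pod}
mkdir -p "$OUT"

cat > "$OUT/gluster-service.yaml" <<EOF
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster
spec:
  ports:
  - port: 1
EOF

cat > "$OUT/gluster-endpoints.yaml" <<EOF
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
- addresses:
  - ip: $GIP
  ports:
  - port: 1
EOF

cat > "$OUT/gluster-pv.yaml" <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-default-volume
spec:
  capacity:
    storage: 8Gi
  accessModes:
  - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster
    path: $GVOL
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle
EOF

echo "wrote manifests to $OUT/"
```

Run it, review the generated YAML, then feed each file to `oc create -f` as shown in the steps below.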

STEP 1: Create a service for the gluster volume.

[terminal]
# cat gluster_pod/gluster-service.yaml
apiVersion: "v1"
kind: "Service"
metadata:
  name: "glusterfs-cluster"
spec:
  ports:
  - port: 1
# oc create -f gluster_pod/gluster-service.yaml
service "glusterfs-cluster" created
[/terminal]

Verify:

[terminal]
# oc get service
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
glusterfs-cluster 172.30.251.13 1/TCP 9m
kubernetes 172.30.0.1 443/TCP,53/UDP,53/TCP 16d
[/terminal]

STEP 2: Create an Endpoint for the gluster service

[terminal]
# cat gluster_pod/gluster-endpoints.yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
- addresses:
  - ip: 170.22.43.77
  ports:
  - port: 1
[/terminal]

The `ip` here is the GlusterFS server IP.

[terminal]
# oc create -f gluster_pod/gluster-endpoints.yaml
endpoints "glusterfs-cluster" created
# oc get endpoints
NAME ENDPOINTS AGE
glusterfs-cluster 170.22.43.77:1 3m
kubernetes 170.22.43.183:8053,170.22.43.183:8443,170.22.43.183:8053 16d
[/terminal]

STEP 3: Create a PV for the gluster volume.

[terminal]
# cat gluster_pod/gluster-pv.yaml
apiVersion: "v1"
kind: "PersistentVolume"
metadata:
  name: "gluster-default-volume"
spec:
  capacity:
    storage: "8Gi"
  accessModes:
    - "ReadWriteMany"
  glusterfs:
    endpoints: "glusterfs-cluster"
    path: "gluster_vol"
    readOnly: false
  persistentVolumeReclaimPolicy: "Recycle"
[/terminal]

Note: `path` here is the Gluster volume name, the access mode specifies the way to access the volume, and the capacity is the storage size of the GlusterFS volume.

[terminal]
# oc create -f gluster_pod/gluster-pv.yaml
persistentvolume "gluster-default-volume" created
# oc get pv
NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE
gluster-default-volume 8Gi RWX Available 36s
[/terminal]

STEP 4: Create a PVC for the gluster PV.

[terminal]
# cat gluster_pod/gluster-pvc.yaml
apiVersion: "v1"
kind: "PersistentVolumeClaim"
metadata:
  name: "glusterfs-claim"
spec:
  accessModes:
    - "ReadWriteMany"
  resources:
    requests:
      storage: "8Gi"
[/terminal]

Note: the developer requests 8 GiB of storage with access mode RWX.

[terminal]
# oc create -f gluster_pod/gluster-pvc.yaml
persistentvolumeclaim "glusterfs-claim" created
# oc get pvc
NAME LABELS STATUS VOLUME CAPACITY ACCESSMODES AGE
glusterfs-claim Bound gluster-default-volume 8Gi RWX 14s
[/terminal]

Here the PVC is bound as soon as it is created, because it found a PV that satisfies the requirement. Now let's go and check the PV status.

[terminal]
# oc get pv
NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE
gluster-default-volume 8Gi RWX Bound default/glusterfs-claim 5m
[/terminal]

See, now the PV has been bound to "default/glusterfs-claim". At this point the developer has the Persistent Volume Claim bound successfully, and can use the claim in a pod like below.

STEP 5: Use the Persistent Volume Claim in a pod defined by the developer.

[terminal]
# cat gluster_pod/gluster_pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
    - name: mygluster
      image: humble/gluster-client
      command: ["/usr/sbin/init"]
      volumeMounts:
        - mountPath: "/home"
          name: gluster-default-volume
  volumes:
    - name: gluster-default-volume
      persistentVolumeClaim:
        claimName: glusterfs-claim
[/terminal]

The above pod definition will pull the humble/gluster-client image (a private image) and start its init script. The Gluster volume will be mounted on the host machine by the GlusterFS volume plugin available in Kubernetes and then bind-mounted to the container's /home. This is why all the Kubernetes cluster nodes must have the glusterfs-client package installed.

Let's try running it.

[terminal]
# oc create -f gluster_pod/gluster_pod.yaml
pod "mypod" created
# oc get pods
NAME READY STATUS RESTARTS AGE
mypod 1/1 Running 0 1m
[/terminal]

Wow, it's running… let's go and check where it is running.

[terminal]
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ec57d62e3837 humble/gluster-client “/usr/sbin/init” 4 minutes ago Up 4 minutes k8s_myfedora.dc1f7d7a_mypod_default_5d301443-ec20-11e5-9076-5254002e937b_ed2eb8e5
1439dd72fb1d openshift3/ose-pod:v3.1.1.6 “/pod” 4 minutes ago Up 4 minutes k8s_POD.e071dbf6_mypod_default_5d301443-ec20-11e5-9076-5254002e937b_4d6a7afb
[/terminal]

Found the Pod running successfully on one of the Kubernetes node.

On the host:

[terminal]
# df -h | grep gluster_vol
170.22.43.77:gluster_vol 35G 4.0G 31G 12% /var/lib/origin/openshift.local.volumes/pods/5d301443-ec20-11e5-9076-5254002e937b/volumes/kubernetes.io~glusterfs/gluster-default-volume
[/terminal]

I can see the Gluster volume mounted on the host \o/. Let's check inside the container. Note that the random-looking string is the container ID from the docker ps command.

[terminal]
# docker exec -it ec57d62e3837 /bin/bash
[root@mypod /]# df -h | grep gluster_vol
170.22.43.77:gluster_vol 35G 4.0G 31G 12% /home
[/terminal]

Yippee, the GlusterFS volume has been mounted inside the container on /home as mentioned in the pod definition. Let's try writing something to it.

[terminal]
[root@mypod /]# mkdir /home/demo
[root@mypod /]# ls /home/
demo
[/terminal]

Since the access mode is RWX, I am able to write to the mount point.

That’s all Folks.

GlusterFS containers in a Kubernetes cluster for a persistent data store!!

Everything is getting containerized, and so is Gluster. As you know, Gluster container images (for both CentOS and Fedora) have been available on Docker Hub for a long time. In previous blog posts, we saw how to build and run Gluster containers. In this setup, we will set up a Kubernetes cluster with Gluster containers. If you don't know much about Kubernetes, please go through this. In short, Kubernetes is an orchestration system for container environments which brings services like scheduling, service discovery, etc. We will deploy a Kubernetes cluster on a couple of Atomic hosts, then run Gluster containers on these Atomic hosts via Kubernetes. Once the Gluster containers are running, we will form a trusted pool out of them and export a volume, so that other application containers can make use of this volume to store their data persistently!!

Sounds interesting? Yes, let us start.

NOTE: This article also discusses the steps to configure an etcd server (a key-value store). For this particular setup we may not need etcd; however, your environment may, for example to configure flannel.

Setup

Three CentOS Atomic hosts (you can also use Fedora/RHEL Atomic):

[terminal]
centos-atomic-KubeMaster
centos-atomic-Kubenode1
centos-atomic-Kubenode2
[/terminal]

To configure/install the CentOS Atomic hosts, please follow the steps mentioned here; the Atomic images can be downloaded from here.

Then start the Atomic installation. If cloud-init is configured, it will come into play and present the "atomic host" login.

[terminal]
username: centos
password: atomic

[/terminal]

Note: The above is based on the cloud-init configuration. If you have customized cloud-init with a different username and password, supply those instead. (Wait for the VM to completely load the meta-data and user-data; otherwise it will report an invalid login until loading completes.)
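For reference, a minimal cloud-init user-data fragment that produces the password login above might look like this (a sketch; adapt it to your environment):

```yaml
#cloud-config
# sets the password for the default user (centos on CentOS Atomic)
password: atomic
ssh_pwauth: True
chpasswd:
  expire: False
```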

At this stage we have three Atomic hosts:

[terminal]
10.70.42.184 centos-atomic-KubeMaster
10.70.42.29 centos-atomic-Kubenode1
10.70.43.88 centos-atomic-Kubenode2

[/terminal]

If you already have this setup, make sure all the machines are able to talk to each other.

First things first,

[terminal]
-bash-4.2# atomic host upgrade

[/terminal]

Upgrade all nodes to the latest docker, etcd, kubernetes, etc.
With the three systems in place, the next step is to set up Kubernetes. First, set up Kubernetes on the master; select any system to be the master.

1. Etcd configuration:
Edit `/etc/etcd/etcd.conf`. The etcd service needs to be configured to listen on all interfaces, on port 2380 for peers (`ETCD_LISTEN_PEER_URLS`) and port 2379 for clients (`ETCD_LISTEN_CLIENT_URLS`).

[terminal]
-bash-4.2# cat /etc/etcd/etcd.conf | grep -v "#"
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"

[/terminal]
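A quick way to confirm the edit took effect is to grep the file for the expected values; a small sketch (the function name is mine):

```shell
# Sketch: succeed only if etcd.conf listens on all interfaces for peers and clients.
etcd_listen_ok() {  # usage: etcd_listen_ok /etc/etcd/etcd.conf
  grep -qF 'ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"' "$1" &&
  grep -qF 'ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"' "$1"
}
```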
2. Kubernetes Configuration:

Edit the `/etc/kubernetes/config` file and change the KUBE_MASTER line to identify the location of your master server (it points to 127.0.0.1, by default). Leave other settings as they are.

[terminal]
KUBE_MASTER="--master=http://10.70.42.184:8080"

[/terminal]
3. Kubernetes apiserver Configuration:

Edit `/etc/kubernetes/apiserver` and add a new `KUBE_ETCD_SERVERS` line (as shown below), then review and change other lines in the apiserver configuration file. Change `KUBE_API_ADDRESS` to listen on all network addresses (0.0.0.0) instead of just localhost. Set an address range in `KUBE_SERVICE_ADDRESSES` that Kubernetes can use to assign to services. Finally, remove the term "ServiceAccount" from the `KUBE_ADMISSION_CONTROL` instruction.

[terminal]
-bash-4.2# cat /etc/kubernetes/apiserver | grep -v "#"
KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_ETCD_SERVERS="--etcd_servers=http://10.70.42.184:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.100.0/24"
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_API_ARGS=""

[/terminal]

4. Start master services:

To run the Kubernetes master services, you need to enable and start several systemd services. From the master, run the following for loop to start and enable Kubernetes systemd services on the master:

[terminal]
-bash-4.2# for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
systemctl restart $SERVICES;
systemctl enable $SERVICES;
systemctl status $SERVICES;
done

[/terminal]

5. Setting up Kubernetes on the Nodes

On each of the two Kubernetes nodes, you need to edit several configuration files and start and enable several Kubernetes systemd services:

1.Edit `/etc/kubernetes/config:`

Edit the `KUBE_MASTER` line in this file to identify the location of your master (it is 127.0.0.1, by default). `allow_privileged` must be set to true. Leave other settings as they are.

[terminal]
KUBE_ALLOW_PRIV="--allow_privileged=true"
KUBE_MASTER="--master=http://10.70.42.184:8080"

[/terminal]

2.Edit /etc/kubernetes/kubelet:

In this file on each node, modify `KUBELET_ADDRESS` (0.0.0.0 to listen on all network interfaces) and `KUBELET_HOSTNAME` (replace hostname_override with the hostname or IP address of the local system; you may leave it blank to use the actual hostname). Set `KUBELET_ARGS` and `KUBELET_API_SERVER` as below. `--host-network-sources=*` is specified to allow the host networking option of Docker (`--net=host`). You can use any Docker networking mode; however, in this setup we use `--net=host` to get maximum performance.

[terminal]
bash-4.2# cat /etc/kubernetes/kubelet | grep -v "#"
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname_override="
KUBELET_API_SERVER="--api_servers=http://10.70.42.184:8080"
KUBELET_ARGS="--register-node=true --host-network-sources=*"

[/terminal]
3. Edit /etc/kubernetes/proxy:
No settings are required in this file. If you have set `KUBE_PROXY_ARGS`, you can comment it out:

[terminal]
bash-4.2# cat /etc/kubernetes/proxy
###
# kubernetes proxy config
# default config should be adequate
# Add your own!
#KUBE_PROXY_ARGS="--master=http://master.example.com:8080"

[/terminal]

4. Start the Kubernetes nodes systemd services:

On each node, you need to start several services associated with a Kubernetes node:

[terminal]
-bash-4.2# for SERVICES in docker kube-proxy.service kubelet.service; do
systemctl restart $SERVICES;
systemctl enable $SERVICES;
systemctl status $SERVICES; done

[/terminal]

5. Check the services:
Run the `netstat` command on each of the three systems to check which ports the services are running on. The etcd service should be running only on the master.

From master:

[terminal]
-bash-4.2# netstat -tulnp | grep -E "(kube)|(etcd)"
tcp 0 0 127.0.0.1:10251 0.0.0.0:* LISTEN 17805/kube-schedule
tcp 0 0 127.0.0.1:10252 0.0.0.0:* LISTEN 17764/kube-controll
tcp6 0 0 :::6443 :::* LISTEN 17833/kube-apiserve
tcp6 0 0 :::2379 :::* LISTEN 17668/etcd
tcp6 0 0 :::2380 :::* LISTEN 17668/etcd
tcp6 0 0 :::8080 :::* LISTEN 17833/kube-apiserve

[/terminal]
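Output like the above can also be checked mechanically; a small sketch (the function name is mine) that verifies a list of ports against `netstat -tulnp` output:

```shell
# Sketch: read netstat output on stdin and verify each given port has a listener.
check_ports() {
  out=$(cat)
  for p in "$@"; do
    echo "$out" | grep -q ":$p " || { echo "port $p not listening"; return 1; }
  done
  echo "all ports up"
}
# usage: netstat -tulnp | check_ports 2379 2380 8080
```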

From nodes:

[terminal]
-bash-4.2# netstat -tulnp | grep kube
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 104398/kubelet
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 104331/kube-proxy
tcp6 0 0 :::10250 :::* LISTEN 104398/kubelet
tcp6 0 0 :::57421 :::* LISTEN 104331/kube-proxy
tcp6 0 0 :::10255 :::* LISTEN 104398/kubelet
tcp6 0 0 :::34269 :::* LISTEN 104331/kube-proxy
tcp6 0 0 :::58239 :::* LISTEN 104331/kube-proxy
tcp6 0 0 :::4194 :::* LISTEN 104398/kubelet

[/terminal]

Read more

“Running with unpopulated /etc” – failing to run a systemd-based container?

Recently I found that systemd-based containers fail to run on certain distro versions.

For example, if I run my container with systemd I get the messages below.

[terminal]
#docker run --rm -t -i --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro
[/terminal]

[terminal]
systemd 219 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ -LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN)
Detected virtualization docker.
Detected architecture x86-64.
Running with unpopulated /etc.
….
Set hostname to .
Initializing machine ID from random generator.
Populated /etc with preset unit settings.
Unit etc-hosts.mount is bound to inactive unit dev-mapper-X.X.root.device. Stopping, too.
Unit etc-hostname.mount is bound to inactive unit dev-mapper-X.X.root.device. Stopping, too….
Unit etc-resolv.conf.mount is bound to inactive unit dev-mapper-X.X.root.device. Stopping, too.
Cannot add dependency job for unit display-manager.service, ignoring: Unit display-manager.service failed to load: No such file or directory.
Startup finished in 106ms.
Failed to create unit file /run/systemd/generator.late/network.service: File exists
Failed to create unit file /run/systemd/generator.late/netconsole.service: File exists

[/terminal]

The line ‘Running with unpopulated /etc’ looked suspicious to me. After some attempts we were able to conclude that things were going wrong in the absence of the ‘/etc/machine-id’ file, which used to be there. If you have come across a similar situation, add an entry to your Dockerfile to create /etc/machine-id as shown below and give it a try!

[terminal]
RUN touch /etc/machine-id
[/terminal]

Now build your image and start a container from the new image. Let me know how it goes.
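In context, the fix is a one-line addition to the Dockerfile; a minimal sketch (the base image and init command here are assumptions, not the exact image from above):

```dockerfile
FROM centos:7
# create an empty /etc/machine-id so systemd does not start with an
# unpopulated /etc; systemd fills it in on first boot
RUN touch /etc/machine-id
CMD ["/usr/sbin/init"]
```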

Docker gotchas (1. ENTRYPOINT vs CMD in a Dockerfile)

Well, most folks are really confused about the difference between ENTRYPOINT and CMD, so here you go.
It's simple: whatever you put in CMD will be given to the ENTRYPOINT as its arguments.

For example, if you have a Dockerfile like this:

CMD ["-tupln"]
ENTRYPOINT ["/usr/bin/netstat"]

and you run the container without any other command (for example, #docker run ..... ),
the above instructions effectively make the container execute "netstat -tupln".
Also note that when CMD is given in shell form, it is run through "/bin/sh -c".
However, any command appended to "#docker run ... .." will overwrite the CMD.

You also need to pay attention to the syntax you use for CMD or ENTRYPOINT. The difference between the array syntax (CMD ["…"]) and the shell syntax (CMD ls -l) is that with the shell syntax the command is wrapped with "/bin/sh -c", which may produce unexpected results. So it is always advisable to use the array syntax with CMD and ENTRYPOINT.
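To make the interaction concrete, here is a toy Dockerfile using the array syntax for both instructions (the image tag in the usage note is hypothetical):

```dockerfile
FROM fedora
# exec (array) form: no /bin/sh -c wrapper, netstat runs directly
ENTRYPOINT ["/usr/bin/netstat"]
# default arguments, replaceable at `docker run` time
CMD ["-tupln"]
```

Built as, say, `mynetstat`: `docker run mynetstat` executes `netstat -tupln`, while `docker run mynetstat -an` replaces only the CMD part, giving `netstat -an`.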

Docker Tips and Tricks

In this article I will share some of the Docker commands I use from time to time. Maybe they can help you as well.

1) List all containers, including those that are not running (by default only running containers are listed).

#docker ps -a

2) Delete all stopped containers.

$ docker rm $(docker ps -a -q)

3) IP address of all running containers.

$docker inspect $(docker ps -q) | grep IPAddress | cut -d '"' -f 4

4) If you want to list IP address of one of the container.

#docker inspect -f '{{ .NetworkSettings.IPAddress }}' <container ID>

5) If you want to get into a container, use “docker exec” command. You can treat this as a replacement for “sshd” or “nsenter”.

$docker exec -ti <container ID> /bin/bash

6) Stop all running containers.

$docker stop $(docker ps -a -q)

7) To delete all existing docker images.

$docker rmi $(docker images -q -a)

8) To examine the logs of a container, use `docker logs` command with `-f` option which follows log output and continues streaming the new output from the container’s STDOUT and STDERR.:

$docker logs -f <container ID>

9) Get the ID of last ran container.

$docker ps -a -q -l --no-trunc=true

10) Most of the time we want to `exec` into the container we just ran.

#docker exec -ti $(docker ps -a -q -l) /bin/bash

11) To list out the environment variables of an image

#docker run <image> env

Gluster volume plugin for Docker!!

Is a Gluster volume plugin available for Docker?

Yes, it's available here.

This article talks about how to use this plugin and make use of gluster volume when spawning docker containers.

For the Gluster volume plugin to work, we need an experimental build of Docker, which can be fetched from the Docker GitHub. If you don't have the experimental binary of Docker running on your system, get it from there.

https://github.com/docker/docker/tree/master/experimental has instructions on how to run the Docker experimental binary.

Once your Docker daemon is running from the experimental build, pull the Gluster volume plugin from the GitHub source.

[terminal]
[root@dhcp35-20 go]# go get github.com/calavera/docker-volume-glusterfs
[/terminal]

As mentioned in the README file on GitHub, you need to execute ‘docker-volume-glusterfs’ as shown below. Here the IP, "10.70.1.100", is my Gluster server, which exports a replica volume called ‘test-vol’. For more details on Gluster volume types and configuration please refer to http://gluster.readthedocs.org/en/latest/.

[terminal]
[root@dhcp35-20 go]# docker-volume-glusterfs -servers 10.70.1.100
[/terminal]

[terminal]
[root@dhcp35-20 check1]# ps aux |grep docker
root 7674 0.0 0.0 7612 1596 pts/13 Sl+ 12:47 0:00 ./docker-volume-glusterfs -servers 10.70.1.100
root 8169 0.0 0.3 558828 29924 pts/14 Sl 12:52 0:00 ./docker-latest daemon
[/terminal]

Once that's done, we can spawn containers as shown below, where ‘test-vol’ is the Gluster volume name, "/b1" is the mount point in the spawned container, and ‘docker.io/fedora’ is the image name.

‘touch /b1/second’ creates a file called ‘second’ in the "/b1" mount point.

[root@]# ./docker-latest run -it --volume-driver glusterfs --volume test-vol:/b1 docker.io/fedora touch /b1/second

[terminal]
INFO[4891] POST /v1.21/containers/create
INFO[4892] POST /v1.21/containers/b3b61146188db97e3b2c96e1ae38dc53478287d557e24a26b0dcbf09be68140a/attach?stderr=1&stdin=1&stdout=1&stream=1
INFO[4892] POST /v1.21/containers/b3b61146188db97e3b2c96e1ae38dc53478287d557e24a26b0dcbf09be68140a/start
INFO[4892] POST /v1.21/containers/b3b61146188db97e3b2c96e1ae38dc53478287d557e24a26b0dcbf09be68140a/resize?h=46&w=190
INFO[4892] GET /v1.21/containers/b3b61146188db97e3b2c96e1ae38dc53478287d557e24a26b0dcbf09be68140a/json
[/terminal]

Let us verify whether the file creation worked successfully and the new file (second) is available in the brick path of gluster server node.

From ‘test-vol’ volume details, we can see that “/home/test-brick1” is one leg of replica volume in my setup.

[terminal]
Volume Name: test-vol
Type: Replicate
Volume ID: 2cebb33f-e849-40c1-9344-939025f80b1f
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.70.1.100:/home/test-brick1
Brick2: 10.70.1.101:/home/test-brick2
Options Reconfigured:
performance.readdir-ahead: on
You have new mail in /var/spool/mail/root
[root@dhcp1-100 test-brick1]#

[root@dhcp1-100 test-brick1]# pwd
/home/test-brick1
[root@dhcp1-100 test-brick1]# ls
second
[/terminal]

Awesome!! Isn't it?

Thanks to https://github.com/calavera for the plugin, and thanks to Neependra for the pointers.

Building GlusterFS in a docker container

Although setting up a GlusterFS environment is a pretty simple and straightforward procedure, the Gluster community maintains Docker images for Gluster, in both Fedora and CentOS flavors, on Docker Hub for the ease of users. This blog is intended to walk the user through the steps of running GlusterFS with the help of Docker.
The community maintains Docker images of GlusterFS release 3.6 for both Fedora 21 and CentOS 7. The following are the steps to fetch or build the GlusterFS Docker images that we maintain:
To pull the docker image from the docker hub run the following command:
For GlusterFS-3.6 in Fedora-21

$ docker pull gluster/gluster-fedora

For GlusterFS-3.6 in CentOS-7

$ docker pull gluster/gluster-centos

This will fetch the Docker image for you from Docker Hub.
Alternatively, one could build the image from the Dockerfile directly. For this, pull the Dockerfile from the source repository and build the image using it. To get the source, one can use git:

$ git clone git@github.com:gluster/docker.git

This repository consists of Dockerfiles for GlusterFS built on both the CentOS and Fedora distributions. Once you clone the repository, build the image by pointing `docker build` at the directory containing the Dockerfile:
For Fedora,

$ docker build -t gluster-fedora docker/Fedora/

For CentOS,

$ docker build -t gluster-centos docker/CentOS/

This command will build the Docker image from the Dockerfile you just cloned, and the image will be named gluster-fedora or gluster-centos respectively. The ‘-t’ option gives a name to the image we are about to build.
Once the image is built or pulled by either of the above two methods, we can run a container with the Gluster daemon running, in one of two ways:
Step 1:

$ docker run --privileged -d -p 22 -v /sys/fs/cgroup:/sys/fs/cgroup:ro <image name>

(<image name> is either gluster-fedora or gluster-centos, as per the configuration so far.)
This runs the container in detached mode; the init script runs behind the scenes.
Once Docker returns the container ID, you can get into the container via the command below.

#docker exec -ti <container ID> bash

Step 2:

$ docker run --privileged -p 22 -v /sys/fs/cgroup:/sys/fs/cgroup:ro <image name>

In this mode the init scripts run in the foreground of the container, and you have to detach from the process yourself.
To detach from the container, press Ctrl-p + Ctrl-q.

Systemd has been installed and is running in the container we maintain. This is to ensure that the Gluster daemon is up and running by the time the container boots, and also to deal with the "Failed to get D-Bus connection" issue. Dan Walsh's blog on the matter has been the key resource for fixing this: developerblog.redhat.com/2014/05/05/running-systemd-within-docker-container/
For systemd to run without crashing, it is necessary to run the container in privileged mode, since systemd requires the CAP_SYS_ADMIN capability. As the help for docker run shows, the ‘-t’ option allocates a pseudo-TTY and ‘-i’ stands for interactive mode, which keeps STDIN open even if not attached. Port 22 is published to the host so that one can ssh into the container that will be running once this command is issued. In the Dockerfile, the root password has been set to ‘password’ so the user can ssh into the running container.
Once issued, this will boot up the Fedora or CentOS system and you have a container started with glusterd running in it. To log in to the container, one needs to inspect the IP of the running container. To get the ID of the container, one can do:

$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d273cc739c9d gluster/gluster-fedora:latest “/usr/sbin/init” 3 minutes ago Up 3 minutes 49157/tcp, 49161/tcp, 49158/tcp, 38466/tcp, 8080/tcp, 2049/tcp, 24007/tcp, 49152/tcp, 49162/tcp, 49156/tcp, 6010/tcp, 111/tcp, 49154/tcp, 443/tcp, 49160/tcp, 38468/tcp, 49159/tcp, 245/tcp, 49153/tcp, 6012/tcp, 38469/tcp, 6011/tcp, 38465/tcp, 0.0.0.0:49153->22/tcp angry_morse

Note the container ID of the image and inspect it to get the IP address. Say the container ID is d273cc739c9d; to get the IP do:

$ docker inspect d273cc739c9d
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"LinkLocalIPv6Address": "fe80::42:acff:fe11:2",
"LinkLocalIPv6PrefixLen": 64,
The IP address is "172.17.0.2".
Once we have got the IP, ssh into the container:

$ ssh root@<IP address>
The password will be ‘password’, as specified in the Dockerfile. Make sure the password is changed immediately.

[ ~]# ssh root@172.17.0.2
root@172.17.0.2's password:
System is booting up. See pam_nologin(8)
Last login: Mon May 4 06:22:34 2015 from 172.17.42.1
-bash-4.3# ps aux |grep glusterd
root 34 0.0 0.0 448092 15800 ? Ssl 06:01 0:00 /usr/sbin/glusterd -p /var/run/glusterd.pid
root 159 0.0 0.0 112992 2224 pts/0 S+ 06:22 0:00 grep --color=auto glusterd
-bash-4.3# gluster peer status
Number of Peers: 0
-bash-4.3# gluster --version
glusterfs 3.6.3 built on Apr 23 2015 16:12:34
Repository revision: git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc.
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
-bash-4.3#

Note:
If you want to keep the GlusterFS configuration persistent, you have to create directories (/etc/glusterfs, /var/lib/glusterd, /var/log/glusterfs, and appropriate bricks for the volume, e.g. /brick1) and bind mount them into the container.
For example:

$ docker run --privileged -p 22 -v /sys/fs/cgroup:/sys/fs/cgroup:ro -v /etc/glusterfs:/etc/glusterfs:z -v /var/lib/glusterd:/var/lib/glusterd:z -v /var/log/glusterfs:/var/log/glusterfs:z <image name>

The :z option sets the SELinux label on the bind mount on the go.
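Creating those host directories can be scripted; a minimal sketch, where `BASE` is a placeholder so you can stage the layout elsewhere first (set it to `/` to create the real paths):

```shell
# Sketch: stage host directories for persistent Gluster state before the first run.
BASE=${BASE:-./gluster-state}
for d in etc/glusterfs var/lib/glusterd var/log/glusterfs brick1; do
  mkdir -p "$BASE/$d"
done
echo "created state directories under $BASE"
```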

That’s it!