I recently published a blog on how to deploy a Ceph cluster in a Kubernetes setup. If you don’t have that cluster up and running, please refer to that article first.
For this walkthrough, we need the following components deployed successfully in the setup:
Kubernetes
Ceph cluster
Ceph CSI driver
The first two deployments (the Kubernetes cluster and the Ceph cluster) are covered in the above-linked article, so I am assuming you are with me and have a Ceph cluster deployed in a Kubernetes cluster. If that’s the case, let us move on and deploy the CSI driver in it.
Before that, verify that the pods are running successfully in your cluster, as shown below.
To get the list of pods deployed as part of the Rook operator:
[terminal]
$ kube get po -n rook-ceph
NAME READY STATUS RESTARTS AGE
rook-ceph-mgr-a-8649f78d9b-czk7q 1/1 Running 0 2m55s
rook-ceph-mon-a-685b498fdf-9cnql 1/1 Running 0 4m52s
rook-ceph-mon-b-697bb67496-nnsmg 1/1 Running 0 4m5s
rook-ceph-mon-c-75b9f454f7-28dxz 1/1 Running 0 3m18s
rook-ceph-osd-0-85c55b5fb-4m7l5 1/1 Running 0 2m7s
rook-ceph-osd-1-ff5c9df9d-dtqcr 1/1 Running 0 2m5s
rook-ceph-osd-2-5f9d7f784f-m6zdg 1/1 Running 0 2m6s
rook-ceph-osd-prepare-kube1-s4w8d 0/2 Completed 0 2m26s
rook-ceph-osd-prepare-kube2-xfbnw 0/2 Completed 1 2m26s
rook-ceph-osd-prepare-kube3-2nvvm 0/2 Completed 0 2m26s
[/terminal]
NOTE:
Some of the pods are in the Completed state – those are pods that finished while executing a `job` in Kubernetes.
If you want to customize this deployment further, please follow the [quickstart guide](https://github.com/rook/rook/blob/master/Documentation/ceph-quickstart.md).
Cool, now we are at our last step: deploying the Ceph CSI driver and playing with PVCs.
Deploy the Ceph-CSI driver
[terminal]
$ git clone https://github.com/ceph/ceph-csi.git
$ cd ceph-csi/deploy/cephfs/kubernetes
$ kube create -f csi-attacher-rbac.yaml
serviceaccount/csi-attacher created
clusterrole.rbac.authorization.k8s.io/external-attacher-runner created
clusterrolebinding.rbac.authorization.k8s.io/csi-attacher-role created
$ kube create -f csi-cephfsplugin-attacher.yaml
service/csi-cephfsplugin-attacher created
statefulset.apps/csi-cephfsplugin-attacher created
$ kube create -f csi-cephfsplugin-provisioner.yaml
service/csi-cephfsplugin-provisioner created
statefulset.apps/csi-cephfsplugin-provisioner created
$ kube create -f csi-cephfsplugin.yaml
daemonset.apps/csi-cephfsplugin created
$ kube create -f csi-nodeplugin-rbac.yaml
serviceaccount/csi-nodeplugin created
clusterrole.rbac.authorization.k8s.io/csi-nodeplugin created
clusterrolebinding.rbac.authorization.k8s.io/csi-nodeplugin created
$ kube create -f csi-provisioner-rbac.yaml
serviceaccount/csi-provisioner created
clusterrole.rbac.authorization.k8s.io/external-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/csi-provisioner-role created
[/terminal]
NOTE:
You can also deploy the CSI plugin with the script at `ceph-csi/examples/cephfs/plugin-deploy.sh`.
Get the list of Ceph CSI pods:
[terminal]
$ kube get po
NAME READY STATUS RESTARTS AGE
csi-cephfsplugin-964ts 2/2 Running 0 64s
csi-cephfsplugin-attacher-0 1/1 Running 0 101s
csi-cephfsplugin-provisioner-0 1/1 Running 0 43s
csi-cephfsplugin-ptpwd 2/2 Running 0 64s
csi-cephfsplugin-wct7v 2/2 Running 0 64s
[/terminal]
We also need to create a storage class and a secret before we can create the PVC.
[terminal]
$ cd ceph-csi/examples/cephfs
[/terminal]
To create the storage class, we need the mon server details from the Ceph cluster. To get the list of `mon` services:
[terminal]
$ kube get service -n rook-ceph
NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)
rook-ceph-mon-a   ClusterIP   10.233.57.179   &lt;none&gt;        6790/TCP
rook-ceph-mon-b   ClusterIP   10.233.21.27    &lt;none&gt;        6790/TCP
rook-ceph-mon-c   ClusterIP   10.233.42.117   &lt;none&gt;        6790/TCP
[/terminal]
To connect to a mon service, the service URL looks like:
[terminal]
service-name.namespace.svc.cluster.local:6790
[/terminal]
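The `monitors` value for the storage class is a comma-separated list of those endpoints. As a small sketch, it can be assembled from the service names and namespace shown above (port 6790 is the mon port used in this setup; adjust everything to match your own cluster):

```shell
# Build the comma-separated monitor list for the storage class from the
# rook-ceph mon services listed above.
NS=rook-ceph
PORT=6790
MONITORS=""
for svc in rook-ceph-mon-a rook-ceph-mon-b rook-ceph-mon-c; do
  # Append each service's in-cluster DNS name and port, comma-separated.
  MONITORS="${MONITORS:+${MONITORS},}${svc}.${NS}.svc.cluster.local:${PORT}"
done
echo "${MONITORS}"
```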
We also need the pool in which the volume will be created. To get the list of Ceph pools, use the toolbox pod:
[terminal]
$ kubectl get pods -n rook-ceph
$ kubectl exec -it rook-ceph-tools-76c7d559b6-qp7l4 -n rook-ceph -- /bin/bash
[root@toolbox /]# ceph osd lspools
1 myfs-metadata
2 myfs-data0
3 replicapool
[/terminal]
Use any one of these pools in storageclass.yaml. Let us create a storage class with the above monitor and pool information in it.
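For reference, the storageclass.yaml might look roughly like the following. Treat this as a sketch: the `monitors` value must match your own mon services, and the parameter names here follow the sample file in the examples folder at the time of writing — they have changed across ceph-csi releases, so always start from the file in your checkout:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-cephfs
provisioner: csi-cephfsplugin
parameters:
  # Comma-separated mon endpoints (see the services listed earlier).
  monitors: rook-ceph-mon-a.rook-ceph.svc.cluster.local:6790,rook-ceph-mon-b.rook-ceph.svc.cluster.local:6790,rook-ceph-mon-c.rook-ceph.svc.cluster.local:6790
  # Provision a new volume out of this pool.
  provisionVolume: "true"
  pool: myfs-data0
  # Secret we will create in the next step.
  csiProvisionerSecretName: csi-cephfs-secret
  csiProvisionerSecretNamespace: default
reclaimPolicy: Delete
```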
[terminal]
$ kube create -f storageclass.yaml
storageclass.storage.k8s.io/csi-cephfs created
[/terminal]
Create a secret to access the Ceph cluster. To create the secret, we need `adminID` and `adminKey` as base64-encoded values. `adminID` is simply the base64-encoded value of the string `admin`.
[terminal]
$ echo -n "admin" | base64
YWRtaW4=
[/terminal]
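If you want to sanity-check an encoded value, base64 round-trips losslessly, so decoding it must give back the original string (`base64 -d` is the GNU coreutils flag; on macOS it is `base64 -D`):

```shell
# Round-trip check: the encoded value should decode back to "admin".
ENCODED=$(echo -n "admin" | base64)
DECODED=$(echo "${ENCODED}" | base64 -d)
echo "${ENCODED}"   # YWRtaW4=
echo "${DECODED}"   # admin
```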
To get the `adminKey`, we need to get inside the toolbox pod we created earlier.
[terminal]
$ kube -n rook-ceph exec -it rook-ceph-tools-76c7d559b6-7lwxz -- /bin/bash
[root@toolbox /]# ceph auth get-key client.admin | base64
QVFETUdraGNnSXBOTGhBQWdXYlU4YVJkbTFuTm85WjMzZjVnYUE9PQ==
[/terminal]
Replace `adminID` and `adminKey` in the secret.yaml of the `examples` folder. For example:
[terminal]
adminID: YWRtaW4=
adminKey: QVFETUdraGNnSXBOTGhBQWdXYlU4YVJkbTFuTm85WjMzZjVnYUE9PQ==
[/terminal]
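Putting it together, the full secret.yaml would look roughly like this. It is a sketch based on the values above; the sample file in the examples folder may also carry additional `user*` fields, so keep whatever your checkout has:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: csi-cephfs-secret
  namespace: default
data:
  # base64("admin")
  adminID: YWRtaW4=
  # base64 of `ceph auth get-key client.admin` from the toolbox pod
  adminKey: QVFETUdraGNnSXBOTGhBQWdXYlU4YVJkbTFuTm85WjMzZjVnYUE9PQ==
```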
Let us create the secret after changing the `adminID` and `adminKey` fields.
[terminal]
$ kube create -f secret.yaml
secret/csi-cephfs-secret created
[/terminal]
Now we have Kubernetes + a Ceph cluster + the Ceph CSI driver deployed! Let’s create a persistent volume claim (PVC) and bind it to an application pod.
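For reference, the pvc.yaml in the examples folder looks roughly like this — the claim name, size, access mode, and storage class here match the values used throughout this post, but prefer the file in your checkout:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-cephfs-pvc
spec:
  accessModes:
    # CephFS supports shared access, so the claim can be ReadWriteMany.
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  # Must match the storage class created above.
  storageClassName: csi-cephfs
```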
[terminal]
$ kube create -f pvc.yaml
persistentvolumeclaim/csi-cephfs-pvc created
[/terminal]
Verify that the PVC is created:
[terminal]
$ kube get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
csi-cephfs-pvc Bound pvc-9bb8fa5b-1ee7-11e9-b238-52540074f44d 5Gi RWX csi-cephfs 13s
[/terminal]
Bind the PVC to an application pod:
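A minimal pod.yaml consuming that claim might look like the following sketch. The pod name and `claimName` match what we create here; the container name, image, and mount path are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: csicephfs-demo-pod
spec:
  containers:
    - name: web-server        # illustrative container
      image: nginx
      volumeMounts:
        - name: mypvc
          mountPath: /var/lib/www/html
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        # Must match the PVC created above.
        claimName: csi-cephfs-pvc
```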
[terminal]
$ kube create -f pod.yaml
pod/csicephfs-demo-pod created
[/terminal]
Verify the application pod is up and running:
[terminal]
$ kube get po
NAME READY STATUS RESTARTS AGE
csicephfs-demo-pod 1/1 Running 0 32s
[/terminal]
That’s it!! It’s your turn now. 🙂
Please let me know if you have any comments or questions on this deployment.
Copyright secured by Digiprove © 2019-2020 Humble Chirammal