Ceph-CSI Provisioner for Kubernetes
Step 1: Deploy Ceph Provisioner on Kubernetes
Log in to your Kubernetes cluster and create a manifest file for deploying the RBD provisioner, an out-of-tree dynamic provisioner for Kubernetes 1.5+.
$ vim ceph-rbd-provisioner.yml
Add the following contents to the file. Notice that our deployment uses RBAC, so we create the cluster role and bindings before creating the service account and deploying the Ceph RBD provisioner.
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns","coredns"]
    verbs: ["list", "get"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: rbd-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: rbd-provisioner
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rbd-provisioner
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-provisioner
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbd-provisioner
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rbd-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      containers:
        - name: rbd-provisioner
          image: "quay.io/external_storage/rbd-provisioner:latest"
          env:
            - name: PROVISIONER_NAME
              value: ceph.com/rbd
      serviceAccountName: rbd-provisioner
Apply the file to create the resources.
$ kubectl apply -f ceph-rbd-provisioner.yml
clusterrole.rbac.authorization.k8s.io/rbd-provisioner created
clusterrolebinding.rbac.authorization.k8s.io/rbd-provisioner created
role.rbac.authorization.k8s.io/rbd-provisioner created
rolebinding.rbac.authorization.k8s.io/rbd-provisioner created
serviceaccount/rbd-provisioner created
deployment.apps/rbd-provisioner created
Confirm that the RBD volume provisioner pod is running.
$ kubectl get pods -l app=rbd-provisioner -n kube-system
NAME                               READY   STATUS    RESTARTS   AGE
rbd-provisioner-75b85f85bd-p9b8c   1/1     Running   0          3m45s
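If the pod is not coming up, the provisioner's logs usually say why. A quick way to check them, assuming the Deployment name used above:

$ kubectl -n kube-system logs deploy/rbd-provisioner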
Step 2: Get Ceph Admin Key and create Secret on Kubernetes
Log in to your Ceph cluster and get the admin key for use by the RBD provisioner.
$ sudo ceph auth get-key client.admin
Save the value of the admin user key printed by the command above. We'll add the key as a secret in Kubernetes.
$ kubectl create secret generic ceph-admin-secret \
--type="kubernetes.io/rbd" \
--from-literal=key='<key-value>' \
--namespace=kube-system
Where <key-value> is your Ceph admin key. You can confirm creation with the command below.
$ kubectl get secrets ceph-admin-secret -n kube-system
NAME                TYPE                DATA   AGE
ceph-admin-secret   kubernetes.io/rbd   1      5m
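As a shortcut, if the machine where you run kubectl can also reach Ceph as admin, you can pipe the key straight into the secret instead of pasting it. A minimal sketch, assuming a local ceph CLI with passwordless sudo:

$ kubectl create secret generic ceph-admin-secret \
    --type="kubernetes.io/rbd" \
    --from-literal=key="$(sudo ceph auth get-key client.admin)" \
    --namespace=kube-system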
Step 3: Create Ceph pool for Kubernetes & client key
Next, create a new Ceph pool for Kubernetes.
$ sudo ceph osd pool create <pool-name> <pg-number>
Example:
$ sudo ceph osd pool create k8s 100
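Instead of picking a placement group count by hand, you can let Ceph manage it. A hedged example, assuming your cluster runs Nautilus or newer with the pg_autoscaler module enabled:

$ sudo ceph osd pool set k8s pg_autoscale_mode on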
Then create a new client key with access to the pool created.
$ sudo ceph auth add client.kube mon 'allow r' osd 'allow rwx pool=<pool-name>'
Example:
$ sudo ceph auth add client.kube mon 'allow r' osd 'allow rwx pool=k8s'
Where k8s is the name of the pool created in Ceph.
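To double-check the capabilities assigned to the new client, dump its auth entry:

$ sudo ceph auth get client.kube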
You can then associate the pool with an application and initialize it.
sudo ceph osd pool application enable <pool-name> rbd
sudo rbd pool init <pool-name>
Get the client key on Ceph.
$ sudo ceph auth get-key client.kube
Create the client secret on Kubernetes:
kubectl create secret generic ceph-k8s-secret \
  --type="kubernetes.io/rbd" \
  --from-literal=key='<key-value>' \
  --namespace=kube-system
Where <key-value> is your Ceph client key.
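As with the admin secret, you can confirm that it was created:

$ kubectl get secrets ceph-k8s-secret -n kube-system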
Step 4: Create an RBD Storage Class
A StorageClass provides a way for you to describe the “classes” of storage you offer in Kubernetes. We’ll create a StorageClass called ceph-rbd.
$ vim ceph-rbd-sc.yml
The contents to be added to the file:
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ceph-rbd
provisioner: ceph.com/rbd
parameters:
  monitors: 10.10.10.11:6789,10.10.10.12:6789,10.10.10.13:6789
  pool: k8s
  adminId: admin
  adminSecretNamespace: kube-system
  adminSecretName: ceph-admin-secret
  userId: kube
  userSecretNamespace: kube-system
  userSecretName: ceph-k8s-secret
  imageFormat: "2"
  imageFeatures: layering
Where:
- ceph-rbd is the name of the StorageClass to be created.
- k8s is the Ceph pool created in Step 3.
- 10.10.10.11, 10.10.10.12 & 10.10.10.13 are the IP addresses of the Ceph monitors.
You can list them with the command:
$ sudo ceph -s
  cluster:
    id:     7795990b-7c8c-43f4-b648-d284ef2a0aba
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum cephmon01,cephmon02,cephmon03 (age 32h)
    mgr: cephmon01(active, since 30h), standbys: cephmon02
    mds: cephfs:1 {0=cephmon01=up:active} 1 up:standby
    osd: 9 osds: 9 up (since 32h), 9 in (since 32h)
    rgw: 3 daemons active (cephmon01, cephmon02, cephmon03)

  data:
    pools:   8 pools, 618 pgs
    objects: 250 objects, 76 KiB
    usage:   9.6 GiB used, 2.6 TiB / 2.6 TiB avail
    pgs:     618 active+clean
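If you only need the monitor addresses rather than the full cluster status, ceph mon dump prints them directly:

$ sudo ceph mon dump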
After modifying the file with the correct values for your Ceph monitors, apply the config:
$ kubectl apply -f ceph-rbd-sc.yml
storageclass.storage.k8s.io/ceph-rbd created
List available StorageClasses:
kubectl get sc
NAME       PROVISIONER       RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
ceph-rbd   ceph.com/rbd      Delete          Immediate           false                  17s
cephfs     ceph.com/cephfs   Delete          Immediate           false                  18d
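Optionally, you can make ceph-rbd the default StorageClass, so that claims that omit storageClassName use it. The standard annotation for this is:

$ kubectl patch storageclass ceph-rbd \
    -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'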
Step 5: Create a test Claim and Pod on Kubernetes
To confirm everything is working, let’s create a test persistent volume claim.
$ vim ceph-rbd-claim.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-rbd-claim1
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-rbd
  resources:
    requests:
      storage: 1Gi
Apply the manifest file to create the claim.
$ kubectl apply -f ceph-rbd-claim.yml
persistentvolumeclaim/ceph-rbd-claim1 created
If binding was successful, the claim should show a Bound status.
$ kubectl get pvc
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
ceph-rbd-claim1   Bound    pvc-c6f4399d-43cf-4fc1-ba14-cc22f5c85304   1Gi        RWO            ceph-rbd       43s
Nice!.. We are able to create dynamic Persistent Volume Claims on the Ceph RBD backend. Notice that we didn’t have to manually create a Persistent Volume before the claim. How cool is that?..
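The Persistent Volume backing the claim was created for us by the provisioner; you can see it with:

$ kubectl get pv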
We can then deploy a test pod using the claim we created. First, create a file to hold the pod manifest:
$ vim rbd-test-pod.yaml
Add:
---
kind: Pod
apiVersion: v1
metadata:
  name: rbd-test-pod
spec:
  containers:
    - name: rbd-test-pod
      image: busybox
      command:
        - "/bin/sh"
      args:
        - "-c"
        - "touch /mnt/RBD-SUCCESS && exit 0 || exit 1"
      volumeMounts:
        - name: pvc
          mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: pvc
      persistentVolumeClaim:
        claimName: ceph-rbd-claim1
Create pod:
$ kubectl apply -f rbd-test-pod.yaml
pod/rbd-test-pod created
If you describe the Pod, you’ll see successful attachment of the Volume.
$ kubectl describe pod rbd-test-pod
.....
Events:
  Type    Reason                  Age        From                     Message
  ----    ------                  ----       ----                     -------
  Normal  Scheduled               <unknown>  default-scheduler        Successfully assigned default/rbd-test-pod to rke-worker-02
  Normal  SuccessfulAttachVolume  3s         attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-c6f4399d-43cf-4fc1-ba14-cc22f5c85304"
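Since the test pod only touches a file and exits, it should end up in the Completed state rather than Running:

$ kubectl get pod rbd-test-pod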
If you have the Ceph Dashboard, you can see the new block image that was created.
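Even without the dashboard, you can confirm the image from the Ceph side by listing the images in the pool; assuming the k8s pool from Step 3:

$ sudo rbd ls k8s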