Rook Storage for Kubernetes
Background
Installation Process
The scripts and manifests required to install Rook and Ceph (if needed) are located in the Rook repository, which should be cloned for local access:
cd <working directory>
git clone --single-branch --branch release-1.3 https://github.com/rook/rook.git
cd rook/cluster/examples/kubernetes/ceph
A script that accomplishes all the installation steps is located in the /shared/ewilliam/k8s-admin directory and the corresponding GitLab project repository.
The installation process is divided into two parts: installing the Rook Operator, then either installing a Ceph cluster inside the Kubernetes cluster or connecting to an existing Ceph cluster that was installed previously.
Rook Operator
Creating the Rook Operator simply requires loading two manifests with no customization. First, set up the namespace and all the roles, bindings, and support definitions:
kubectl create -f common.yaml
Then create the rook operator itself and wait for it to settle into a Running state.
kubectl create -f operator.yaml
The Rook Operator manages the entire storage subsystem, so tailing its log in a separate window will be useful should any problems arise:
kubectl get pods -n rook-ceph
kubectl logs -f -n rook-ceph <pod name>
The beauty of the operator concept in Kubernetes is that the operator can accomplish practically anything that can be done from the command line -- all in response to what it sees in the cluster configuration. In this case, the operator watches for a Ceph cluster custom resource (an instance of the CephCluster CRD), whose parameters define what that Ceph cluster should look like. If it sees that a new cluster needs to be created, it performs all the steps needed to create and provision the Ceph cluster as specified. If interfacing with an external Ceph cluster is required, it takes the provided credentials and identifiers and connects to that cluster to obtain the storage service.
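For orientation, a minimal sketch of a CephCluster resource for an in-cluster deployment is shown below; the Ceph image tag and the blanket use of all nodes and devices are illustrative choices, not recommendations:

apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v14.2.9     # illustrative image tag
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3
  storage:
    useAllNodes: true            # illustrative: consume every node and device Rook discovers
    useAllDevices: true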
Creating an internal Ceph Cluster
TBD
Using an Existing Ceph Cluster
Though not absolutely required, it is recommended to use a separate namespace for the external cluster. The default scripts and manifests assume you will do this, so it is easiest to just go with the flow ... the namespace used in the provided manifests is 'rook-ceph-external'.
As with the rook operator, support roles, role bindings, and such need to be created along with the actual separate namespace:
kubectl create -f common-external.yaml
The Rook Cluster definition needs authentication and identification data about the external cluster; this is loaded into kubernetes secrets with standard names so that the operator and storage provisioner can access the external ceph cluster. The data can be obtained from the ceph master node:
- Cluster FSID -- run this command and copy the results:
ceph fsid
- Cluster Monitor address -- located in the /etc/ceph/ceph.conf file
- Cluster admin secret -- located in the /etc/ceph/ceph.client.admin.keyring file
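If it is more convenient to query the monitor address and admin key directly on the Ceph master node rather than copying them out of the files, the standard Ceph commands below return the same information:

ceph mon dump                        # lists monitor names and addresses
ceph auth get-key client.admin       # prints just the admin key, without keyring formatting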
Put the information into environment variables as shown below, then run the script to create the secrets:
export NAMESPACE=rook-ceph-external
export ROOK_EXTERNAL_FSID=dc852252-bd6b-11ea-b7f2-503eaa02062c
export ROOK_EXTERNAL_CEPH_MON_DATA=storage1=10.1.0.9:6789
export ROOK_EXTERNAL_ADMIN_SECRET=AQAclf9e0ptoMBAAracpRwXomJ6LgiO6L8wqfw==
bash ./import-external-cluster.sh
Note that the above script creates more secrets than are needed -- the operator tries to create some of them again when building the cluster interface and errors out, since existing secrets can't be changed. We need to either edit the script so it does not create the excess secrets or delete the ones that aren't needed after checking which secrets are present in the external cluster's namespace. For now, we will delete them. First, list all the secrets present in the new namespace:
kubectl get secret -n rook-ceph-external
These are the ones that need to be deleted (at least for now):
kubectl -n rook-ceph-external delete secret \
    rook-csi-cephfs-node rook-csi-cephfs-provisioner rook-csi-rbd-node rook-csi-rbd-provisioner
Watch the operator log as you create the cluster below to see if any additional secrets need to be deleted. Now we can create the cluster definition that the operator will use to create our interface:
kubectl create -f cluster-external-management.yaml
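For orientation, the heart of an external-cluster definition is simply a CephCluster resource with the external flag enabled; the provided cluster-external-management.yaml carries additional management settings, but it boils down to something like this sketch:

apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph-external
  namespace: rook-ceph-external
spec:
  external:
    enable: true                 # tells the operator to connect to an existing cluster rather than deploy Ceph
  dataDirHostPath: /var/lib/rook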
Next we create the StorageClass that Kubernetes will use to request RBDs (block images) from the Ceph cluster. The pool resource definition will create a new Ceph pool if the specified pool doesn't already exist, and sets the replication and placement parameters for that pool. The same file also creates a StorageClass entry, which provides the specific access parameters to the cluster. This is the only file that requires modification before adding it to the cluster:
- change 'failureDomain' in the pool definition from 'host' to 'osd' ... this will enable replication within a host, which is necessary for a small cluster
- change the 'name' in the pool definition and the corresponding 'pool' in the storage class to something descriptive -- this will be the name of the pool that is created in the ceph cluster (see the sketch after this list for what the edited definitions might look like)
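For orientation, after those two edits the relevant parts of the file might look roughly like the sketch below; the pool name 'k8s-rbd' is an illustrative choice, the namespace and clusterID must match wherever the cluster resource was created, and the CSI secret parameters from the base file are left unchanged (and omitted here):

apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: k8s-rbd                      # illustrative, descriptive pool name
  namespace: rook-ceph-external
spec:
  failureDomain: osd                 # changed from 'host' for a small cluster
  replicated:
    size: 3
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph-external      # namespace where the CephCluster resource lives
  pool: k8s-rbd                      # must match the pool name above
  imageFormat: "2"
  imageFeatures: layering
  # ... csi.storage.k8s.io/* secret parameters from the base file remain as provided ...
reclaimPolicy: Delete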
The base for this file is located further down in the tree:
cd csi/rbd
kubectl create -f storageclass.yaml
At this point, the Storage Provisioner is ready.
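As a quick sanity check, a minimal PersistentVolumeClaim like the one sketched below (assuming the StorageClass kept the default name 'rook-ceph-block') should bind and produce a new RBD image in the pool:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-test-claim               # illustrative name
spec:
  storageClassName: rook-ceph-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi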
Testing
The rook repository has some test manifests to quickly validate a successful Rook installation: a WordPress installation using two deployments.
Deploy the test application; each deployment requests its own Persistent Volume. The WordPress service is of type LoadBalancer, so it will be allocated an IP address that gives direct access to the WordPress instance. The test manifests are in the kubernetes directory:
cd rook/cluster/examples/kubernetes
kubectl create -f mysql.yaml
kubectl create -f wordpress.yaml
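Assuming the upstream manifest names, the commands below show whether both claims were bound and which external IP was assigned to the WordPress service:

kubectl get pvc                      # both claims should report STATUS 'Bound'
kubectl get svc wordpress            # EXTERNAL-IP is the address to browse to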