Ceph Storage Cluster


Ceph Object Storage

Installing a Ceph Cluster

To zap a disk (delete its partition table) in preparation for use with Ceph, execute the following:

ceph-deploy disk zap {osd-server-name} {disk-name}
ceph-deploy disk zap osdserver1 /dev/sdb /dev/sdc
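
Note that ceph-deploy is deprecated in recent releases. On a cephadm-managed cluster, the rough equivalent (a sketch using the same example host and device) is:

# cephadm equivalent (assumes the cluster is managed by cephadm)
ceph orch device zap osdserver1 /dev/sdb --force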

Single Host Operation

Out of the box, Ceph requires replication to be across hosts, not just devices. For a single-node cluster this is a problem: the default CRUSH rule can never be satisfied, so pools stay degraded. The following steps add a new rule that allows replication across OSDs instead of hosts:

#
# commands to set ceph to handle replication on one node
#
# create new crush rule allowing OSD-level replication
# ceph osd crush rule create-replicated <rulename> <root> <failure-domain>
ceph osd crush rule create-replicated osd_replication default osd

# verify that rule exists and is correct
ceph osd crush rule ls
ceph osd crush rule dump

# set replication level on existing pools (size 3 requires at least 3 OSDs)
ceph osd pool set device_health_metrics size 3

# apply new rule to existing pools
ceph osd pool set device_health_metrics crush_rule osd_replication
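
To confirm the change took effect, check which rule the pool is now using and watch the cluster health (read-only checks):

# confirm the pool now uses the OSD-level rule
ceph osd pool get device_health_metrics crush_rule

# PGs should settle into active+clean
ceph -s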

RBD Images

CephFS

Creating a CephFS

Mounting a Ceph FS

Mounting a Ceph filesystem on a system outside the storage cluster requires four things:

  1. The master Ceph config file (ceph.conf) from the /etc/ceph directory on any cluster node
  2. The client keyring created on the ceph master node for client authentication
  3. The 'mount.ceph' mount helper, available in the 'ceph-common' package
  4. An entry in the /etc/fstab file

Ceph Config File

This file should simply be copied over to the client system from a node in the storage cluster:

sudo mkdir /etc/ceph
sudo scp <node>:/etc/ceph/ceph.conf /etc/ceph

Permissions should be 644 as this needs to be readable by non-root.
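
If scp left the file root-only, fix the permissions (standard path assumed):

sudo chmod 644 /etc/ceph/ceph.conf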

Client keyring

While the admin keyring/credentials could be used, for obvious reasons a separate user should be created for mounting the Ceph FS. While it is possible to create a separate user for each client system, there is no need to go to that level of paranoia. The keyring must be created on a system with admin access to the cluster (generally a cluster node) and then copied to the client system:

ssh <cluster node>
sudo ceph fs authorize <filesystem> client.<username> / rw | sudo tee /etc/ceph/ceph.client.<username>.keyring
scp /etc/ceph/ceph.client.<username>.keyring <client>:/etc/ceph

This same keyring file can then be copied over to each client system without recreating it.
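
The caps granted to the new user can be verified on the cluster node (a read-only sanity check):

# dump the key and caps for the new client user
ceph auth get client.<username>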

Mount Helper

All you need is the 'mount.ceph' executable, but it is not packaged on its own. You have to install the 'ceph-common' package, which pulls in a pile of dependencies no client will ever use:

sudo yum install -y ceph-common
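
To confirm the helper is in place (its usual location on RHEL-family systems; the path may differ elsewhere):

ls -l /usr/sbin/mount.ceph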

/etc/fstab

This line will mount the Ceph FS on boot:

:/                      /<mountpoint>      ceph    name=<username>    0 0
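
As a filled-in example (hypothetical client user 'cephfs' mounted at /mnt/cephfs; substitute real values):

:/                      /mnt/cephfs        ceph    name=cephfs    0 0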

At this point, simply mount the filesystem as normal:

sudo mount /<mountpoint>
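
If the mount worked, the filesystem shows up with type ceph:

# verify the Ceph FS is mounted (mountpoint as in the fstab entry)
df -hT /<mountpoint>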