Ceph Storage Cluster

== Ceph Object Storage ==

== Installing a Ceph Cluster ==
=== Single Host Operation ===
Out of the box, Ceph requires replicas to be placed on separate hosts, not just separate devices (OSDs). On a single-node cluster this is problematic: the default CRUSH rule can never be satisfied, so placement groups stay degraded and the cluster never reaches HEALTH_OK. The following steps add a new rule that allows replication across OSDs instead of hosts:
#
# commands to set up ceph to handle replication on one node
#

# create a new crush rule that uses the OSD as the failure domain
# syntax: ceph osd crush rule create-replicated <rule-name> <root> <failure-domain-type>
ceph osd crush rule create-replicated osd_replication default osd

# verify that the rule exists and is correct
ceph osd crush rule ls
ceph osd crush rule dump

# set the replication level on existing pools
ceph osd pool set device_health_metrics size 3

# apply the new rule to existing pools
ceph osd pool set device_health_metrics crush_rule osd_replication
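
Pools created later do not pick up this rule automatically; by default they are created with the host-level replicated rule. A minimal follow-up sketch, assuming a hypothetical pool named test_pool (the pool name and PG count are illustrative, not from this cluster):

# create a new pool and point it at the OSD-level rule
ceph osd pool create test_pool 32
ceph osd pool set test_pool crush_rule osd_replication
ceph osd pool set test_pool size 3

# confirm which rule each pool uses and that PGs go active+clean
ceph osd pool ls detail
ceph -s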


== RBD Images ==


== CephFS ==

=== Creating a CephFS ===

=== Mounting a Ceph FS ===

Mounting a Ceph filesystem on a system outside the storage cluster requires four things:

  1. the ceph.conf file from the /etc/ceph directory on a cluster node
  2. a keyring created on the Ceph admin (monitor) node for client authentication (see the sketch after this list)
  3. the 'mount.ceph' mount helper, available in the 'ceph-common' package
  4. an entry in the /etc/fstab file
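
A sketch of how the first three items might be put in place. The filesystem name 'cephfs' is an assumption for illustration, 'devcluster' matches the client name used in the fstab entry below, and <client> stands for the machine that will do the mounting:

# on the client: install the mount helper (apt shown; use your distribution's package manager)
apt install ceph-common

# on a cluster node: copy the cluster configuration to the client
scp /etc/ceph/ceph.conf <client>:/etc/ceph/ceph.conf

# on the admin node: create a client key authorized for the filesystem
ceph fs authorize cephfs client.devcluster / rw > ceph.client.devcluster.keyring

# copy the keyring to the client; recent versions of mount.ceph can find it
# in /etc/ceph automatically when only name= is given in the mount options
scp ceph.client.devcluster.keyring <client>:/etc/ceph/ceph.client.devcluster.keyring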

/etc/fstab

This line will mount the Ceph FS on boot. The leading ':/' tells mount.ceph to read the monitor addresses from ceph.conf, and 'name=devcluster' selects the client keyring described above:

:/                      /<mountpoint>      ceph    name=devcluster 0 0
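
To test the entry without rebooting, the filesystem can be mounted by hand; /<mountpoint> is the same placeholder used in the fstab line above:

# create the mount point and mount it using the fstab entry
mkdir -p /<mountpoint>
mount /<mountpoint>

# confirm the filesystem is mounted
df -h /<mountpoint>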