== Cluster Concepts and Operations ==

=== CRUSH Rules and Replication ===

=== Pools ===

=== RBD Images ===

=== CephFS ===

==== Creating a CephFS ====

==== Mounting a Ceph FS ====

Mounting a Ceph filesystem on a system outside the storage cluster requires three things:
# The 'mount.ceph' mount helper, available in the 'ceph-common' package
# The client keyring created on the Ceph master node for client authentication
# An entry in the /etc/fstab file

===== Mount Helper =====

All you need is the 'mount.ceph' executable, but there is no way to install just that file, so you have to install the full 'ceph-common' package, which pulls in a pile of dependencies that will never be used:

 sudo yum install -y ceph-common

You will need to set up the repository as described above for your OS, and specifically follow the instructions for using the 'updates-testing' repository on Fedora if needed.

===== Client keyring =====

While the admin keyring/credentials could be used, for obvious reasons a separate user should be created for mounting the Ceph FS. It is possible to create a separate user for each client system, but there is no need for that level of paranoia. The keyring must be created on a system with admin access to the cluster (generally the master node) and then copied to the client system.

Go to the master node to create the authentication token, selecting something appropriate for the username:

 ceph fs authorize <filesystem> client.<username> / rw > /etc/ceph/ceph.client.<username>.keyring

This same keyring file can then be copied to each client system without recreating it:

 ssh <client> mkdir /etc/ceph
 scp /etc/ceph/ceph.conf <client>:/etc/ceph
 scp /etc/ceph/ceph.client.<username>.keyring <client>:/etc/ceph

===== /etc/fstab =====

This line will mount the Ceph FS on boot:

 <master IP>:/   /<mountpoint>   ceph   name=<username>,_netdev   0 0

At this point, simply mount the filesystem as normal to start using it immediately:

 sudo mount /<mountpoint>
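As a concrete illustration (all values hypothetical): assume the master node's monitor is reachable at 192.168.1.10, the client user created above was named 'fsclient', and the filesystem is to be mounted at /mnt/cephfs. With the keyring already copied to /etc/ceph on the client, the steps look roughly like this:

 # /etc/fstab entry: hypothetical IP, username, and mountpoint
 # mount.ceph should find /etc/ceph/ceph.client.fsclient.keyring based on name=
 192.168.1.10:/   /mnt/cephfs   ceph   name=fsclient,_netdev   0 0

 # On the client: create the mountpoint and mount it now rather than waiting for a reboot
 sudo mkdir -p /mnt/cephfs
 sudo mount /mnt/cephfs

 # Sanity check that the filesystem is mounted and reports the cluster's capacity
 df -h /mnt/cephfs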
=== Object Gateway ===

Ceph implements an S3-compatible object store called the Ceph Object Gateway, or RADOS Gateway (RGW). The documentation on how to set this up is very fragmented and still includes stale material from previous versions, so here is a cheat sheet. The relevant official documentation includes:
* https://docs.ceph.com/en/latest/cephadm/rgw/
* https://docs.ceph.com/en/latest/radosgw/

Obviously, a working Ceph cluster is required; in fact, the cluster must be 'healthy' ... or it won't even start the RADOS gateway.

The Ceph Object Gateway uses a new set of services to implement the API; the number of service instances required depends on the system load and the throughput required. The documentation recommends three with a load balancer in front (external to the cluster), but one is sufficient for light loads. Multiple gateway nodes can also be used, with each node servicing a specific 'customer'.

The first thing to do is designate which node(s) will support an object gateway:

 ceph orch host label add <host> rgw

The label can be anything, but 'rgw' does make sense ... repeat this for every host that should run a gateway. Now you can actually start the rgw service:

 ceph orch apply rgw <servicename> --port=<port> --placement=label:rgw

For the test cluster, this looked like:

 ceph orch apply rgw test-rgw --port=8088 --placement=label:rgw

If the rgw service(s) do not start, you may need to force the issue:

 ceph orch daemon add rgw test-rgw port:8088 <host>

Note the slightly different format of the parameters for port and placement.

In order to access the S3 service, create a user:

 radosgw-admin user create --uid=<user> --display-name="Full Name" --email=user@example.com --system

This will spit out a big blob of JSON, but the two bits of information to capture from that are the 'access_key' and 'secret_key'; you will need them to set up the dashboard and to access the S3 service. If you lose track of them, you can always retrieve them later:

 radosgw-admin user info --uid=<user>

The dashboard has a section to display information about the object gateway, but you must provide it the access keys for the user you just created (since it is a different user in Ceph than the user running the dashboard):

 echo -n "<access-key>" > <file-containing-access-key>
 echo -n "<secret-key>" > <file-containing-secret-key>
 ceph dashboard set-rgw-api-access-key -i <file-containing-access-key>
 ceph dashboard set-rgw-api-secret-key -i <file-containing-secret-key>

(See https://docs.ceph.com/en/latest/mgr/dashboard/#enabling-the-object-gateway-management-frontend for more detail.)

Yes, this is a very manual process, but it only needs to be done once for the cluster. You will also need to load the access and secret key into any other application that is to use the S3 API (such as the [https://docs.min.io/docs/minio-client-complete-guide minio client]).
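As a rough sketch of that last step, here is one way to point the minio client ('mc') at the gateway started in the example above. The alias name 's3test', the bucket name 'testbucket', the file name, and the endpoint are placeholder values, and the exact 'mc' subcommands may vary between minio client versions:

 # Register the gateway endpoint under an alias (placeholder host, port, and keys)
 mc alias set s3test http://<host>:8088 <access-key> <secret-key>

 # Create a bucket, upload a file, and list it to verify the gateway is answering S3 requests
 mc mb s3test/testbucket
 mc cp ./somefile.txt s3test/testbucket/
 mc ls s3test/testbucket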