=== Creating an internal Ceph Cluster ===

The reliability of creating a Ceph cluster within the Kubernetes space has improved considerably, and given that Rook is now clearly biased ''against'' using external clusters (see above), this is really the only option if you want to use Rook to provision storage for Kubernetes.

That said, see the instructions on the [[Ceph_Storage_Cluster#Adding_Disks|Ceph Adding Disks]] page for how to prepare disks to be included in the cluster ... it isn't easy. One thing that must be done on every host where the disks reside is to install the '''''lvm2''''' package (it may or may not already be installed, but it needs to be there).

In the same directory as the operator manifest there is a '''''cluster.yaml''''' that will create a Ceph cluster within the Kubernetes cluster, but it has one major problem: it is configured to simply grab all available storage from every node in the Kubernetes cluster. That is not always what you want, so copy the file into a local directory and modify the 'storage' section. The storage section for the development cluster is shown here as an example:

<pre>
storage: # cluster level storage configuration and selection
  useAllNodes: false
  useAllDevices: false
  #deviceFilter:
  config:
    # crushRoot: "custom-root" # specify a non-default root label for the CRUSH map
    # metadataDevice: "md0" # specify a non-rotational storage so ceph-volume will use it as block db device of bluestore.
    # databaseSizeMB: "1024" # uncomment if the disks are smaller than 100 GB
    # journalSizeMB: "1024" # uncomment if the disks are 20 GB or smaller
    # osdsPerDevice: "1" # this value can be overridden at the node or device level
    # encryptedDevice: "true" # the default value for this option is "false"
  # Individual nodes and their config can be specified as well, but 'useAllNodes' above must be set to false. Then, only the named
  # nodes below will be used as storage resources. Each node's 'name' field should match their 'kubernetes.io/hostname' label.
  nodes:
    - name: "storage1"
      devices: # specific devices to use for storage can be specified for each node
        - name: "sdc"
        - name: "sdd"
        - name: "sde"
        - name: "sdf"
        - name: "sdg"
        #- name: "nvme01" # multiple osds can be created on high performance devices
        #  config:
        #    osdsPerDevice: "5"
        #- name: "/dev/disk/by-id/ata-ST4000DM004-XXXX" # devices can be specified using full udev paths
      #config: # configuration can be specified at the node level which overrides the cluster level config
    - name: "controller"
      devices:
        - name: "sdb"
</pre>

After editing a local copy of the '''''cluster.yaml''''' file, apply it like normal:

<pre>
kubectl apply -f cluster.yaml
</pre>

It will take a while and it will create a zillion pods, but you'll end up with a cluster ... assuming that it liked your storage. If you have to try again to get storage to connect, all you need to do is slightly modify the '''''cluster.yaml''''' file and re-apply it. That will cause the operator to refresh the cluster and try again to assimilate the storage.

To get a Ceph dashboard, make sure it is enabled in the '''''cluster.yaml''''' file (it is by default) and install a service to make it accessible. Multiple options are provided in the distribution directory, but the simplest (if you have an [[IP Controller]] installed) is to use a LoadBalancer:

<pre>
kubectl create -f dashboard-loadbalancer.yaml
</pre>
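If the stock '''''dashboard-loadbalancer.yaml''''' is not at hand, the Service it creates is small enough to sketch by hand. This is a minimal sketch, assuming the default 'rook-ceph' namespace and the SSL-enabled dashboard on port 8443; verify the port against the dashboard settings in your '''''cluster.yaml''''' and the selector against the labels on your mgr pod before relying on it:

<pre>
apiVersion: v1
kind: Service
metadata:
  name: rook-ceph-mgr-dashboard-loadbalancer
  namespace: rook-ceph
  labels:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph
spec:
  type: LoadBalancer
  ports:
    - name: dashboard
      port: 8443
      protocol: TCP
      targetPort: 8443
  selector: # must match the labels on the rook-ceph-mgr pod
    app: rook-ceph-mgr
    rook_cluster: rook-ceph
</pre>

Once the service is up, the usual 'kubectl -n rook-ceph get service' shows the external IP handed out by the IP controller, and the dashboard answers over HTTPS at that address.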
Since everything is contained in the Kubernetes cluster, there is no external interface for control or management, so Rook provides the 'rook-toolbox' deployment, which lets you exec into the resulting pod and use the familiar 'ceph' and other related utilities. It can stay alive as long as you want (forever), and it also comes in a 'rook-toolbox-job' variant for automating activities. Installing the toolbox is as simple as:

<pre>
kubectl create -f toolbox.yaml
</pre>

... and then exec into it (or access it through Lens, k8dash, or other means):

<pre>
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash
</pre>

If you do not have storage on three or more hosts, you need to reset the default replication failureDomain in the cluster itself to keep Ceph from complaining about it. Go into the Rook toolbox and issue the same commands as for a standalone Ceph cluster, documented [[Ceph_Storage_Cluster#Single_Host_Operation|here]] (a quick sanity-check sketch also follows below). Now you can skip down to creating storage classes ... and the other configuration activities.
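For reference, a minimal sketch of the first checks to run from inside the toolbox. These are standard Ceph CLI commands; the pool and CRUSH-rule names in the last two lines are illustrative placeholders, not values from this wiki, so follow the Single Host Operation page linked above for the real failureDomain steps:

<pre>
# overall health and OSD layout
ceph status
ceph osd tree

# example only: create a replication rule whose failure domain is 'osd'
# instead of 'host', then point a pool at it (names are placeholders)
ceph osd crush rule create-replicated replicated_osd default osd
ceph osd pool set replicapool crush_rule replicated_osd
</pre>

If 'ceph status' reports HEALTH_OK and the OSD tree shows the disks you expected, the cluster is ready for the storage classes that follow.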