=== Using Existing Ceph Cluster ===
One option provided by the Rook Operator is to interface with an already existing Ceph cluster. Instructions for deploying a standalone Ceph cluster are on a [[Ceph Storage Cluster|separate page]]; that cluster should be completely validated before attempting to connect it to Rook.

Note that while earlier versions of Rook (1.3 and before) can successfully interface with external clusters, that capability appears to be lost in newer versions (1.5 and later). The configuration options are still there, but the log messages make it clear that the instructions and the actual behavior of the Rook operator are NOT in sync, and even forcing some configuration options did not produce a successful connection to an external cluster. Before you think you can just pull Rook v1.3 out and use it to connect to an external Ceph cluster: it has a rather nasty bug where it (seriously) thinks that version 15.2.11 is older than version 15.2.8, making it unusable with newer releases of Ceph Octopus. Moving to Ceph Pacific may (temporarily) dodge that problem, but I was not able to get a Pacific cluster to operate, so that path is a dead end here.

There is another package that will allow Kubernetes to use an external Ceph cluster: it is part of the Ceph distribution, called [[Ceph-CSI Provisioner for Kubernetes|Ceph-CSI]]. It is a much lighter deployment and serves the need very nicely.

==== External cluster instructions -- DEPRECATED, but retained for historical (hysterical) reasons in case Rook gets their act straight ====
Though not absolutely required, it is recommended to use a separate namespace for the external cluster. The default scripts and manifests assume you will do this, so it is easiest to just go with the flow; the namespace in the provided manifests is 'rook-ceph-external'.
As with the rook operator, support roles, role bindings, and the separate namespace itself need to be created:

 kubectl create -f common-external.yaml

The Rook cluster definition needs authentication and identification data about the external cluster; this is loaded into Kubernetes secrets with standard names so that the operator and storage provisioner can access the external Ceph cluster. The data can be obtained from the Ceph master node:
* Cluster FSID -- run this command and copy the result: <code>ceph fsid</code>
* Cluster monitor address -- located in the /etc/ceph/ceph.conf file
* Cluster admin secret -- located in the /etc/ceph/ceph.client.admin.keyring file

For convenience, the set of commands has been copied to a script in the k8s-admin repository; separate copies exist for the 'prod', 'dev' and 'test' clusters. Put the information into environment variables at the top of the script as shown below, then run the script to create the secrets:

 export NAMESPACE=rook-ceph-external
 export ROOK_EXTERNAL_FSID=dc852252-bd6b-11ea-b7f2-503eaa02062c
 export ROOK_EXTERNAL_CEPH_MON_DATA=storage1=10.1.0.9:6789
 export ROOK_EXTERNAL_ADMIN_SECRET=AQAclf9e0ptoMBAAracpRwXomJ6LgiO6L8wqfw==
 bash ./import-external-cluster.sh

Note that the above script creates too many secrets -- the operator tries to create them again when building the cluster interface and errors out, since they can't be changed. We need to either edit the script so it does not create the excess secrets, or delete the ones that aren't needed. For now, we will delete them. First list all the secrets present in the new namespace:

 kubectl get secret -n rook-ceph-external

These are the ones that need to be deleted (at least for now):

 kubectl -n rook-ceph-external delete secret \
     rook-csi-cephfs-node rook-csi-cephfs-provisioner \
     rook-csi-rbd-node rook-csi-rbd-provisioner

Watch the operator log as you create the cluster below to see if any additional secrets need to be deleted.
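Gathering the three values above can be scripted on the Ceph admin node. This is a minimal sketch, not part of the k8s-admin script itself: the <code>ceph fsid</code> and <code>ceph auth get-key</code> commands are the standard CLI calls for the FSID and admin key, but they need a live cluster, so they appear here only as comments; the runnable part just shows one way to pull the monitor address out of a ceph.conf (a sample file stands in for /etc/ceph/ceph.conf, and the monitor name 'storage1' is taken from the example above):

```shell
# On a real Ceph admin node you would capture the FSID and admin key with:
#   export ROOK_EXTERNAL_FSID=$(ceph fsid)
#   export ROOK_EXTERNAL_ADMIN_SECRET=$(ceph auth get-key client.admin)

# Sample config standing in for /etc/ceph/ceph.conf; a real file may list
# several monitors and v2 addresses, which this simple awk does not handle.
cat > /tmp/ceph.conf <<'EOF'
[global]
fsid = dc852252-bd6b-11ea-b7f2-503eaa02062c
mon_host = 10.1.0.9
EOF

# Pull the value of the mon_host key (split on ' = ').
MON_ADDR=$(awk -F' *= *' '/^mon_host/ {print $2}' /tmp/ceph.conf)

# Rook expects "<mon-name>=<ip>:<port>"; 6789 is the default mon port.
export ROOK_EXTERNAL_CEPH_MON_DATA="storage1=${MON_ADDR}:6789"
echo "$ROOK_EXTERNAL_CEPH_MON_DATA"
```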
Now we can create the cluster definition that the operator will use to create our interface:

 kubectl create -f cluster-external-management.yaml
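For orientation, the heart of that manifest is a CephCluster resource with <code>external.enabled</code> set. The sketch below is based on the stock Rook example of the era, not on the exact file in the repository, so treat the image tag and paths as assumptions and check your own copy of cluster-external-management.yaml:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph-external
  namespace: rook-ceph-external
spec:
  external:
    enabled: true        # tell the operator this cluster is not managed by Rook
  # the "management" variant also pins a Ceph image so the operator can run
  # admin-level tasks against the external cluster (tag is an assumption)
  cephVersion:
    image: ceph/ceph:v15.2.8
  dataDirHostPath: /var/lib/rook
```

After creating it, watch the operator log and check the resource status with <code>kubectl -n rook-ceph-external get cephcluster</code> to confirm the connection succeeded (or, as noted above for Rook 1.5+, to see where it fails).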