== Ceph Object Storage ==

As an alternative to a parallel file system (e.g. [https://beegfs.io BeeGFS], GPFS/Spectrum Scale, Lustre), which stores data as a collection of files, Ceph is a system that stores data as a series of objects. One well-known implementation of object storage is the Amazon S3 API. Like S3, Ceph provides additional layers of abstraction that offer alternative ways of using the storage system:

* Raw object storage (think Amazon S3)
* Block storage (think disk images)
* Parallel POSIX filesystem (such as the parallel file systems mentioned above)

Note that Ceph is not a RAID controller and does not implement redundancy or reliability by default when storing objects on disk; instead, it provides data reliability through replication of objects across devices (OSDs) and/or hosts. This replication can be controlled independently for the different storage pools that back the storage types listed above.

Ceph can be used at many different scales, from a single node with attached storage to large clusters with hundreds of storage devices and dozens of nodes. Its architecture scales because it avoids bottlenecks in the command, metadata, and data retrieval paths.

From an overall system perspective, Ceph can be used as a standalone storage platform, integrated into a Kubernetes cluster, or deployed in a hybrid approach where the Kubernetes cluster uses an externally managed storage cluster for its dynamic storage provisioning needs. In the WilliamsNet environment, Ceph is used in several configurations:

* A standalone cluster hosted on storage1, which provides the development filesystem 'workspace'
* The development Kubernetes cluster, which uses Rook to provision storage with the Ceph cluster on storage1 as its platform
* The production Kubernetes cluster, which hosts its own Ceph cluster internal to Kubernetes, providing the 'shared' filesystem and internal Kubernetes persistent storage
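The raw object layer described above can be exercised directly with the official librados Python binding (python3-rados). The sketch below is illustrative only: the pool name 'example-pool' is hypothetical, the conffile path assumes a standard client install, and replication behaviour comes from the pool's settings rather than anything the client does.

<syntaxhighlight lang="python">
# Minimal sketch: store and read back one object via librados.
# Assumptions: python3-rados is installed, /etc/ceph/ceph.conf and a
# client keyring exist, and a pool named 'example-pool' has been created.
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    # An I/O context is bound to one pool; the pool's replication
    # (size / min_size) applies to every object written through it.
    ioctx = cluster.open_ioctx('example-pool')
    try:
        ioctx.write_full('hello-object', b'stored as an object, not a file')
        print(ioctx.read('hello-object'))
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
</syntaxhighlight>

The block and filesystem layers build on this same object store: RBD images and CephFS files are ultimately striped across RADOS objects in their respective pools.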