PLEASE NOTE: This document applies to an unreleased version of Rook. It is strongly recommended that you only use official releases of Rook, as unreleased versions are subject to changes and incompatibilities that will not be supported in the official releases.
If you are using an official release version of Rook, you should refer to the documentation for your specific version.
Ceph Object Store CRD
Rook allows creation and customization of object stores through the custom resource definitions (CRDs). The following settings are available for Ceph object stores.
NOTE: This example requires at least 3 bluestore OSDs, each on a different node. The example's erasureCoded chunk settings require at least 3 bluestore OSDs, and because the failureDomain is set to host (the default), each OSD must be on a different node.
```yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store
  namespace: rook-ceph
spec:
  metadataPool:
    failureDomain: host
    replicated:
      size: 3
  dataPool:
    failureDomain: host
    erasureCoded:
      dataChunks: 2
      codingChunks: 1
  gateway:
    type: s3
    sslCertificateRef:
    port: 80
    securePort:
    instances: 1
    allNodes: false
    placement:
    #  nodeAffinity:
    #    requiredDuringSchedulingIgnoredDuringExecution:
    #      nodeSelectorTerms:
    #      - matchExpressions:
    #        - key: role
    #          operator: In
    #          values:
    #          - rgw-node
    #  tolerations:
    #  - key: rgw-node
    #    operator: Exists
    #  podAffinity:
    #  podAntiAffinity:
    resources:
    #  limits:
    #    cpu: "500m"
    #    memory: "1024Mi"
    #  requests:
    #    cpu: "500m"
    #    memory: "1024Mi"
```
Object Store Settings
name: The name of the object store to create, which will be reflected in the pool and other resource names.
namespace: The namespace of the Rook cluster where the object store is created.
The pools allow all of the settings defined in the Pool CRD spec. For more details, see the Pool CRD settings. In the example above, there must be at least three hosts (size 3) and at least three devices (2 data + 1 coding chunks) in the cluster.
metadataPool: The settings used to create all of the object store metadata pools. Must use replication.
dataPool: The settings to create the object store data pool. Can use replication or erasure coding.
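For clusters that cannot satisfy the erasure-coding requirements above, the data pool can use replication instead. A minimal sketch of that alternative (the surrounding CephObjectStore spec is assumed to be the same as the example above):

```yaml
# Hypothetical alternative dataPool: replication instead of erasure coding.
# With failureDomain: host and size: 3, the cluster still needs 3 nodes,
# but no longer needs the 2+1 chunk layout of the erasure-coded example.
dataPool:
  failureDomain: host
  replicated:
    size: 3
```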
The gateway settings correspond to the RGW daemon settings.
sslCertificateRef: The name of the Kubernetes secret that contains the SSL certificate to be used for secure connections to the object store. If the certificate is not specified, SSL will not be configured. Rook will look in the secret provided at the `cert` key name. The value of the `cert` key must be in the format expected by the RGW service: "The server key, server certificate, and any other CA or intermediate certificates be supplied in one file. Each of these items must be in pem form."
port: The port on which the RGW pods and the RGW service will be listening (not encrypted).
securePort: The secure port on which RGW pods will be listening. An SSL certificate must be specified.
instances: The number of pods that will be started to load balance this object store. Ignored if allNodes is true.
allNodes: Whether RGW pods should be started on all nodes. If true, a daemonset is created. If false, `instances` must be set.
placement: The Kubernetes placement settings to determine where the RGW pods should be started in the cluster.
resources: Set resource requests/limits for the Gateway Pod(s), see Resource Requirements/Limits.
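To illustrate the sslCertificateRef setting above, here is a sketch of a secret the gateway could reference. The secret name `my-store-tls` and the certificate contents are assumptions for illustration, not values from this document:

```yaml
apiVersion: v1
kind: Secret
metadata:
  # Hypothetical name; reference it from spec.gateway.sslCertificateRef
  name: my-store-tls
  namespace: rook-ceph
stringData:
  # RGW expects the server key, server certificate, and any CA or
  # intermediate certificates concatenated into one file, all in PEM form,
  # stored under the "cert" key.
  cert: |
    -----BEGIN PRIVATE KEY-----
    ...
    -----END PRIVATE KEY-----
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
```

With this secret in place, the spec would set `sslCertificateRef: my-store-tls` and a `securePort` such as 443.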
Object Store mime types

Rook provides a default `mime.types` file for each Ceph object store. This file is stored in a Kubernetes ConfigMap named `rook-ceph-rgw-<STORE-NAME>-mime-types`. For most users, the default file should suffice; however, users may edit the file in the ConfigMap as they desire. Users may have their own special file types, and particularly security-conscious users may wish to pare down the file to reduce the possibility of a file type exploit.
Rook will not overwrite an existing `mime.types` ConfigMap, so user modifications will not be destroyed. If the object store is destroyed and recreated, the ConfigMap will also be destroyed and created anew.
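As a rough sketch of the ConfigMap described above (the store name `my-store` and the listed entries are assumptions; the actual default file shipped by Rook is longer):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # Name follows the rook-ceph-rgw-<STORE-NAME>-mime-types pattern
  name: rook-ceph-rgw-my-store-mime-types
  namespace: rook-ceph
data:
  mime.types: |
    # Illustrative subset only; edit or pare down as desired
    application/json    json
    text/html           html htm
    image/png           png
```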