Example Configurations

Rook and Ceph can be configured in multiple ways to provide block devices, shared filesystem volumes, or object storage in a Kubernetes namespace. While several examples are provided to simplify storage setup, settings are available to optimize various production environments.

See the example YAML files folder for all of the Rook/Ceph setup spec files.

Common Resources

The first step to deploy Rook is to create the CRDs and other common resources. The configuration for these resources will be the same for most deployments. The crds.yaml and common.yaml files set these resources up.

kubectl create -f crds.yaml -f common.yaml

The examples all assume the operator and all Ceph daemons will be started in the same namespace. If deploying the operator in a separate namespace, see the comments throughout common.yaml.

Operator

After the common resources are created, the next step is to create the Operator deployment. Several spec file examples are provided in this directory:

  • operator.yaml: The most common settings for production deployments
    • kubectl create -f operator.yaml
  • operator-openshift.yaml: Includes all of the operator settings for running a basic Rook cluster in an OpenShift environment. You will also want to review the OpenShift Prerequisites to confirm the settings.
    • oc create -f operator-openshift.yaml

Settings for the operator are configured through environment variables on the operator deployment. The individual settings are documented in operator.yaml.
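
For example, the operator's log level can be adjusted by editing the corresponding environment variable before creating the deployment. The excerpt below is only a sketch of the relevant part of the Deployment; the full list of variables and their defaults is in the shipped operator.yaml.

# Sketch of the env section of the rook-ceph-operator Deployment in operator.yaml.
# Only this fragment is shown; the value here is illustrative.
spec:
  template:
    spec:
      containers:
        - name: rook-ceph-operator
          env:
            # Raise to DEBUG when troubleshooting the operator
            - name: ROOK_LOG_LEVEL
              value: "INFO"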

Cluster CRD

Now that the operator is running, create the Ceph storage cluster with the CephCluster CR. This CR contains the most critical settings that influence how the operator configures the storage, so it is important to understand the various ways a cluster can be configured. These examples represent several different approaches.
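
For orientation, a minimal CephCluster sketch is shown below. It assumes the rook-ceph namespace and consumes all discovered nodes and devices; the Ceph image tag is only an example, and the shipped cluster manifests contain many more settings.

apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    # Example image tag; use the Ceph version recommended for your Rook release
    image: quay.io/ceph/ceph:v18
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3
    allowMultiplePerNode: false
  storage:
    # Consume all nodes and raw devices that the operator discovers
    useAllNodes: true
    useAllDevices: true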

See the Cluster CRD topic for more details and more examples for the settings.

Setting up consumable storage

Now we are ready to set up block, shared filesystem, or object storage in the Rook cluster. These storage types are created with the CephBlockPool, CephFilesystem, and CephObjectStore CRs, respectively.

Block Devices

Ceph provides raw block device volumes to pods. Each example below sets up a storage class which can then be used to provision a block device in application pods. Each storage class references a Ceph pool, which determines the level of data redundancy in Ceph:

  • storageclass.yaml: This example illustrates replication of 3 for production scenarios and requires at least three worker nodes. Data is replicated on three different Kubernetes worker nodes, so intermittent or long-lasting single-node failures will not result in data unavailability or loss.
  • storageclass-ec.yaml: Configures erasure coding for data durability rather than replication. Ceph's erasure coding is more space-efficient than replication, so you can get high reliability without the 3x storage cost of the preceding example, at the price of higher CPU overhead for encoding and decoding on the worker nodes. Erasure coding requires at least three worker nodes. See the Erasure coding documentation.
  • storageclass-test.yaml: Replication of 1 for test scenarios. Requires only a single node. Do not use this for production applications. A single node failure can result in full data loss.
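
As a rough sketch of what the replica-3 example defines, the pool looks roughly like the following (the pool name replicapool and the rook-ceph namespace are the conventional example values; your deployment may differ):

apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  # Spread the three replicas across different hosts
  failureDomain: host
  replicated:
    size: 3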

The block storage classes are found in the examples directory:

  • csi/rbd: the CSI driver examples for block devices
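
The storage class in csi/rbd then references that pool through the RBD CSI provisioner. The sketch below assumes the operator runs in the rook-ceph namespace (the provisioner name is prefixed with the operator namespace) and uses the CSI secrets that the operator generates; see the shipped file for the complete set of parameters.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
# Provisioner name is <operator namespace>.rbd.csi.ceph.com
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  pool: replicapool
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/fstype: ext4
  # CSI secrets created by the operator in the cluster namespace
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Delete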

See the CephBlockPool CRD topic for more block storage settings.

Shared Filesystem

Ceph filesystem (CephFS) allows the user to mount a shared POSIX-compliant folder into one or more application pods. This storage is similar to NFS shared storage or CIFS shared folders, as explained here.

Shared Filesystem storage contains configurable pools for different scenarios:

  • filesystem.yaml: Replication of 3 for production scenarios. Requires at least three worker nodes.
  • filesystem-ec.yaml: Erasure coding for production scenarios. Requires at least three worker nodes.
  • filesystem-test.yaml: Replication of 1 for test scenarios. Requires only a single node.
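
As a point of reference, a minimal replica-3 CephFilesystem along the lines of filesystem.yaml looks roughly like the sketch below (the filesystem name myfs and the rook-ceph namespace are example values; field names follow recent Rook releases):

apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPools:
    - name: replicated
      replicated:
        size: 3
  metadataServer:
    # One active MDS with a standby-replay daemon
    activeCount: 1
    activeStandby: true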

Dynamic provisioning is possible with the CSI driver. The storage class for shared filesystems is found in the csi/cephfs directory.
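
That storage class points the CephFS CSI provisioner at the filesystem. A trimmed sketch, assuming the myfs filesystem and rook-ceph namespace above, looks like:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs
# Provisioner name is <operator namespace>.cephfs.csi.ceph.com
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
  clusterID: rook-ceph
  fsName: myfs
  # Data pool name is derived from the filesystem and data pool names
  pool: myfs-replicated
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Delete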

See the Shared Filesystem CRD topic for more details on the settings.

Object Storage

Ceph supports storing blobs of data called objects with HTTP(S) get/put/post/delete semantics. This storage is similar to AWS S3 storage, for example.

Object storage contains multiple pools that can be configured for different scenarios:

  • object.yaml: Replication of 3 for production scenarios. Requires at least three worker nodes.
  • object-openshift.yaml: Replication of 3 with rgw in a port range valid for OpenShift. Requires at least three worker nodes.
  • object-ec.yaml: Erasure coding rather than replication for production scenarios. Requires at least three worker nodes.
  • object-test.yaml: Replication of 1 for test scenarios. Requires only a single node.
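
A minimal replica-3 CephObjectStore in the spirit of object.yaml is sketched below (the store name my-store, the gateway port, and the namespace are example values):

apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store
  namespace: rook-ceph
spec:
  metadataPool:
    failureDomain: host
    replicated:
      size: 3
  dataPool:
    failureDomain: host
    replicated:
      size: 3
  gateway:
    # One RGW instance listening on port 80
    port: 80
    instances: 1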

See the Object Store CRD topic for more details on the settings.

Object Storage User

  • object-user.yaml: Creates a simple object storage user and generates credentials for the S3 API
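
A CephObjectStoreUser of the following form is roughly what object-user.yaml creates; the operator then stores the generated S3 credentials in a Kubernetes secret (the names below are the usual example values):

apiVersion: ceph.rook.io/v1
kind: CephObjectStoreUser
metadata:
  name: my-user
  namespace: rook-ceph
spec:
  # Must match the name of an existing CephObjectStore
  store: my-store
  displayName: "my display name"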

Object Storage Buckets

The Ceph operator also runs an object store bucket provisioner which can grant access to existing buckets or dynamically provision new buckets.

  • object-bucket-claim-retain.yaml: Creates a request for a new bucket by referencing a StorageClass that retains the bucket when the initiating OBC is deleted.
  • object-bucket-claim-delete.yaml: Creates a request for a new bucket by referencing a StorageClass that deletes the bucket when the initiating OBC is deleted.
  • storageclass-bucket-retain.yaml: Creates a new StorageClass which defines the Ceph Object Store and retains the bucket after the initiating OBC is deleted.
  • storageclass-bucket-delete.yaml: Creates a new StorageClass which defines the Ceph Object Store and deletes the bucket after the initiating OBC is deleted.
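
Roughly, the delete variant pairs a bucket StorageClass with an ObjectBucketClaim, as sketched below (names assume the my-store object store and rook-ceph namespace used earlier):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-delete-bucket
# Bucket provisioner name is <operator namespace>.ceph.rook.io/bucket
provisioner: rook-ceph.ceph.rook.io/bucket
reclaimPolicy: Delete
parameters:
  objectStoreName: my-store
  objectStoreNamespace: rook-ceph
---
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: ceph-delete-bucket
spec:
  # A bucket name is generated with this prefix
  generateBucketName: ceph-bkt
  storageClassName: rook-ceph-delete-bucket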