PLEASE NOTE: This document applies to an unreleased version of Rook. It is strongly recommended that you only use official releases of Rook, as unreleased versions are subject to changes and incompatibilities that will not be supported in the official releases.
If you are using an official release version of Rook, you should refer to the documentation for your specific version. Documentation for other releases can be found by using the version selector in the bottom left of any doc page.
To make sure you have a Kubernetes cluster that is ready for
Rook, review the general Rook Prerequisites.
In order to configure the Ceph storage cluster, at least one of these local storage options is required:
- Raw devices (no partitions or formatted filesystems)
- Raw partitions (no formatted filesystem)
- PVs available from a storage class in `block` mode
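To check whether a device is raw, you can inspect it with `lsblk`; a minimal check, assuming standard Linux tooling is installed on the node:

```shell
# List block devices with filesystem information. An empty FSTYPE
# column means the device has no filesystem and is a candidate
# for a Ceph OSD.
lsblk -f
```

Devices that already show a `FSTYPE` (such as `ext4` or `LVM2_member`) will not be used by Rook for new OSDs.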
Ceph OSDs have a dependency on LVM in the following scenarios:
- OSDs are created on raw devices or partitions
- If encryption is enabled (`encryptedDevice: true` in the cluster CR)
- A `metadataDevice` is specified
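For illustration, a hypothetical excerpt of a `CephCluster` CR showing the two storage settings named above (the device name here is a placeholder, not a recommendation):

```yaml
spec:
  storage:
    config:
      # OSD encryption; requires LVM on the host
      encryptedDevice: "true"
      # Separate metadata device; also requires LVM on the host
      metadataDevice: "nvme0n1"
```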
LVM is not required for OSDs in these scenarios:
- Creating OSDs on PVCs using the `storageClassDeviceSets`
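A sketch of the PVC-based approach, assuming a `storageClassDeviceSets` entry along the lines of the Rook cluster-on-PVC examples (the set name, count, storage class, and size are placeholders):

```yaml
spec:
  storage:
    storageClassDeviceSets:
    - name: set1
      count: 3
      volumeClaimTemplates:
      - spec:
          storageClassName: my-block-storage-class  # placeholder
          volumeMode: Block
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: 10Gi
```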
If LVM is required for your scenario, LVM needs to be available on the hosts where OSDs will be running.
Some Linux distributions do not ship with the `lvm2` package. This package is required on all storage nodes in your k8s cluster to run Ceph OSDs. Without this package, Rook will still be able to create the Ceph OSDs successfully, but when a node is rebooted the OSD pods running on that node will fail to start. Please install LVM using your Linux distribution's package manager. For example:
CentOS:

```console
sudo yum install -y lvm2
```

Ubuntu:

```console
sudo apt-get install -y lvm2
```
RancherOS:

- Since version 1.5.0, LVM is supported.
- Logical volumes will not be activated during the boot process. You need to add a `runcmd` command for that:

```yaml
runcmd:
- [ vgchange, -ay ]
```
Ceph Flexvolume Configuration
NOTE This configuration is only needed when using the FlexVolume driver (required for Kubernetes 1.12 or earlier). The Ceph-CSI RBD driver or the Ceph-CSI CephFS driver are recommended for Kubernetes 1.13 and newer, making FlexVolume configuration redundant.
If you want to configure volumes with the Flex driver instead of CSI, the Rook agent requires setup as a Flex volume plugin to manage the storage attachments in your cluster. See the Flex Volume Configuration topic to configure your Kubernetes deployment to load the Rook volume plugin.
Extra agent mounts
On certain distributions it may be necessary to mount additional directories into the agent container. That is what the environment variable `AGENT_MOUNTS` is for. Also see the documentation in helm-operator on the parameter `agent.mounts`. The format of the variable content should be `mountname1=/host/path:/container/path,mountname2=/host/path2:/container/path2`.
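As an illustration, a hypothetical excerpt of the operator Deployment setting this variable (the mount name and paths are made up for this example):

```yaml
env:
- name: AGENT_MOUNTS
  # Format: "name=/host/path:/container/path"; separate multiple
  # mounts with commas.
  value: "extra-config=/etc/ceph-extra:/etc/ceph-extra"
```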
Ceph requires a Linux kernel built with the RBD module. Many Linux distributions have this module, but not all of them. For example, the GKE Container-Optimized OS (COS) does not have RBD.
You can test your Kubernetes nodes by running `modprobe rbd`.
If it says "not found", you may have to rebuild your kernel
or choose a different Linux distribution.
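The check above can be scripted; a minimal sketch, assuming standard Linux module tooling on the node:

```shell
# Check whether the rbd kernel module is already loaded, and if not,
# try to load it. Loading a module typically requires root.
if grep -q '^rbd ' /proc/modules 2>/dev/null || modprobe rbd 2>/dev/null; then
  echo "rbd module available"
else
  echo "rbd module not found"
fi
```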
If you will be creating volumes from a Ceph shared file system (CephFS), the recommended minimum kernel version is 4.17. If you have a kernel version less than 4.17, the requested PVC sizes will not be enforced. Storage quotas will only be enforced on newer kernels.
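To see which kernel a node is running (for the 4.17 quota requirement above), a simple check:

```shell
# Print the running kernel release; quotas are enforced on 4.17+.
uname -r
```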
Kernel modules directory configuration
Normally, on Linux, kernel modules can be found in `/lib/modules`. However, some distributions put them elsewhere. In that case the environment variable `LIB_MODULES_DIR_PATH` can be used to override the default. Also see the documentation in helm-operator on the parameter `agent.libModulesDirPath`. One notable distribution where this setting is useful is NixOS.
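For example, on NixOS the modules for the running kernel live under a system profile path rather than `/lib/modules`; a hypothetical operator env excerpt (the exact path is an assumption and depends on your NixOS configuration):

```yaml
env:
- name: LIB_MODULES_DIR_PATH
  # Path assumed for NixOS; verify it on your system
  value: "/run/current-system/kernel-modules/lib/modules"
```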