PLEASE NOTE: This document applies to the v1.8 release and not to the latest stable release, v1.9.
Ceph CSI Drivers
There are two CSI drivers integrated with Rook that will enable different scenarios:
- RBD: This driver is optimized for RWO pod access where only one pod may access the storage
- CephFS: This driver allows for RWX with one or more pods accessing the same storage
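To make the access-mode difference between the two drivers concrete, here is a minimal sketch of one PVC per driver. The StorageClass names `rook-ceph-block` and `rook-cephfs` are the ones used in the Rook example manifests and may differ in your cluster.
```yaml
# RBD-backed PVC: ReadWriteOnce, mounted by a single pod at a time
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: rook-ceph-block   # example name from the Rook sample manifests
  resources:
    requests:
      storage: 1Gi
---
# CephFS-backed PVC: ReadWriteMany, shareable across pods
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: rook-cephfs       # example name from the Rook sample manifests
  resources:
    requests:
      storage: 1Gi
```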
The drivers are enabled automatically with the Rook operator. They will be started in the same namespace as the operator when the first CephCluster CR is created.
For documentation on consuming the storage:
- RBD: see the Block Storage documentation
- CephFS: see the Shared Filesystem documentation
The supported Ceph CSI version with Rook is 3.3.0 or greater. Refer to the Ceph CSI releases for more information.
Both drivers also support the creation of a static PV and static PVC from an existing RBD image or CephFS volume. Refer to the static PVC documentation for more information.
Configure CSI Drivers in non-default namespace
If you’ve deployed the Rook operator in a namespace other than “rook-ceph”, change the prefix in the provisioner to match the namespace you used. For example, if the Rook operator is running in the namespace “my-namespace” the provisioner value should be “my-namespace.rbd.csi.ceph.com”. The same provisioner name needs to be set in both the storageclass and snapshotclass.
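As an illustration, a trimmed StorageClass sketch for an operator running in the hypothetical namespace "my-namespace" (the remaining RBD parameters from the standard example StorageClass are unchanged and omitted here for brevity):
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
# The prefix must match the namespace the Rook operator runs in
provisioner: my-namespace.rbd.csi.ceph.com
# parameters (clusterID, pool, csi.storage.k8s.io/* secrets, ...) stay the same
# as in the standard example StorageClass and are omitted for brevity
reclaimPolicy: Delete
allowVolumeExpansion: true
```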
Liveness Sidecar
All CSI pods are deployed with a sidecar container that provides a Prometheus metric for tracking whether the CSI plugin is alive and running.
These metrics are meant to be collected by Prometheus, but can also be accessed through a GET request to a specific pod IP:
curl -X GET http://[pod ip]:[liveness-port][liveness-path] 2>/dev/null | grep csi
The expected output should be:
```console
curl -X GET http://10.109.65.142:9080/metrics 2>/dev/null | grep csi
# HELP csi_liveness Liveness Probe
# TYPE csi_liveness gauge
csi_liveness 1
```
Check the monitoring doc to see how to integrate CSI liveness and GRPC metrics into Ceph monitoring.
Dynamically Expand Volume
- For filesystem resize to be supported in your Kubernetes cluster, the Kubernetes version running in the cluster must be >= v1.15, and for block volume resize support the Kubernetes version must be >= v1.16. The `ExpandCSIVolumes` feature gate also has to be enabled for the volume resize functionality to work.
To expand a PVC, the controlling StorageClass must have `allowVolumeExpansion: true` and the `csi.storage.k8s.io/controller-expand-secret-name` and `csi.storage.k8s.io/controller-expand-secret-namespace` values set. Then expand the PVC by editing `pvc.spec.resources.requests.storage` to a value higher than the current size. Once the PVC has been expanded on the backend and the new size is reflected on the application mountpoint, the `status.capacity` of the PVC will be updated to the new size.
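For example, a one-line patch to grow a hypothetical PVC named `rbd-pvc` to 2Gi (the PVC name and size are placeholders):
```console
kubectl patch pvc rbd-pvc --type merge -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'
# Watch status.capacity until it reports the new size
kubectl get pvc rbd-pvc -o jsonpath='{.status.capacity.storage}'
```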
RBD Mirroring
To support RBD Mirroring, the Volume Replication Operator will be started in the RBD provisioner pod. The Volume Replication Operator is a Kubernetes operator that provides common and reusable APIs for storage disaster recovery. It is based on the csi-addons/spec specification and can be used by any storage provider. It follows the controller pattern and provides its extended APIs for storage disaster recovery via Custom Resource Definitions (CRDs).
Kubernetes version 1.21 or greater is required.
Enable volume replication
- Install the volume replication CRDs:
```console
kubectl create -f https://raw.githubusercontent.com/csi-addons/volume-replication-operator/v0.1.0/config/crd/bases/replication.storage.openshift.io_volumereplications.yaml
kubectl create -f https://raw.githubusercontent.com/csi-addons/volume-replication-operator/v0.1.0/config/crd/bases/replication.storage.openshift.io_volumereplicationclasses.yaml
```
- Enable the volume replication controller:
- For Helm deployments see the csi.volumeReplication.enabled setting.
- For non-Helm deployments, set `CSI_ENABLE_VOLUME_REPLICATION: "true"` in operator.yaml (see the sketch below).
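As a minimal sketch, the setting lives in the `rook-ceph-operator-config` ConfigMap defined in operator.yaml (the default `rook-ceph` namespace is assumed; adjust if your operator runs elsewhere):
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-ceph-operator-config
  namespace: rook-ceph
data:
  CSI_ENABLE_VOLUME_REPLICATION: "true"
```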
Ephemeral volume support
The generic ephemeral volume feature adds support for specifying PVCs in the `volumes` field, indicating that a user would like to create a volume as part of the pod spec.
This feature requires the GenericEphemeralVolume feature gate to be enabled.
```yaml
kind: Pod
apiVersion: v1
...
  volumes:
    - name: mypvc
      ephemeral:
        volumeClaimTemplate:
          spec:
            accessModes: ["ReadWriteOnce"]
            storageClassName: "rook-ceph-block"
            resources:
              requests:
                storage: 1Gi
```
A volume claim template is defined inside the pod spec and refers to a volume provisioned for and used by the pod over its lifecycle. The volume is provisioned when the pod is spawned and destroyed when the pod is deleted.
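For reference, a self-contained sketch of a complete pod using such an ephemeral volume (the image, pod name, and mount path are illustrative, and the `rook-ceph-block` StorageClass from the Rook examples is assumed to exist):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ephemeral-demo
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: mypvc
          mountPath: /data
  volumes:
    - name: mypvc
      ephemeral:
        volumeClaimTemplate:
          spec:
            accessModes: ["ReadWriteOnce"]
            storageClassName: rook-ceph-block
            resources:
              requests:
                storage: 1Gi
```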
CSI-Addons Controller
The CSI-Addons Controller handles requests from users to initiate an operation. Users create a CR that the controller inspects and forwards to one or more CSI-Addons sidecars for execution.
Deploying the controller
Users can deploy the controller by running the following commands:
```console
kubectl create -f https://raw.githubusercontent.com/csi-addons/kubernetes-csi-addons/v0.3.0/deploy/controller/crds.yaml
kubectl create -f https://raw.githubusercontent.com/csi-addons/kubernetes-csi-addons/v0.3.0/deploy/controller/rbac.yaml
kubectl create -f https://raw.githubusercontent.com/csi-addons/kubernetes-csi-addons/v0.3.0/deploy/controller/setup-controller.yaml
```
This creates the required CRDs and configures the permissions.
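As an optional sanity check, you can list the newly created CRDs (the `csiaddons.openshift.io` API group name is an assumption based on the v0.3.0 manifests referenced above):
```console
kubectl get crds | grep csiaddons
```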
Enable the CSI-Addons Sidecar
To use the features provided by CSI-Addons, the csi-addons sidecar containers need to be deployed in the RBD provisioner and nodeplugin pods; they are not enabled by default.
Execute the following command in the cluster to enable the CSI-Addons sidecar:
- Update the `rook-ceph-operator-config` configmap and patch the following configuration:
kubectl patch cm rook-ceph-operator-config -n rook-ceph -p $'data:\n "CSI_ENABLE_CSIADDONS": "true"'
- After enabling `CSI_ENABLE_CSIADDONS` in the configmap, a new sidecar container named `csi-addons` should start automatically in the RBD CSI provisioner and nodeplugin pods. You can verify this with the command below.
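A hedged verification sketch (assumes the default `rook-ceph` namespace and the `app=csi-rbdplugin-provisioner` label that Rook applies to the RBD provisioner pods):
```console
kubectl -n rook-ceph get pods -l app=csi-rbdplugin-provisioner \
  -o jsonpath='{range .items[*].spec.containers[*]}{.name}{"\n"}{end}' | grep csi-addons
```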
NOTE: Make sure the version of ceph-csi used is v3.5.0 or greater.
CSI-Addons supports the following operations: