CephFS mirroring asynchronously replicates snapshots to a remote CephFS file system. A snapshot is synchronized by copying the snapshot data to the remote file system and then creating a snapshot with the same name (for the given directory) on the remote. Mirroring is generally useful when planning for disaster recovery: it suits clusters that are geographically distributed, where stretching a single cluster is not possible due to high latency.
```yaml
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataPool:
    failureDomain: host
    replicated:
      size: 3
  dataPools:
    - name: replicated
      failureDomain: host
      replicated:
        size: 3
  preserveFilesystemOnDelete: true
  metadataServer:
    activeCount: 1
    activeStandby: true
  mirroring:
    enabled: true
    # list of Kubernetes Secrets containing the peer token
    # for more details see: https://docs.ceph.com/en/latest/dev/cephfs-mirroring/#bootstrap-peers
    # Add the secret name if it already exists, else specify the empty list here.
    peers:
      secretNames:
      #- secondary-cluster-peer
    # specify the schedule(s) on which snapshots should be taken
    # see the official syntax here https://docs.ceph.com/en/latest/cephfs/snap-schedule/#add-and-remove-schedules
    snapshotSchedules:
      - path: /
        interval: 24h # daily snapshots
        # The startTime should be specified in the format YYYY-MM-DDTHH:MM:SS
        # If startTime is not specified, the start time defaults to midnight UTC.
        # see usage here https://docs.ceph.com/en/latest/cephfs/snap-schedule/#usage
        # startTime: 2022-07-15T11:55:00
    # manage retention policies
    # see the duration syntax here https://docs.ceph.com/en/latest/cephfs/snap-schedule/#add-and-remove-retention-policies
    snapshotRetention:
      - path: /
        duration: "h 24"
```
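Once the filesystem is running, the schedules and retention policies configured above can be inspected with the Ceph snap-schedule CLI. A sketch, assuming the standard Rook toolbox deployment (`rook-ceph-tools`) is installed and the filesystem is named `myfs` as in the manifest above:

```shell
# Show the snapshot schedule attached to the root path of the filesystem.
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- \
  ceph fs snap-schedule status / --fs myfs

# List schedules (and, with --recursive, those on subdirectories).
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- \
  ceph fs snap-schedule list / --fs myfs
```

These commands require a live cluster, so the output will vary; the `status` output reports the path, interval, retention spec, and the timestamp of the last created snapshot.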
Once mirroring is enabled, Rook by default creates its own bootstrap peer token so that it can be used by another cluster. The bootstrap peer token is stored in a Kubernetes Secret, whose name is reported in the status field of the CephFilesystem CR:
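The Secret name and the token it holds can be read back with `kubectl`. A hedged example: the status field path and the Secret name shown here are illustrative and should be checked against your cluster's actual CR status:

```shell
# Read the name of the bootstrap peer Secret from the CephFilesystem status
# (field path is an assumption; confirm with `kubectl get cephfilesystem myfs -o yaml`).
kubectl -n rook-ceph get cephfilesystem myfs \
  -o jsonpath='{.status.info.fsMirrorBootstrapPeerSecretName}'

# Decode the base64-encoded token from that Secret
# ("fs-peer-token-myfs" is a placeholder for the name printed above).
kubectl -n rook-ceph get secret fs-peer-token-myfs \
  -o jsonpath='{.data.token}' | base64 -d
```

The decoded token is what gets imported on the peer cluster (via its `peers.secretNames` list) to establish the mirroring relationship.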