Day 2 operations on OpenEBS are broadly categorised as:
- Taking snapshots/clones
- Backup and Restore
- Increasing volume size
- Adding disks to an existing pool instance
- Adding new pool instances to the current pool
- Creating new storage classes
- Upgrading OpenEBS
- Upgrading stateful applications or Kubernetes
OpenEBS snapshots and clones
An OpenEBS snapshot is a set of reference markers for data at a particular point in time. A snapshot acts as a detailed table of contents, with accessible copies of data that the user can roll back to. Snapshots in OpenEBS are instantaneous and are managed entirely through kubectl.
During the installation of OpenEBS, a snapshot-controller and a snapshot-provisioner are set up, which assist in taking snapshots. During snapshot creation, the snapshot-controller creates VolumeSnapshot and VolumeSnapshotData custom resources. The snapshot-provisioner is used to restore a snapshot as a new Persistent Volume (PV) via dynamic provisioning.
For managing snapshots with Jiva, refer to the Jiva user guide.
Creating a cStor Snapshot
Pre-requisites: A sample YAML specification and the name of the source PVC
Copy the following YAML specification into a file called snapshot.yaml:
```
apiVersion: volumesnapshot.external-storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: snapshot-cstor-volume
  namespace: default
spec:
  persistentVolumeClaimName: cstor-vol1-claim
```
Edit the YAML file to update the snapshot name, the namespace, and the name of the PVC from which the snapshot is to be taken.
Run the following command to create the snapshot:

```
kubectl apply -f snapshot.yaml
```
This command creates a snapshot of the cStor volume along with two new custom resources: a VolumeSnapshot and a VolumeSnapshotData. To list the snapshots, use the following commands:
```
kubectl get volumesnapshot
kubectl get volumesnapshotdata
```
Note: All cStor snapshots are created in the same namespace as the source PVC.
Cloning a cStor Snapshot
Once the snapshot is created, restoring from the snapshot (cloning it) is a two-step process: first create a PVC that refers to the snapshot, then use that PVC to create a new PV. This PVC should refer to a storage class called openebs-snapshot-promoter.
Copy the following YAML specification into a file called snapshot_claim.yaml:
```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vol-claim-cstor-snapshot
  namespace: default
  annotations:
    snapshot.alpha.kubernetes.io/snapshot: snapshot-cstor-volume
spec:
  storageClassName: openebs-snapshot-promoter
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 4G
```
Edit the YAML file to update:
- the name of the PVC
- the annotation referring to the snapshot name
- the size of the volume being cloned or restored
Note: The size and namespace should be the same as those of the original volume from which the snapshot was created.
- Run the following command to create the PVC:

```
kubectl apply -f snapshot_claim.yaml
```
- Get the details of the newly created PVC for the snapshot:

```
kubectl get pvc -n <namespace>
```
- Mount the above PVC in an application YAML to browse the data from the clone
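As a sketch, a minimal Pod spec that mounts the cloned PVC could look like the following (the Pod name, image, and mount path are illustrative, not prescribed by OpenEBS):

```
apiVersion: v1
kind: Pod
metadata:
  name: clone-browser        # illustrative name
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: clone-vol
      mountPath: /mnt/clone  # browse the cloned data here
  volumes:
  - name: clone-vol
    persistentVolumeClaim:
      claimName: vol-claim-cstor-snapshot   # the PVC created above
```

After the Pod is running, `kubectl exec` into it and inspect /mnt/clone to verify the cloned data.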
Deleting a cStor Snapshot
Delete the snapshot using the kubectl command and the same YAML specification that was used to create the snapshot:

```
kubectl delete -f snapshot.yaml -n <namespace>
```
This will not affect any PersistentVolumes that were already provisioned using the snapshot. Conversely, deleting any PersistentVolumes that were provisioned using the snapshot will not delete the snapshot from the OpenEBS backend.
Expanding the size of a pool instance
A pool instance is local to a node. A pool instance can be started with as little as one disk (in striped mode) or two disks (in mirrored mode). cStor pool instances support thin provisioning of data, which means that provisioning a volume of any size will succeed from a given cStorPool config.
However, as the actual used capacity of the pool grows, more disks need to be added. In 0.8.0, adding more disks to a pool instance is not supported; this feature is under active development. See the roadmap for more details.
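As an illustration of thin provisioning, a claim can request more capacity than the pool physically has and still be bound (the claim name is illustrative, and the storage class assumes a cStor-backed class such as openebs-cstor-sparse exists in your cluster):

```
# Illustrative PVC: thin provisioning lets this bind even if the
# pool's physical capacity is smaller than the requested size.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-thin-claim                 # illustrative name
spec:
  storageClassName: openebs-cstor-sparse  # assumed cStor storage class
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 100G
```

Writes will fail only when the pool's actual capacity is exhausted, which is why monitoring pool usage and adding disks matters.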
Expanding the pool to more nodes
When a new node is added, you may want to expand the cStor pool config to that node so that a new pool instance is created on it. The typical procedure would be to add new disk CRs to the pool configuration and kubectl apply the <cstor-pool-config.yaml>. This feature is under active development. See the roadmap for more details.
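As a rough sketch of what such a pool config edit might look like, the StoragePoolClaim below lists disk CRs, with an entry added for a disk on the new node. The disk names, pool name, and exact schema are placeholders assumed for illustration; consult the spec shipped with your OpenEBS release:

```
# Hypothetical pool config sketch - disk CR names are placeholders
apiVersion: openebs.io/v1alpha1
kind: StoragePoolClaim
metadata:
  name: cstor-disk-pool
spec:
  name: cstor-disk-pool
  type: disk
  poolSpec:
    poolType: striped
  disks:
    diskList:
      - disk-on-existing-node-a   # already part of the pool
      - disk-on-existing-node-b   # already part of the pool
      - disk-on-new-node          # disk CR from the newly added node
```

Applying the updated config would then be expected to create a new pool instance on the new node once the feature is supported.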
Expanding the size of a volume
The OpenEBS control plane does not support increasing the size of a volume seamlessly. Increasing the size of a provisioned volume requires support from the Kubernetes kubelet, as the existing connection has to be remounted to reflect the new volume size. This can also be tackled with the new CSI plugin, where the responsibility for the mount, unmount, and remount actions will lie with the vendor's CSI plugin rather than with the kubelet itself.
The OpenEBS team is working on both the CSI plugin and the feature to resize a provisioned volume when its PVC is patched with a new size. See the roadmap for more details.