OpenEBS is the most widely deployed and easy to use open-source storage solution for Kubernetes.
OpenEBS is the leading open-source example of a category of cloud native storage solutions sometimes called Container Attached Storage. OpenEBS is listed as an open-source example in the CNCF Storage Landscape White Paper among the hyperconverged storage solutions.
Some key aspects that make OpenEBS different compared to other traditional storage solutions:
- Built using the micro-services architecture like the applications it serves. OpenEBS is itself deployed as a set of containers on Kubernetes worker nodes. Uses Kubernetes itself to orchestrate and manage OpenEBS components.
- Built completely in user space, making it highly portable and able to run on any OS/platform.
- Completely intent-driven, inheriting the same principles that drive the ease of use with Kubernetes.
- OpenEBS supports a range of storage engines so that developers can deploy the storage technology appropriate to their application design objectives. Distributed applications like Cassandra can use the LocalPV engine for lowest-latency writes. Monolithic applications like MySQL and PostgreSQL can use the ZFS-based cStor engine or Mayastor for resilience. Streaming applications like Kafka can use the NVMe engine Mayastor for best performance in edge environments. Across engine types, OpenEBS provides a consistent framework for high availability, snapshots, clones, and manageability.
An added advantage of being a completely Kubernetes native solution is that administrators and developers can interact and manage OpenEBS using all the wonderful tooling that is available for Kubernetes like kubectl, Helm, Prometheus, Grafana, Weave Scope, etc.
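For example, once OpenEBS is installed, its components and storage classes can be inspected with plain kubectl. The commands below are a sketch and assume OpenEBS was installed into the `openebs` namespace (a common convention):

```shell
# List the OpenEBS control-plane and data-plane pods
# (assumes OpenEBS runs in the "openebs" namespace)
kubectl get pods -n openebs

# List the StorageClasses registered in the cluster,
# including the ones OpenEBS creates (e.g. openebs-hostpath)
kubectl get storageclass

# OpenEBS volumes surface as standard PersistentVolume objects
kubectl get pv
```

Because everything is a standard Kubernetes object, the same commands work with whatever tooling wraps kubectl in your environment.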
Check out what users of OpenEBS have to say about their experience in OpenEBS Adoption stories.
Types of OpenEBS Storage Engines
OpenEBS is a collection of storage engines, allowing you to pick the right storage solution for your Stateful workloads and the type of Kubernetes platform you run. At a high level, OpenEBS supports two broad categories of volumes - Local and Replicated.
Local Volumes are accessible only from a single node in the cluster. Pods using a Local Volume have to be scheduled on the node where the volume is provisioned. Local Volumes are typically preferred for workloads like Cassandra, MongoDB, Elasticsearch, etc. that are distributed in nature and have high availability built into them.
Depending on the type of storage attached to your Kubernetes worker nodes, you can select from different flavors of Dynamic Local PV - Hostpath, Device, ZFS or Rawfile.
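As a sketch of how a Local Volume is consumed, the PersistentVolumeClaim below requests storage from the `openebs-hostpath` StorageClass that a default install creates; the claim name and size are illustrative:

```shell
# Create a PVC backed by Dynamic Local PV - Hostpath.
# "local-hostpath-pvc" and the 5Gi size are illustrative values.
kubectl apply -f - <<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-hostpath-pvc
spec:
  storageClassName: openebs-hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
EOF
```

The hostpath StorageClass typically uses `WaitForFirstConsumer` volume binding, so the PVC stays Pending until a pod that uses it is scheduled - at which point the volume is provisioned on that pod's node.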
Replicated Volumes (aka Highly Available Volumes)
Replicated Volumes, as the name suggests, are those that have their data synchronously replicated to multiple nodes, so the volumes can sustain node failures. Replication can also be set up across availability zones, helping applications move across zones.
Replicated Volumes also support enterprise storage features like snapshots, clones, volume expansion, and so forth. Replicated Volumes are a preferred choice for Stateful workloads like Percona/MySQL, Jira, GitLab, etc.
Depending on the type of storage attached to your Kubernetes worker nodes and application performance requirements, you can select from Jiva, cStor or Mayastor.
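Whichever replicated engine you pick, the application side stays the same: a PVC that references an engine-specific StorageClass. The sketch below assumes a StorageClass named `openebs-replicated` has already been created for the chosen engine - the class name, PVC name, and size are illustrative, and the exact StorageClass definition is engine-specific (see the Quickstart for your engine):

```shell
# Request a replicated volume. "openebs-replicated" is an assumed,
# engine-specific StorageClass; the name and 10Gi size are illustrative.
kubectl apply -f - <<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: replicated-pvc
spec:
  storageClassName: openebs-replicated
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF
```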
Selecting the right storage engine
See the following table for recommendations on which engine is right for your application, depending on the application requirements and the storage available on your Kubernetes nodes.
| Application requirements | Storage type | OpenEBS volumes |
| --- | --- | --- |
| Low latency, high availability, synchronous replication, snapshots, clones, thin provisioning | SSDs/cloud volumes | OpenEBS Mayastor |
| High availability, synchronous replication, snapshots, clones, thin provisioning | Disks/SSDs/cloud volumes | OpenEBS cStor |
| High availability, synchronous replication, thin provisioning | Hostpath or externally mounted storage | OpenEBS Jiva |
| Low latency, local PV | Hostpath or externally mounted storage | Dynamic Local PV - Hostpath |
| Low latency, local PV | Disks/SSDs/cloud volumes | Dynamic Local PV - Device |
| Low latency, local PV, snapshots, clones | Disks/SSDs/cloud volumes | OpenEBS Dynamic Local PV - ZFS |
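The engine you select is ultimately expressed as a StorageClass. As a sketch, the manifest below defines a custom hostpath Local PV StorageClass with a non-default base directory; the class name and `/var/local-hostpath` path are illustrative choices:

```shell
# Define a custom Dynamic Local PV - Hostpath StorageClass.
# "local-hostpath" and the BasePath value are illustrative.
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-hostpath
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: hostpath
      - name: BasePath
        value: /var/local-hostpath
provisioner: openebs.io/local
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
EOF
```

Other engines follow the same pattern with their own provisioner and parameters; the per-engine Quickstart guides carry the authoritative definitions.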
OpenEBS is also developing a Dynamic Local PV - Rawfile storage engine, currently available for alpha testing.
OpenEBS has a vibrant community that can help you get started. If you have further questions and want to learn more about OpenEBS, please join the OpenEBS community on Kubernetes Slack. If you are already signed up, head to our discussions in the #openebs channel.
Installing OpenEBS in your cluster is as simple as a few helm commands, depending on the storage engine. Here is the list of our Quickstart guides with detailed instructions for each storage engine.
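As a minimal sketch of a default install, the commands below add the OpenEBS chart repository and install the chart; the release name and `openebs` namespace are conventional choices rather than requirements, and engine-specific installs may need extra chart values:

```shell
# Add the OpenEBS Helm chart repository and refresh the local index
helm repo add openebs https://openebs.github.io/charts
helm repo update

# Install the default chart into its own namespace;
# "openebs" as release name and namespace is a common convention
helm install openebs openebs/openebs -n openebs --create-namespace
```

After the release is deployed, `kubectl get pods -n openebs` should show the OpenEBS components coming up.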