Installing OpenEBS
OpenEBS is tested on various platforms. Refer to the platform versions and associated special instructions [here](/docs/next/supportedplatforms.html).
On an existing Kubernetes cluster, as a cluster administrator, you can install the latest version of OpenEBS in either of the following two ways.
- Using the stable Helm charts
- Using the OpenEBS operator through kubectl
Installation steps for both methods, using the latest OpenEBS version 0.7.2, are explained below.
Install OpenEBS using Helm Charts
Setup Helm and RBAC
Setup Helm
As a prerequisite, Helm must be installed and configured on your Kubernetes cluster.
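A quick way to confirm the setup (assuming the Helm v2 client, which the Tiller steps below imply) is to check that both the client and the Tiller server respond:

# prints the client and server (Tiller) versions if Helm is configured correctly
helm version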
Setup RBAC for Tiller before Installing the OpenEBS Chart
kubectl -n kube-system create sa tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
kubectl -n kube-system patch deploy/tiller-deploy -p '{"spec": {"template": {"spec": {"serviceAccountName": "tiller"}}}}'
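As an optional check (not part of the chart instructions), you can confirm that the Tiller deployment now runs under the tiller service account:

# prints the service account used by the Tiller deployment; expect "tiller"
kubectl -n kube-system get deploy tiller-deploy -o jsonpath='{.spec.template.spec.serviceAccountName}'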
Install OpenEBS using Stable Helm Charts
You can install OpenEBS from the Kubernetes stable Helm charts repository. Run the following commands to install OpenEBS in the openebs namespace.
Note: Ensure that you have met the prerequisites before installation.
helm repo update
helm install --namespace openebs --name openebs stable/openebs
The above command creates the OpenEBS control plane pods, along with the CAS Templates, default Storage Pool, and default Storage Classes. Once the control plane is up (see the check below), select the storage engine to provision OpenEBS volumes from here.
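As a quick check, you can list the control plane pods in the openebs namespace and confirm that they reach the Running state:

kubectl get pods -n openebs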
Default Values for Helm Chart Parameters
The following table lists the configurable parameters of the OpenEBS chart and their default values.
Parameter | Description | Default |
---|---|---|
rbac.create | Enable RBAC Resources | true |
image.pullPolicy | Container pull policy | IfNotPresent |
apiserver.image | Docker Image for API Server | openebs/m-apiserver |
apiserver.imageTag | Docker Image Tag for API Server | 0.7.2 |
apiserver.replicas | Number of API Server Replicas | 1 |
provisioner.image | Docker Image for Provisioner | openebs/openebs-k8s-provisioner |
provisioner.imageTag | Docker Image Tag for Provisioner | 0.7.2 |
provisioner.replicas | Number of Provisioner Replicas | 1 |
snapshotOperator.provisioner.image | Docker Image for Snapshot Provisioner | openebs/snapshot-provisioner |
snapshotOperator.provisioner.imageTag | Docker Image Tag for Snapshot Provisioner | 0.7.2 |
snapshotOperator.controller.image | Docker Image for Snapshot Controller | openebs/snapshot-controller |
snapshotOperator.controller.imageTag | Docker Image Tag for Snapshot Controller | 0.7.2 |
snapshotOperator.replicas | Number of Snapshot Operator Replicas | 1 |
ndm.image | Docker Image for Node Disk Manager | openebs/node-disk-manager-amd64 |
ndm.imageTag | Docker Image Tag for Node Disk Manager | v0.2.0 |
ndm.sparse.enabled | Create Sparse files and cStor Sparse Pool | true |
ndm.sparse.path | Directory where Sparse files are created | /var/openebs/sparse |
ndm.sparse.size | Size of each sparse file in bytes (the default is 10 GiB) | 10737418240 |
ndm.sparse.count | Number of sparse files to be created | 1 |
ndm.sparse.filters.excludeVendors | Exclude devices with specified vendor | CLOUDBYT,OpenEBS |
ndm.sparse.filters.excludePaths | Exclude devices with specified path patterns | loop,fd0,sr0,/dev/ram,/dev/dm- |
jiva.image | Docker Image for Jiva | openebs/jiva |
jiva.imageTag | Docker Image Tag for Jiva | 0.7.2 |
jiva.replicas | Number of Jiva Replicas | 3 |
cstor.pool.image | Docker Image for cStor Pool | openebs/cstor-pool |
cstor.pool.imageTag | Docker Image Tag for cStor Pool | 0.7.2 |
cstor.poolMgmt.image | Docker Image for cStor Pool Management | openebs/cstor-pool-mgmt |
cstor.poolMgmt.imageTag | Docker Image Tag for cStor Pool Management | 0.7.2 |
cstor.target.image | Docker Image for cStor Target | openebs/cstor-target |
cstor.target.imageTag | Docker Image Tag for cStor Target | 0.7.2 |
cstor.volumeMgmt.image | Docker Image for cStor Volume Management | openebs/cstor-volume-mgmt |
cstor.volumeMgmt.imageTag | Docker Image Tag for cStor Volume Management | 0.7.2 |
policies.monitoring.image | Docker Image for Prometheus Exporter | openebs/m-exporter |
policies.monitoring.imageTag | Docker Image Tag for Prometheus Exporter | 0.7.2 |
Specify each parameter using the --set key=value argument to helm install.
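For example, to override the number of Jiva replicas at install time (an illustrative override; any parameter from the table above can be set the same way):

helm install --namespace openebs --name openebs stable/openebs --set jiva.replicas=2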
Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,
helm install -f values.yaml --namespace openebs --name openebs stable/openebs
You can get the default values.yaml from here.
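As an illustrative sketch (not the full default file), a values.yaml that overrides only a couple of the parameters listed above might look like this:

# override only what differs from the chart defaults
jiva:
  replicas: 3
apiserver:
  imageTag: "0.7.2"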
Install OpenEBS using kubectl
You can install OpenEBS by running the following command.
Note: Ensure that you have met the prerequisites before installation.
kubectl apply -f https://openebs.github.io/charts/openebs-operator-0.7.2.yaml
The above command creates the OpenEBS control plane pods in the openebs namespace, along with the CAS Templates, default Storage Pool, and default Storage Classes. Once the control plane is up (see the check below), select your storage engine to provision OpenEBS volumes from here.
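To verify, you can list the control plane pods and the default storage classes created by the operator (storage class names may vary across releases):

kubectl get pods -n openebs
kubectl get sc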
Select Your Storage Engine
You can now choose the storage engine to provision Jiva or cStor volumes. For more information about OpenEBS storage engines, see Jiva and cStor.
As a cluster admin, you can provision Jiva or cStor based on your requirements. For more information about provisioning them, see provisioning Jiva and provisioning cStor.
Once the volumes are provisioned, you can run your stateful application workloads. Some sample YAML files for stateful workloads using OpenEBS are provided in the openebs/k8s/demo directory.
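As a minimal illustration of consuming an OpenEBS volume (assuming the default Jiva storage class created by this release, named openebs-jiva-default; run kubectl get sc to confirm the exact name on your cluster), a PersistentVolumeClaim might look like the following:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-vol-claim              # hypothetical claim name
spec:
  storageClassName: openebs-jiva-default
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5G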