This article covers cStor pool deployment using the CSPC schema. cStor supports two types of pool schemas:
- SPC
- CSPC (CStorPoolCluster)
CSPC is the newer schema and is the one the CStor CSI driver, a CSI driver implementation for the OpenEBS cStor storage engine, operates on.
The current implementation supports the following for CStor Volumes:
- Provisioning and De-provisioning with ext4 filesystems
- Snapshots and clones
- Volume Expansion
The following storage day-2 operations are supported via the CSPC schema:
- Pool Expansion (supported in OpenEBS version >= 1.2, alpha)
- Pool Deletion (supported in OpenEBS version >= 1.2, alpha)
- Pool Scale Up (supported in OpenEBS version >= 1.2, alpha)
- Block Device Replacement (supported in OpenEBS version >= 1.5, alpha)
Prerequisites
Before setting up the OpenEBS CStor CSI driver, make sure your Kubernetes cluster meets the following prerequisites:
- Kubernetes version 1.14 or higher.
- OpenEBS version 1.2 or higher installed.
- The CStor CSI driver operates on cStor pools provisioned using the new schema called CSPC.
- iSCSI initiator utils installed on all the worker nodes (see the example after this list).
- Users need access to install RBAC components into the kube-system namespace. The OpenEBS CStor CSI driver components are installed in the kube-system namespace to allow them to be flagged as system-critical components.
- The ExpandCSIVolumes and ExpandInUsePersistentVolumes feature gates turned on for the kubelets and kube-apiserver (see the example after this list).
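As an illustration, on Ubuntu/Debian worker nodes the iSCSI initiator utils can be installed and enabled as shown below, and the feature gates are passed as flags to the kubelet and kube-apiserver. Package names and the way flags are wired in differ across distributions and cluster installers, so treat this as a sketch:

# On each worker node (Ubuntu/Debian):
sudo apt-get update && sudo apt-get install -y open-iscsi
sudo systemctl enable --now iscsid

# Flag to add to the kubelet and kube-apiserver invocations:
--feature-gates=ExpandCSIVolumes=true,ExpandInUsePersistentVolumes=true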
Installation
STEP 1
Before installing the CSPC schema, check that all your Kubera pods are running by using the below command.
Note: If Kubera OnPrem and applications are being deployed in the same cluster, please make sure the Kubera components, OpenEBS components, CSI operator components, and cStor operator components are installed in the same namespace. In this article, all the components are installed in the openebs namespace.
kubectl get pods -n openebs
STEP 2
If the namespace used is other than `openebs`, make sure you update the namespace spec with the desired namespace (by default it is set to openebs). To edit, first get the YAML locally by executing:
wget https://raw.githubusercontent.com/openebs/charts/gh-pages/2.1.0/cstor-operator-2.1.0.yaml
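One quick way to switch every namespace reference in the downloaded file is a search-and-replace; this is a sketch that assumes the manifest uses namespace: openebs throughout, and <your-namespace> is a placeholder. After editing, apply the local file instead of the remote URL:

sed -i 's/namespace: openebs/namespace: <your-namespace>/g' cstor-operator-2.1.0.yaml
kubectl apply -f cstor-operator-2.1.0.yaml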
NOTE:
- If the Kubernetes version is >= 1.17 and the OpenEBS version is >= 2.0.0, the CSI components will be deployed in the openebs namespace.
- If the Kubernetes version is < 1.17, the CSI components will be deployed in the kube-system namespace.
Apply the cstor-operator.yaml by using the below command.
kubectl apply -f https://raw.githubusercontent.com/openebs/charts/gh-pages/2.1.0/cstor-operator-2.1.0.yaml
Sample Output:
root@demo-1:~# kubectl apply -f https://raw.githubusercontent.com/openebs/charts/gh-pages/2.1.0/cstor-operator-2.1.0.yaml
namespace/openebs created
serviceaccount/openebs-maya-operator created
customresourcedefinition.apiextensions.k8s.io/cstorpoolclusters.cstor.openebs.io created
customresourcedefinition.apiextensions.k8s.io/cstorpoolinstances.cstor.openebs.io created
customresourcedefinition.apiextensions.k8s.io/cstorvolumes.cstor.openebs.io created
customresourcedefinition.apiextensions.k8s.io/cstorvolumeconfigs.cstor.openebs.io created
customresourcedefinition.apiextensions.k8s.io/cstorvolumepolicies.cstor.openebs.io created
customresourcedefinition.apiextensions.k8s.io/cstorvolumereplicas.cstor.openebs.io created
clusterrole.rbac.authorization.k8s.io/openebs-cstor-operator created
clusterrolebinding.rbac.authorization.k8s.io/openebs-cstor-operator created
clusterrole.rbac.authorization.k8s.io/openebs-cstor-migration created
clusterrolebinding.rbac.authorization.k8s.io/openebs-cstor-migration created
deployment.apps/cspc-operator created
deployment.apps/cvc-operator created
deployment.apps/openebs-cstor-admission-server created
Verify that the CSPC operator, CVC operator, and openebs-cstor-admission-server pods are running by using the below command.
kubectl get pods -n openebs
Sample Output:
root@demo-1:~# kubectl get pods -n openebs
NAME READY STATUS RESTARTS AGE
cspc-operator-9996dc6dd-2j8kr 1/1 Running 0 5m27s
cvc-operator-76796bb45d-cf8hq 1/1 Running 0 5m27s
openebs-cstor-admission-server-5b6bdb48d4-gx7p2 1/1 Running 0 5m27s
Verify that the CSI operator pods are running by using the below command.
kubectl get pods -n kube-system
Sample Output:
root@demo-1:~# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-66bff467f8-264mg 1/1 Running 0 13d
coredns-66bff467f8-7n5nn 1/1 Running 0 13d
etcd-demo-1 1/1 Running 0 13d
kube-apiserver-demo-1 1/1 Running 0 13d
kube-controller-manager-demo-1 1/1 Running 0 13d
kube-proxy-2wf4m 1/1 Running 0 13d
kube-proxy-c6n4q 1/1 Running 0 13d
kube-proxy-dhfhs 1/1 Running 0 13d
kube-proxy-fskcv 1/1 Running 0 13d
kube-router-nk2fj 1/1 Running 0 13d
kube-router-pdnqk 1/1 Running 0 13d
kube-router-q9lts 1/1 Running 0 13d
kube-router-v74b7 1/1 Running 0 13d
kube-scheduler-demo-1 1/1 Running 0 13d
openebs-cstor-csi-controller-0 7/7 Running 0 32s
openebs-cstor-csi-node-b82tr 2/2 Running 0 31s
openebs-cstor-csi-node-cn7f2 2/2 Running 0 31s
openebs-cstor-csi-node-rbm95 2/2 Running 0 31s
STEP 3
Pool Provisioning
Users need to specify the cStor pool intent in a CSPC YAML to provision cStor pools on nodes. In this article, three striped cStor pools will be provisioned. Let us prepare a CSPC YAML now.
The following command lists all block devices, which represent the user's attached disks.
kubectl get bd -n openebs
Sample Output
root@demo-1:~# kubectl get bd -n openebs
NAME NODENAME SIZE CLAIMSTATE STATUS AGE
blockdevice-0c00765fcfbc63e79433c5098a4bf3a2 demo-3 53687091200 Unclaimed Active 46m
blockdevice-56352dfc632c90d8ae958b41d3c35657 demo-2 53687091200 Unclaimed Active 45m
blockdevice-e2979378860ad81255413cc9721095ff demo-4 53687091200 Unclaimed Active 46m
blockdevice-ef3ae98c831d5ff6a4842ccc055411ae demo-2 107374182400 Unclaimed Active 45m
Now the user has to pick one block device from each node to form the CSPC YAML; multiple block devices may also be picked from one node. Each hostname has to be matched with its respective blockDeviceName carefully.
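Optionally, a device's details (capacity, node, filesystem state) can be inspected before adding it to the CSPC, for example:

kubectl describe bd blockdevice-56352dfc632c90d8ae958b41d3c35657 -n openebs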
Sample CSPC yaml:
apiVersion: cstor.openebs.io/v1
kind: CStorPoolCluster
metadata:
  name: cspc-stripe
  namespace: openebs
spec:
  pools:
    - nodeSelector:
        kubernetes.io/hostname: "demo-2"
      dataRaidGroups:
        - blockDevices:
            - blockDeviceName: "blockdevice-56352dfc632c90d8ae958b41d3c35657"
      poolConfig:
        dataRaidGroupType: "stripe"
    - nodeSelector:
        kubernetes.io/hostname: "demo-3"
      dataRaidGroups:
        - blockDevices:
            - blockDeviceName: "blockdevice-0c00765fcfbc63e79433c5098a4bf3a2"
      poolConfig:
        dataRaidGroupType: "stripe"
    - nodeSelector:
        kubernetes.io/hostname: "demo-4"
      dataRaidGroups:
        - blockDevices:
            - blockDeviceName: "blockdevice-e2979378860ad81255413cc9721095ff"
      poolConfig:
        dataRaidGroupType: "stripe"
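For reference, a mirrored raid group on a node lists two block devices and sets dataRaidGroupType to "mirror". Below is a rough sketch of one pool entry; the device names are placeholders, not devices from this cluster:

- nodeSelector:
    kubernetes.io/hostname: "demo-2"
  dataRaidGroups:
    - blockDevices:
        - blockDeviceName: "<first-blockdevice-on-node>"
        - blockDeviceName: "<second-blockdevice-on-node>"
  poolConfig:
    dataRaidGroupType: "mirror"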
Now apply the CSPC yaml by using the below command.
kubectl apply -f cspc.yaml
Sample Output:
root@demo-1:~# kubectl apply -f cspc.yaml
cstorpoolcluster.cstor.openebs.io/cspc-stripe created
Verify the CSPC and its instance status by running the below commands and checking their output.
The DESIREDINSTANCES value should be equal to the HEALTHYINSTANCES and PROVISIONEDINSTANCES values.
root@demo-1:~# kubectl get cspc -n openebs
NAME HEALTHYINSTANCES PROVISIONEDINSTANCES DESIREDINSTANCES AGE
cspc-stripe 3 3 3 3m26s
root@demo-1:~# kubectl get cspi -n openebs
NAME HOSTNAME ALLOCATED FREE CAPACITY STATUS AGE
cspc-stripe-6sxx demo-4 230k 48200M 48200230k ONLINE 3m31s
cspc-stripe-bq96 demo-3 614k 48200M 48200614k ONLINE 3m31s
cspc-stripe-hjrp demo-2 230k 48200M 48200230k ONLINE 3m31s
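The block devices used by the pools should now report a Claimed state, which can optionally be confirmed with:

kubectl get bd -n openebs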
STEP 4
Provision a StorageClass on the CSPC pool. Use the below content to create a csi-storageclass.yaml for the same. Mention the CSPC name in the cstorPoolCluster: cspc-stripe value and set the replica count as per requirement.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: openebs-csi-cstor
provisioner: cstor.csi.openebs.io
allowVolumeExpansion: true
parameters:
  cas-type: cstor
  replicaCount: "3"
  cstorPoolCluster: cspc-stripe
Now apply the above yaml by using the below command.
kubectl apply -f csi-storageclass.yaml
Sample Output:
root@demo-1:~# kubectl apply -f csi-storageclass.yaml
storageclass.storage.k8s.io/openebs-csi-cstor created
Verify the storageclass.
Sample output:
root@demo-1:~# kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
openebs-csi-cstor cstor.csi.openebs.io Delete Immediate true 4m50s
openebs-device openebs.io/local Delete WaitForFirstConsumer false 35d
openebs-hostpath openebs.io/local Delete WaitForFirstConsumer false 35d
openebs-jiva-default openebs.io/provisioner-iscsi Delete Immediate false 35d
openebs-snapshot-promoter volumesnapshot.external-storage.k8s.io/snapshot-promoter Delete Immediate false 35d
Now the storage class can be used to deploy an application or create a PVC.
As an example, a sample Minio application will be deployed on the newly provisioned storage class.
Sample Minio application yaml. In the PVC spec, mention the storage class name in the storageClassName: openebs-csi-cstor value.
apiVersion: apps/v1
kind: Deployment
metadata:
  # This name uniquely identifies the Deployment
  name: minio-deployment
  labels:
    app: minio
spec:
  selector:
    matchLabels:
      app: minio
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        # Label is used as selector in the service.
        app: minio
    spec:
      # Refer to the PVC created earlier
      volumes:
        - name: storage1
          persistentVolumeClaim:
            # Name of the PVC created earlier
            claimName: minio-pv-claim1
      containers:
        - name: minio
          # Pulls the default Minio image from Docker Hub
          image: minio/minio
          args:
            - server
            - /storage1
          env:
            # Minio access key and secret key
            - name: MINIO_ACCESS_KEY
              value: "minio"
            - name: MINIO_SECRET_KEY
              value: "minio123"
          ports:
            - containerPort: 9000
          # Mount the volume into the pod
          volumeMounts:
            - name: storage1 # must match the volume name, above
              mountPath: "/storage1"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-pv-claim1
  labels:
    app: minio-storage-claim1
spec:
  storageClassName: openebs-csi-cstor
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: v1
kind: Service
metadata:
  name: minio-service
  labels:
    app: minio
spec:
  ports:
    - port: 9000
      nodePort: 32701
      protocol: TCP
  selector:
    app: minio
  sessionAffinity: None
  type: NodePort
Save the above yaml as minio_application.yaml and apply it by using the below command to deploy the application.
kubectl apply -f minio_application.yaml
Verify the Minio pod is running.
root@demo-1:~# kubectl get pods
NAME READY STATUS RESTARTS AGE
minio-deployment-76956d859f-sh96z 1/1 Running 0 2m32s
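Optionally, confirm that the PVC is bound and that a corresponding cStor volume config was created (assuming cvc is the short name registered by the CStorVolumeConfig CRD installed earlier):

kubectl get pvc minio-pv-claim1
kubectl get cvc -n openebs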
Verify the Minio service.
root@demo-1:~# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 35d
minio-service NodePort 10.101.229.145 <none> 9000:32701/TCP 2m38s
Now the application can be accessed through the NodePort using a node IP.
The URL is one of the node IPs combined with the Minio service NodePort, for example: http://10.43.3.56:32701/minio/login
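The node IPs can be listed with:

kubectl get nodes -o wide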
UI example: