This article focuses on upgrading the Kubernetes version of an OpenEBS cluster running on the DigitalOcean cloud platform. The cluster under consideration has applications running on cStor, with volumes (block devices) attached to the nodes. Detailed, guided steps on how to deploy OpenEBS on DigitalOcean are documented in the article OpenEBS on DigitalOcean marketplace.
DigitalOcean provides several options for upgrading the Kubernetes version; users can follow DigitalOcean's official documentation on upgrading Kubernetes.
Here we have used the CLI-based upgrade for the cluster. Once the user starts the upgrade, it can take some time to complete, and DigitalOcean suggests planning for at least 4 hours of downtime for the Kubernetes upgrade activity. In this example, the setup has 3 nodes, and each node has a volume (block device) attached.
Note: In DigitalOcean terminology, blockdevice refers to the volume attached to a node.
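As a rough sketch of the CLI flow (assuming the doctl client is configured; <cluster-id> and <version-slug> are placeholders, and the exact flags may differ across doctl versions), the available upgrade versions can be listed and the upgrade triggered as follows:
# List the Kubernetes versions this cluster can be upgraded to
doctl kubernetes cluster get-upgrades <cluster-id>
# Trigger the upgrade to the chosen version
doctl kubernetes cluster upgrade <cluster-id> --version <version-slug>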
In this example, the Kubernetes version is upgraded from 1.15 to 1.16. Once the upgrade activity is completed, DigitalOcean spins up new nodes with the updated Kubernetes version. However, the volumes (block devices) on which the cStor pools were deployed will still be associated with the old nodes, which no longer exist. So, the user has to attach the volumes to the new nodes. For example, if the user earlier had 3 nodes with one volume attached to each node, then each of the existing volumes should be attached to one of the new nodes; the volumes can be attached to the nodes in any order (a CLI sketch for this is shown below). Once the volumes are attached to the nodes, the user has to follow a few manual steps to bring the cStor pools back online.
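A minimal sketch of attaching the existing volumes to the new nodes from the CLI (assuming doctl; <volume-id> and <droplet-id> are placeholders, and the same can be done from the DigitalOcean control panel):
# Attach an existing volume (block device) to one of the new nodes (droplets)
doctl compute volume-action attach <volume-id> <droplet-id>
# Verify which droplet each volume is attached to
doctl compute volume list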
In this example, OpenEBS is installed in the openebs namespace. To list the pods running in the openebs namespace, execute:
kubectl get pods -n openebs
NAME READY STATUS RESTARTS AGE
cstor-disk-pool-ep0q-7f5cb4dc7d-6x5hw 0/3 Pending 0 24m
cstor-disk-pool-j1bz-54fd7d866b-7tsgp 0/3 Pending 0 18m
cstor-disk-pool-ufo7-6b8cc566b4-dglf9 0/3 Pending 0 29m
openebs-admission-server-7c7f87c96f-4svhh 1/1 Running 0 24m
openebs-apiserver-bf55cd997-c84mg 1/1 Running 0 18m
openebs-localpv-provisioner-6979bcf5b5-xnsml 1/1 Running 0 18m
openebs-ndm-bcz8k 1/1 Running 0 18m
openebs-ndm-operator-7d86667447-dmqkv 1/1 Running 0 24m
openebs-ndm-rh8fs 1/1 Running 0 14m
openebs-ndm-t952t 1/1 Running 0 24m
openebs-provisioner-7cc5cbb998-k89lv 1/1 Running 0 18m
openebs-snapshot-operator-9477b744d-x5v4d 2/2 Running 0 18m
pvc-369a1c55-dabc-4bab-9037-925f2d1273e5-target-696495cf94njqqm 3/3 Running 0 24m
In this example, the application pod runs in the default namespace; we have a Percona application deployed on the cluster. It can be seen that the cstor-disk-pool pods are in the Pending state after the Kubernetes upgrade activity.
To list the application pods, execute:
kubectl get pods
Output:
NAME READY STATUS RESTARTS AGE
percona-767db88d9d-ff77g 0/1 ContainerCreating 0 20m
The application pod in this example is in the ContainerCreating state and will move to the Running state once the cStor pool pods are running. For that, the user should follow the steps below.
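Optionally, before making any changes, describing one of the Pending pool pods (pod name taken from the listing above) should indicate why it cannot be scheduled; with the old nodes gone, the scheduler events typically report that no node matches the pod's node selector:
kubectl describe pod cstor-disk-pool-ep0q-7f5cb4dc7d-6x5hw -n openebs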
The fields that have to be changed are the kubernetes.io/hostname label in the CSP, the cstorpool.openebs.io/hostname annotation in the CVR, and the kubernetes.io/hostname entry under nodeSelector in the pool deployment; each is called out in the steps below.
STEP 1:
List the CSPs.
To list the CSPs, execute:
kubectl get csp
Output:
NAME ALLOCATED FREE CAPACITY STATUS TYPE AGE
cstor-disk-pool-ep0q 15.7M 49.7G 49.8G Healthy striped 80m
cstor-disk-pool-j1bz 15.2M 49.7G 49.8G Healthy striped 80m
cstor-disk-pool-ufo7 15.7M 49.7G 49.8G Healthy striped 80m
Now, get the YAML of one of the CSPs. To do so, execute:
kubectl get csp <cstor_disk_pool_name> -o yaml
Suppose we want to inspect cstor-disk-pool-ep0q; the command will be as follows:
kubectl get csp cstor-disk-pool-ep0q -o yaml
Note down the blockdevice on which the CSP is running.
apiVersion: openebs.io/v1alpha1
kind: CStorPool
metadata:
annotations:
openebs.io/csp-lease: '{"holder":"openebs/cstor-disk-pool-ep0q-6db9cc9877-kgxzh","leaderTransition":2}'
creationTimestamp: "2019-11-29T11:35:14Z"
generation: 8696
labels:
kubernetes.io/hostname: chandan-pool-6knd
openebs.io/cas-template-name: cstor-pool-create-default-1.4.0
openebs.io/cas-type: cstor
openebs.io/storage-pool-claim: cstor-disk-pool
openebs.io/version: 1.4.0
name: cstor-disk-pool-ep0q
ownerReferences:
- apiVersion: openebs.io/v1alpha1
blockOwnerDeletion: true
controller: true
kind: StoragePoolClaim
name: cstor-disk-pool
uid: 95204474-8772-4318-b97a-587d534dabc8
resourceVersion: "887824"
selfLink: /apis/openebs.io/v1alpha1/cstorpools/cstor-disk-pool-ep0q
uid: 4ee586b4-014d-474f-b720-8ba9cbec62a3
spec:
group:
- blockDevice:
- deviceID: /dev/disk/by-id/scsi-0DO_Volume_volume-blr1-02
inUseByPool: true
name: blockdevice-f40225b528210eaa55193ac0fa0213e3
poolSpec:
cacheFile: /tmp/pool1.cache
overProvisioning: false
poolType: striped
status:
capacity:
free: 49.7G
total: 49.8G
used: 75.4M
lastTransitionTime: "2019-11-29T13:02:02Z"
lastUpdateTime: "2019-12-02T13:10:03Z"
phase: Healthy
versionDetails:
autoUpgrade: false
desired: 1.4.0
status:
current: 1.4.0
dependentsUpgraded: true
lastUpdateTime: null
state: ""
STEP 2:
Now, get the details of the blockdevice and note down the hostname (the kubernetes.io/hostname label) from the blockdevice details. This hostname has to be updated in the CSP, the CVR, and the pool deployment.
To get the blockdevice details, execute the following command:
kubectl get bd -n openebs -o yaml blockdevice-cfff6a69dcc47bf878bf2095d6442c2f
apiVersion: openebs.io/v1alpha1
kind: BlockDevice
metadata:
creationTimestamp: "2019-11-29T07:33:37Z"
generation: 2
labels:
kubernetes.io/hostname: chandan-pool-6kd1
ndm.io/blockdevice-type: blockdevice
ndm.io/managed: "true"
name: blockdevice-cfff6a69dcc47bf878bf2095d6442c2f
namespace: openebs
resourceVersion: "53145"
selfLink: /apis/openebs.io/v1alpha1/namespaces/openebs/blockdevices/blockdevice-cfff6a69dcc47bf878bf2095d6442c2f
uid: d56e1f8b-ffa8-473d-977f-482a70ae8b9b
spec:
capacity:
logicalSectorSize: 512
physicalSectorSize: 0
storage: 53687091200
claimRef:
apiVersion: openebs.io/v1alpha1
kind: BlockDeviceClaim
name: bdc-d56e1f8b-ffa8-473d-977f-482a70ae8b9b
namespace: openebs
resourceVersion: "53144"
uid: 882f5a3f-928e-4313-b015-771f72a865ec
details:
compliance: ""
deviceType: ""
firmwareRevision: ""
model: Volume
serial: volume-blr1-01
vendor: DO
devlinks:
- kind: by-id
links:
- /dev/disk/by-id/scsi-0DO_Volume_volume-blr1-01
- kind: by-path
links:
- /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:1
filesystem: {}
nodeAttributes:
nodeName: chandan-pool-6knv
partitioned: "No"
path: /dev/sda
status:
claimState: Claimed
state: Active
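The hostname can also be extracted directly from the kubernetes.io/hostname label; a one-liner sketch (note the escaped dots required by kubectl's jsonpath syntax):
kubectl get bd blockdevice-cfff6a69dcc47bf878bf2095d6442c2f -n openebs -o jsonpath='{.metadata.labels.kubernetes\.io/hostname}'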
STEP 3:
Now, edit the same CSP and update the kubernetes.io/hostname label with the hostname collected from the blockdevice CR.
To edit, execute:
kubectl edit csp cstor-disk-pool-ep0q
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: openebs.io/v1alpha1
kind: CStorPool
metadata:
annotations:
openebs.io/csp-lease: '{"holder":"openebs/cstor-disk-pool-ep0q-6db9cc9877-kgxzh","leaderTransition":2}'
creationTimestamp: "2019-11-29T11:35:14Z"
generation: 8696
labels:
kubernetes.io/hostname: chandan-pool-6kd1
openebs.io/cas-template-name: cstor-pool-create-default-1.4.0
openebs.io/cas-type: cstor
openebs.io/storage-pool-claim: cstor-disk-pool
openebs.io/version: 1.4.0
name: cstor-disk-pool-ep0q
ownerReferences:
- apiVersion: openebs.io/v1alpha1
blockOwnerDeletion: true
controller: true
kind: StoragePoolClaim
name: cstor-disk-pool
uid: 95204474-8772-4318-b97a-587d534dabc8
resourceVersion: "887824"
selfLink: /apis/openebs.io/v1alpha1/cstorpools/cstor-disk-pool-ep0q
uid: 4ee586b4-014d-474f-b720-8ba9cbec62a3
spec:
group:
- blockDevice:
- deviceID: /dev/disk/by-id/scsi-0DO_Volume_volume-blr1-02
inUseByPool: true
name: blockdevice-f40225b528210eaa55193ac0fa0213e3
poolSpec:
cacheFile: /tmp/pool1.cache
overProvisioning: false
poolType: striped
status:
capacity:
free: 49.7G
total: 49.8G
used: 75.4M
lastTransitionTime: "2019-11-29T13:02:02Z"
lastUpdateTime: "2019-12-02T13:10:03Z"
phase: Healthy
versionDetails:
autoUpgrade: false
desired: 1.4.0
status:
current: 1.4.0
dependentsUpgraded: true
lastUpdateTime: null
state: ""
STEP 4:
Next, edit the CVR associated with the cStor pool updated in step 3, and set the cstorpool.openebs.io/hostname annotation to the same hostname.
To list the CVRs, execute:
kubectl get cvr -n openebs
Output:
NAME USED ALLOCATED STATUS AGE
pvc-369a1c55-dabc-4bab-9037-925f2d1273e5-cstor-disk-pool-ep0q 54.0M 15.3M Healthy 81m
pvc-369a1c55-dabc-4bab-9037-925f2d1273e5-cstor-disk-pool-j1bz 52.6M 14.8M Degraded 81m
pvc-369a1c55-dabc-4bab-9037-925f2d1273e5-cstor-disk-pool-ufo7 54.0M 15.3M Healthy 81m
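CVRs also carry a cstorpool.openebs.io/name label (visible in the CVR metadata below), so, as an optional sketch, a label selector can narrow the list to a single pool's CVRs:
kubectl get cvr -n openebs -l cstorpool.openebs.io/name=cstor-disk-pool-ep0q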
The CVR associated with a cStor pool can be identified from its name alone. In this example, the CVR corresponding to the cStor pool cstor-disk-pool-ep0q is pvc-369a1c55-dabc-4bab-9037-925f2d1273e5-cstor-disk-pool-ep0q. To edit the CVR, execute:
kubectl edit cvr <CVR_name> -n openebs
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: openebs.io/v1alpha1
kind: CStorVolumeReplica
metadata:
annotations:
cstorpool.openebs.io/hostname: chandan-pool-6kd1
isRestoreVol: "false"
openebs.io/storage-class-ref: |
name: cstor-sc-do
resourceVersion: 53482
creationTimestamp: "2019-11-29T11:37:19Z"
finalizers:
- cstorvolumereplica.openebs.io/finalizer
generation: 8763
labels:
cstorpool.openebs.io/name: cstor-disk-pool-ep0q
cstorpool.openebs.io/uid: 4ee586b4-014d-474f-b720-8ba9cbec62a3
cstorvolume.openebs.io/name: pvc-369a1c55-dabc-4bab-9037-925f2d1273e5
openebs.io/cas-template-name: cstor-volume-create-default-1.4.0
openebs.io/persistent-volume: pvc-369a1c55-dabc-4bab-9037-925f2d1273e5
openebs.io/version: 1.4.0
name: pvc-369a1c55-dabc-4bab-9037-925f2d1273e5-cstor-disk-pool-ep0q
namespace: openebs
resourceVersion: "894232"
selfLink: /apis/openebs.io/v1alpha1/namespaces/openebs/cstorvolumereplicas/pvc-369a1c55-dabc-4bab-9037-925f2d1273e5-cstor-disk-pool-ep0q
uid: 1077257f-bb4b-4a3f-8b17-b6b7f4f57e15
spec:
capacity: 5G
replicaid: 05C64E83BBBA22D5E59B542637B192E7
targetIP: 10.245.130.204
zvolWorkers: ""
status:
capacity:
totalAllocated: 14.0M
used: 51.6M
lastTransitionTime: "2019-11-29T13:13:30Z"
lastUpdateTime: "2019-12-02T13:44:03Z"
phase: Healthy
versionDetails:
autoUpgrade: false
desired: 1.4.0
status:
current: 1.4.0
dependentsUpgraded: true
lastUpdateTime: null
state: ""
STEP 5:
Edit the associated deployment for the cStor pool and update the hostname in the nodeSelector.
To list the deployments in openebs namespace, execute:
kubectl get deployment -n openebs
Output:
NAME READY UP-TO-DATE AVAILABLE AGE
cstor-disk-pool-ep0q 0/1 1 0 84m
cstor-disk-pool-j1bz 0/1 1 0 84m
cstor-disk-pool-ufo7 0/1 1 0 84m
openebs-admission-server 1/1 1 1 6h22m
openebs-apiserver 1/1 1 1 6h22m
openebs-localpv-provisioner 1/1 1 1 6h22m
openebs-ndm-operator 1/1 1 1 6h22m
openebs-provisioner 1/1 1 1 6h22m
openebs-snapshot-operator 1/1 1 1 6h22m
pvc-369a1c55-dabc-4bab-9037-925f2d1273e5-target 1/1 1 1 82m
The deployment associated with the cStor pool can be identified from the name. In this example, for cStor pool cstor-disk-pool-ep0q the associated deployment is cstor-disk-pool-ep0q.
To edit the cStor pool deployment, execute the following command:
kubectl edit deploy cstor-disk-pool-ep0q -n openebs
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "2"
openebs.io/monitoring: pool_exporter_prometheus
creationTimestamp: "2019-11-29T11:35:14Z"
generation: 2
labels:
app: cstor-pool
openebs.io/cas-template-name: cstor-pool-create-default-1.4.0
openebs.io/cstor-pool: cstor-disk-pool-ep0q
openebs.io/storage-pool-claim: cstor-disk-pool
openebs.io/version: 1.4.0
name: cstor-disk-pool-ep0q
namespace: openebs
ownerReferences:
- apiVersion: openebs.io/v1alpha1
blockOwnerDeletion: true
controller: true
kind: CStorPool
name: cstor-disk-pool-ep0q
uid: 4ee586b4-014d-474f-b720-8ba9cbec62a3
resourceVersion: "69706"
selfLink: /apis/apps/v1/namespaces/openebs/deployments/cstor-disk-pool-ep0q
uid: 14790d13-8e67-4974-b0e9-4c2702522371
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: cstor-pool
strategy:
type: Recreate
template:
metadata:
annotations:
openebs.io/monitoring: pool_exporter_prometheus
prometheus.io/path: /metrics
prometheus.io/port: "9500"
prometheus.io/scrape: "true"
creationTimestamp: null
labels:
app: cstor-pool
openebs.io/cstor-pool: cstor-disk-pool-ep0q
openebs.io/storage-pool-claim: cstor-disk-pool
openebs.io/version: 1.4.0
spec:
containers:
- env:
- name: OPENEBS_IO_CSTOR_ID
value: 4ee586b4-014d-474f-b720-8ba9cbec62a3
image: quay.io/openebs/cstor-pool:1.4.0
imagePullPolicy: IfNotPresent
lifecycle:
postStart:
exec:
command:
- /bin/sh
- -c
- sleep 2
livenessProbe:
exec:
command:
- /bin/sh
- -c
- zfs set io.openebs:livenesstimestamp="$(date +%s)" cstor-$OPENEBS_IO_CSTOR_ID
failureThreshold: 3
initialDelaySeconds: 300
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 30
name: cstor-pool
ports:
- containerPort: 12000
- image: quay.io/openebs/cstor-pool-mgmt:1.4.0
imagePullPolicy: IfNotPresent
name: cstor-pool-mgmt
resources: {}
securityContext:
privileged: true
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /dev
name: device
- mountPath: /tmp
name: tmp
- mountPath: /var/openebs/sparse
name: sparse
- mountPath: /run/udev
name: udev
- args:
- -e=pool
command:
- maya-exporter
image: quay.io/openebs/m-exporter:1.4.0
imagePullPolicy: IfNotPresent
name: maya-exporter
ports:
- containerPort: 9500
protocol: TCP
resources: {}
securityContext:
privileged: true
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /dev
name: device
- mountPath: /tmp
name: tmp
- mountPath: /var/openebs/sparse
name: sparse
- mountPath: /run/udev
name: udev
dnsPolicy: ClusterFirst
nodeSelector:
kubernetes.io/hostname: chandan-pool-6kd1
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: openebs-maya-operator
serviceAccountName: openebs-maya-operator
terminationGracePeriodSeconds: 30
volumes:
- hostPath:
path: /dev
type: Directory
name: device
- hostPath:
path: /var/openebs/sparse/shared-cstor-disk-pool
type: DirectoryOrCreate
name: tmp
- hostPath:
path: /var/openebs/sparse
type: DirectoryOrCreate
name: sparse
- hostPath:
path: /run/udev
type: Directory
name: udev
status:
availableReplicas: 1
conditions:
- lastTransitionTime: "2019-11-29T13:02:08Z"
lastUpdateTime: "2019-11-29T13:02:08Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: "2019-11-29T11:35:14Z"
lastUpdateTime: "2019-11-29T13:02:08Z"
message: ReplicaSet "cstor-disk-pool-ep0q-6db9cc9877" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
observedGeneration: 2
readyReplicas: 1
replicas: 1
updatedReplicas: 1
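The nodeSelector change can likewise be applied with a patch instead of an interactive edit; a sketch with <new-hostname> as a placeholder for the hostname noted from the blockdevice:
kubectl patch deploy cstor-disk-pool-ep0q -n openebs -p '{"spec":{"template":{"spec":{"nodeSelector":{"kubernetes.io/hostname":"<new-hostname>"}}}}}'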
Once the CSP, the CVR, and the deployment are updated for the cStor pool, the cStor pool pod will come to the Running state.
To view all the pods in the openebs namespace, execute:
kubectl get pods -n openebs
Output:
NAME READY STATUS RESTARTS AGE
cstor-disk-pool-ep0q-6db9cc9877-kgxzh 3/3 Running 0 3m32s
cstor-disk-pool-j1bz-54fd7d866b-7tsgp 0/3 Pending 0 54m
cstor-disk-pool-ufo7-6b8cc566b4-dglf9 0/3 Pending 0 65m
openebs-admission-server-7c7f87c96f-4svhh 1/1 Running 0 59m
openebs-apiserver-bf55cd997-c84mg 1/1 Running 0 54m
openebs-localpv-provisioner-6979bcf5b5-xnsml 1/1 Running 0 54m
openebs-ndm-bcz8k 1/1 Running 0 54m
openebs-ndm-operator-7d86667447-dmqkv 1/1 Running 0 59m
openebs-ndm-rh8fs 1/1 Running 0 50m
openebs-ndm-t952t 1/1 Running 0 60m
openebs-provisioner-7cc5cbb998-k89lv 1/1 Running 0 54m
openebs-snapshot-operator-9477b744d-x5v4d 2/2 Running 0 54m
pvc-369a1c55-dabc-4bab-9037-925f2d1273e5-target-696495cf94njqqm 3/3 Running 0 59m
STEP 6:
Follow the same steps (1-5) for the other two cStor pools as well. Once all of them are updated, all the cStor pool pods will be in the Running state and the CVRs will be Healthy. The application pod will also start running.
To check the status of the pods running in the openebs namespace, execute:
kubectl get pods -n openebs
Output:
NAME READY STATUS RESTARTS AGE
cstor-disk-pool-ep0q-6db9cc9877-kgxzh 3/3 Running 0 16m
cstor-disk-pool-j1bz-898dd896f-dj4gd 3/3 Running 0 5m2s
cstor-disk-pool-ufo7-69b76948cc-7wcnd 3/3 Running 0 12s
openebs-admission-server-7c7f87c96f-4svhh 1/1 Running 0 72m
openebs-apiserver-bf55cd997-c84mg 1/1 Running 0 66m
openebs-localpv-provisioner-6979bcf5b5-xnsml 1/1 Running 0 66m
openebs-ndm-bcz8k 1/1 Running 0 67m
openebs-ndm-operator-7d86667447-dmqkv 1/1 Running 0 72m
openebs-ndm-rh8fs 1/1 Running 0 63m
openebs-ndm-t952t 1/1 Running 0 72m
openebs-provisioner-7cc5cbb998-k89lv 1/1 Running 0 66m
openebs-snapshot-operator-9477b744d-x5v4d 2/2 Running 0 66m
pvc-369a1c55-dabc-4bab-9037-925f2d1273e5-target-696495cf94njqqm 3/3 Running 0 72m
To check the status of the CVRs, execute:
kubectl get cvr -n openebs
Output:
NAME USED ALLOCATED STATUS AGE
pvc-369a1c55-dabc-4bab-9037-925f2d1273e5-cstor-disk-pool-ep0q 51.6M 14.0M Healthy 98m
pvc-369a1c55-dabc-4bab-9037-925f2d1273e5-cstor-disk-pool-j1bz 51.6M 14.0M Healthy 98m
pvc-369a1c55-dabc-4bab-9037-925f2d1273e5-cstor-disk-pool-ufo7 54.0M 15.3M Healthy 98m
Now, to verify that the application pod is running, execute:
kubectl get pods
Output:
NAME READY STATUS RESTARTS AGE
percona-767db88d9d-ff77g 1/1 Running 0 95m