Introduction
Before selecting a block device, you should understand the mount status and the types of disks that can be identified by NDM.
A brief description of disk mount status on Node
cStor can consume disks that are attached to the Nodes (visible to the OS as SCSI devices), do not contain any filesystem, and are not mounted on the Node. If you are reusing existing disks for cStor pool creation, it is recommended to wipe them first.
In case you need to use Local SSDs as block devices, you must first unmount them and remove any filesystem on them. On GKE, Local SSDs are formatted with ext4 and mounted under `/mnt/disks/`. If the Local SSDs are not unmounted and their filesystems are not removed, cStor pool creation will fail.
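For example, the following is a minimal sketch of preparing a GKE Local SSD. The mount point `/mnt/disks/ssd0` and the device name `/dev/sdb` are assumptions for illustration only; replace them with the values reported by `lsblk` on your node.
```
# Illustration only: confirm the actual mount point and device name with `lsblk`
# before running these destructive commands.
sudo umount /mnt/disks/ssd0    # unmount the Local SSD
sudo wipefs -af /dev/sdb       # erase the ext4 filesystem signature
```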
The following is an example output of the `lsblk` command on a node.
```
root@gke-ranjith-jiva-default-pool-5620c11e-5q4q:~# lsblk
NAME      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda         8:0    0   45G  0 disk
├─sda1      8:1    0 44.9G  0 part /
├─sda14     8:14   0    4M  0 part
└─sda15     8:15   0  106M  0 part /boot/efi
sdb         8:16   0   40G  0 disk
```
In the above example, the `sdb` device can be used for creating a cStor storage pool on the corresponding Node where it is attached, since the blockdevice is not mounted and does not contain any filesystem.
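If you want to double-check that a device such as `sdb` really has no filesystem, `lsblk -f` prints the FSTYPE column; an empty FSTYPE for the whole disk indicates it is a suitable candidate. The device name here is only an example.
```
lsblk -f /dev/sdb    # an empty FSTYPE column means no filesystem was detected
```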
A brief description of disk partition on Node
With the current OpenEBS version, NDM filters out partition paths (say, `/dev/sdb1`) and does not create a `blockdevice` CR for them; it creates a blockdevice CR only for the parent disk (say, `/dev/sdb`). If you need to use a partitioned device path, you must create a custom `blockdevice` CR manually (a rough sketch is shown after the example below), and NDM will not manage such CRs.
The following is an example output of the `lsblk` command on a node.
```
root@gke-ranjith-jiva-default-pool-5620c11e-5q4q:~# lsblk
NAME      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda         8:0    0   45G  0 disk
├─sda1      8:1    0 44.9G  0 part /
├─sda14     8:14   0    4M  0 part
└─sda15     8:15   0  106M  0 part /boot/efi
sdb         8:16   0   40G  0 disk
sdc         8:32   0   40G  0 disk
└─sdc1      8:33   0   40G  0 part
```
In the above example, `sdc` cannot be used directly for cStor pool creation since it has a partition.
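As mentioned above, if you still need to consume a partitioned path such as `/dev/sdc1`, a custom `blockdevice` CR has to be created manually. The following is only a rough sketch, assuming the `openebs.io/v1alpha1` BlockDevice schema used by NDM; the exact field names and required values can differ between OpenEBS versions, so verify them against the NDM documentation for your release before applying.
```
# Rough sketch only: replace the placeholder values, and verify the schema
# against the NDM documentation for your OpenEBS version.
cat <<EOF | kubectl apply -f -
apiVersion: openebs.io/v1alpha1
kind: BlockDevice
metadata:
  name: blockdevice-custom-sdc1
  namespace: openebs
  labels:
    kubernetes.io/hostname: <node-hostname>
    ndm.io/managed: "false"              # NDM will not manage this CR
    ndm.io/blockdevice-type: blockdevice
spec:
  capacity:
    storage: <capacity-in-bytes>
  nodeAttributes:
    nodeName: <node-name>
  path: /dev/sdc1
status:
  claimState: Unclaimed
  state: Active
EOF
```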
How to wipe out a disk if it contains a filesystem or partition
If you need to use a disk that contains a filesystem, is mounted, or has a partition, you can wipe the disk using the following command. Note that this command erases the existing data and filesystem signatures, leaving a clean disk.
```
wipefs -af <device_path_on_node>
```
Example:
```
wipefs -af /dev/sdc
```
Note: If the disk is mounted, unmount it first before running the above command.
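To confirm that the wipe succeeded, `wipefs` can be run without options; in that mode it only lists the signatures it finds and does not modify the disk, so empty output means the device is clean. The device name below is just an example.
```
wipefs /dev/sdc    # lists remaining signatures; no output means the disk is clean
```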
Selection of blockDevice CR
You can get the blockdevice CR details using the following command:
```
kubectl get bd -n <openebs_installed_namespace>
```
Example:
```
kubectl get bd -n openebs
```
Example Output:
```
NAME                                           SIZE          CLAIMSTATE   STATUS   AGE
blockdevice-1c10eb1bb14c94f02a00373f2fa09b93   42949672960   Unclaimed    Active   43s
blockdevice-77f834edba45b03318d9de5b79af0734   42949672960   Unclaimed    Active   42s
blockdevice-936911c5c9b0218ed59e64009cc83c8f   42949672960   Unclaimed    Active   42s
sparse-19dd32c40806c3521c6868d171a9488c        10737418240   Unclaimed    Active   43s
sparse-42ce4eaa60e083a1d1f814cf99e87cc8        10737418240   Unclaimed    Active   42s
sparse-6b75f7ec99b72df813be068727973c7d        10737418240   Unclaimed    Active   44s
```
This output is from a 3-node cluster where one node has a single GPD disk, another node has 2 GPD disks, and the third node has no disk attached; NDM therefore creates 3 blockdevice CRs for these 3 GPD disks. The `sparse-*` entries are sparse blockdevices created by NDM and are not physical disks.
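Because each blockdevice CR is labelled with `kubernetes.io/hostname`, you can also print the owning node next to each CR using the standard `kubectl` label-column flag. The namespace here assumes the default `openebs` installation.
```
kubectl get bd -n openebs -L kubernetes.io/hostname
```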
Choose a blockdevice CR that does not contain a filesystem and is not mounted on the node. This can be checked using the following command:
```
kubectl describe bd -n openebs <blockdevice_CR>
```
Example:
```
kubectl describe bd -n openebs blockdevice-77f834edba45b03318d9de5b79af0734
```
By checking `kubernetes.io/hostname` in the Labels and `Spec.Path` in the output, you can locate the disk on the Node and verify whether it contains a partition. From the above example, the `blockdevice-77f834edba45b03318d9de5b79af0734` CR is not the right candidate for cStor pool creation because its disk has a partition. The other 2 blockdevice CRs can be chosen for cStor pool creation.
Also, by checking `Spec.Filesystem`, you can find whether the disk contains any filesystem; avoid such blockdevice CRs when creating a cStor pool. For example, if a disk is mounted on a node at `/root/test` and contains an ext4 filesystem, the description of the corresponding blockdevice CR will include the following snippet. In this case, this disk is also not the right candidate for cStor pool creation.
```
Filesystem:
  Fs Type:      ext4
  Mount Point:  /root/test
```
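The same two fields can also be pulled out directly instead of reading the full describe output. The jsonpath below assumes that `Spec.Path` and `Spec.Filesystem` are exposed as `.spec.path` and `.spec.filesystem` in the CR, and it reuses the blockdevice name from the earlier example.
```
kubectl get bd -n openebs blockdevice-77f834edba45b03318d9de5b79af0734 \
  -o jsonpath='{.spec.path}{"\n"}{.spec.filesystem}{"\n"}'
```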
In general, for creating cStor pools, select block devices that are unclaimed, not mounted on the node, and do not contain any filesystem.