This article describes how to create a Kubernetes cluster on Azure VMs using the kubeadm tool.
- This tutorial uses Weave Net as a CNI.
- This tutorial works well with Kubernetes v1.16.3 and Weave Net v2.6.0.
- This tutorial assumes the use of Ubuntu 18.04 LTS nodes in the cluster.
- This tutorial assumes that you have access to a bash terminal to use SSH to log in to the nodes remotely.
Before you begin...
- This tutorial is based on the Azure Cloud Shell. You can use the Azure Cloud Shell in two ways:
- By going to shell.azure.com.
- By going to portal.azure.com and clicking on the Cloud Shell icon in the top navigation bar. The Azure Portal can be used as a handy monitoring tool. (A quick sign-in check is shown after this list.)
- Refer to 'Things to watch out for' to get detailed precautionary steps before proceeding with Weave Net.
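Whichever way you open it, the Azure Cloud Shell comes with the Azure CLI already signed in. As a quick optional check before creating anything, you can confirm which subscription the CLI will create resources in:
az account show -o table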
Creating the nodes
STEP 1
All Azure resources exist within resource groups. If you already have a resource group, you may skip this step and move on to STEP 2. You can list existing resource groups using the command:
az group list -o table
If you do not have a resource group, create a resource group using the following command. Substitute <location>
for the region name and <resource-group-name>
for the name of the resource group.
az group create --name <resource-group-name> --location <location>
Sample Command:
az group create --name myResourceGroup --location eastus
az group list -o table
Sample Output:
Name                              Location      Status
--------------------------------  ------------  ---------
cloud-shell-storage-centralindia  centralindia  Succeeded
DefaultResourceGroup-CUS          centralus     Succeeded
myResourceGroup                   eastus        Succeeded
DefaultResourceGroup-CIN          centralindia  Succeeded
STEP 2
We will need a virtual network for our nodes. Use the following command to list all available virtual networks in the resource group:
az network vnet list -g <resource-group-name>
Use the following command to list all available subnets in the virtual network:
az network vnet subnet list -g <resource-group-name> --vnet-name <vnet-name>
To create a virtual network and also a subnet for our cluster, use the following command:
az network vnet create -g <resource-group-name> -n <vnet-name> --address-prefix <network-cidr> --subnet-name <subnet-name> --subnet-prefix <subnet-cidr>
Sample command:
az network vnet create -g myResourceGroup -n myVNet --address-prefix 10.0.0.0/16 --subnet-name mySubnet --subnet-prefix 10.0.0.0/24
We verify the creation of the virtual network with:
az network vnet list -g myResourceGroup -o table
Sample output:
Name    ResourceGroup    Location    NumSubnets    Prefixes     DnsServers    DDOSProtection    VMProtection
------  ---------------  ----------  ------------  -----------  ------------  ----------------  --------------
myVNet  myResourceGroup  eastus      1             10.0.0.0/16                False             False
We verify the creation of the subnet with:
az network vnet subnet list -g myResourceGroup --vnet-name myVNet -o table
Sample output:
AddressPrefix    Name      PrivateEndpointNetworkPolicies    PrivateLinkServiceNetworkPolicies    ProvisioningState    ResourceGroup
---------------  --------  --------------------------------  -----------------------------------  -------------------  ---------------
10.0.0.0/24      mySubnet  Enabled                           Enabled                              Succeeded            myResourceGroup
STEP 3
Next, we create the master node. We will use the az vm create -g <resource-group-name> -n <instance-name> --image UbuntuLTS command. Some of the other flags that can be used with this command are listed below; a sample command combining a few of the optional flags follows the list.
- --size <size-name>: This is one of the predefined Azure VM sizes. You can see all the available VM sizes in your region by executing the command az vm list-sizes -l <location> -o table.
- --generate-ssh-keys / --ssh-key-values <ssh-key>: The default way of communicating with an Azure Linux VM is via SSH. You can use the --generate-ssh-keys flag to generate an SSH key during VM creation and then use it to authenticate your SSH session with the VM. Alternatively, you can use the --ssh-key-values flag to supply an SSH public key you already have.
- --admin-username <username> / --admin-password <password>: These flags let you choose the username and/or password of the VM. In our example, we have used the --admin-username flag to select a username, and we have used SSH authentication (the default) instead of password authentication.
- --data-disk-sizes-gb <size-gb>: This is used to attach data disks to the VM. Sizes for multiple disks can be given, separated by spaces.
- --storage-sku <sku-name>: This is the type of disk that is going to be attached to the VM. You can also specify multiple SKUs, and specify the SKU of the OS disk, with the same flag; refer to the az vm create documentation for the exact syntax. There are 4 types of disk SKUs at the moment: Ultra SSD (UltraSSD_LRS), Premium SSD (Premium_LRS), Standard SSD (StandardSSD_LRS) and Standard HDD (Standard_LRS).
- --os-disk-size-gb <osdisk-size-gb>: This is the size of the OS disk.
- --public-ip-address-dns-name <dns-name>: This is the DNS name for the public IP assigned to the VM.
- --zone <zone-number>: The zone number is used when you need to create a cross-zone cluster, or when a specific availability zone is preferred among several. Your region must also support availability zones; refer to the Azure documentation on availability zones to check whether it does.
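For illustration only, here is what a creation command might look like if you combined a few of the optional flags described above. The names and values below are placeholders rather than resources used later in this tutorial, and the --zone flag assumes your region supports availability zones.
az vm create -g myResourceGroup -n example-vm --image UbuntuLTS --size Standard_B2ms --admin-username azureuser --generate-ssh-keys --data-disk-sizes-gb 32 64 --os-disk-size-gb 64 --storage-sku Standard_LRS --zone 1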
Execute the following command to create the VM:
az vm create -g <resource-group-name> -n <vm-name> --admin-username <username> --image UbuntuLTS --size <size-name> --subnet $(az network vnet subnet list -g <resource-group-name> --vnet-name <vnet-name> --query [].id -o tsv) --ssh-key-values <ssh-key>
To verify the creation of the VM, execute:
az vm list -g <resource-group-name> -d -o table
Sample command:
az vm create -g myResourceGroup -n ubuntu-master --admin-username master --image UbuntuLTS --size Standard_B2ms --subnet $(az network vnet subnet list -g myResourceGroup --vnet-name myVNet --query [].id -o tsv) --storage-sku Standard_LRS --ssh-key-values "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDP+nm/U4pkLeJS6thRDgxkwqTq80Npq0+58XYybEU50gflf2b/yIHBU25+kvCXSvEkFhPVQbdDJjZaEG4qqgoMdmpDCPP28se70+BXbcf8UoK6JyAoPaO5vGZ5bAG6PVbP1OZ0ldmzMB/eOec1DB/Z8PASInQOfCYPh005fWVuxaIEGlN/EekrDfKp7XY9toquQ0D3arGSSmZpUEeLpu/oZzcNSauiz3fw65mRWkap+hDp/OdkeptXvsSQ/fFTKQwrb+ndida2PvwSARG55e0/8IhKZV6iJKEoWaZW6EHoYSdRqBlUTWHP0HiDIrmxrAS6n50Nc4ofyod/e2htRdTpOhvYM4EDmfbCmlXQmacGJ2qa1GMkCvIRKc/kzVcJHOwbJ1R80V7qXjxlrJ7ok8TyD466ljGt8R3CPu4nhq786RU6don3bDWnLBZueHmQqp7r4hDuLyVbBtgzZMNHzW3pe88KxHfQiRtjWxSnSjExJx1ZOavXyQRuKXe2MKMvqX/bAdSGA4ldtoGgLRnubN0mWbJgTPzkInV/FQgKukCJYzE/RPoLnNiByFa2iQQ2a3ocqJUoJIw9VLXBmjTAvrAort8ieEnUvhkyKb0mPwlcVrB8KA8wlzIhJyWhOl3o9BPOEJspsdEe3DkG2PhHEmzf6fm8s074NFx1zgTAfxhehQ== abcde"
##Use your own SSH key here.
##By default, SSH keys are stored in the $HOME/.ssh directory on Linux.
Execute the following command to verify the creation of the VM:
az vm list -g myResourceGroup -d -o table
Sample Output:
Name           ResourceGroup    PowerState    PublicIps      Fqdns    Location    Zones
-------------  ---------------  ------------  -------------  -------  ----------  -------
ubuntu-master  myResourceGroup  VM running    104.211.41.74           eastus
STEP 4
Next, we create the worker nodes in a way similar to how we created the master node.
Sample Command:
az vm create -g myResourceGroup -n ubuntu-worker1 --admin-username worker1 --image UbuntuLTS --size Standard_B2ms --subnet $(az network vnet subnet list -g myResourceGroup --vnet-name myVNet --query [].id -o tsv) --storage-sku Standard_LRS --ssh-key-values "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDP+nm/U4pkLeJS6thRDgxkwqTq80Npq0+58XYybEU50gflf2b/yIHBU25+kvCXSvEkFhPVQbdDJjZaEG4qqgoMdmpDCPP28se70+BXbcf8UoK6JyAoPaO5vGZ5bAG6PVbP1OZ0ldmzMB/eOec1DB/Z8PASInQOfCYPh005fWVuxaIEGlN/EekrDfKp7XY9toquQ0D3arGSSmZpUEeLpu/oZzcNSauiz3fw65mRWkap+hDp/OdkeptXvsSQ/fFTKQwrb+ndida2PvwSARG55e0/8IhKZV6iJKEoWaZW6EHoYSdRqBlUTWHP0HiDIrmxrAS6n50Nc4ofyod/e2htRdTpOhvYM4EDmfbCmlXQmacGJ2qa1GMkCvIRKc/kzVcJHOwbJ1R80V7qXjxlrJ7ok8TyD466ljGt8R3CPu4nhq786RU6don3bDWnLBZueHmQqp7r4hDuLyVbBtgzZMNHzW3pe88KxHfQiRtjWxSnSjExJx1ZOavXyQRuKXe2MKMvqX/bAdSGA4ldtoGgLRnubN0mWbJgTPzkInV/FQgKukCJYzE/RPoLnNiByFa2iQQ2a3ocqJUoJIw9VLXBmjTAvrAort8ieEnUvhkyKb0mPwlcVrB8KA8wlzIhJyWhOl3o9BPOEJspsdEe3DkG2PhHEmzf6fm8s074NFx1zgTAfxhehQ== abcde"
##Use your own SSH key here.
##By default, SSH keys are stored in the $HOME/.ssh directory on Linux.
az vm create -g myResourceGroup -n ubuntu-worker2 --admin-username worker2 --image UbuntuLTS --size Standard_B2ms --subnet $(az network vnet subnet list -g myResourceGroup --vnet-name myVNet --query [].id -o tsv) --storage-sku Standard_LRS --ssh-key-values "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDP+nm/U4pkLeJS6thRDgxkwqTq80Npq0+58XYybEU50gflf2b/yIHBU25+kvCXSvEkFhPVQbdDJjZaEG4qqgoMdmpDCPP28se70+BXbcf8UoK6JyAoPaO5vGZ5bAG6PVbP1OZ0ldmzMB/eOec1DB/Z8PASInQOfCYPh005fWVuxaIEGlN/EekrDfKp7XY9toquQ0D3arGSSmZpUEeLpu/oZzcNSauiz3fw65mRWkap+hDp/OdkeptXvsSQ/fFTKQwrb+ndida2PvwSARG55e0/8IhKZV6iJKEoWaZW6EHoYSdRqBlUTWHP0HiDIrmxrAS6n50Nc4ofyod/e2htRdTpOhvYM4EDmfbCmlXQmacGJ2qa1GMkCvIRKc/kzVcJHOwbJ1R80V7qXjxlrJ7ok8TyD466ljGt8R3CPu4nhq786RU6don3bDWnLBZueHmQqp7r4hDuLyVbBtgzZMNHzW3pe88KxHfQiRtjWxSnSjExJx1ZOavXyQRuKXe2MKMvqX/bAdSGA4ldtoGgLRnubN0mWbJgTPzkInV/FQgKukCJYzE/RPoLnNiByFa2iQQ2a3ocqJUoJIw9VLXBmjTAvrAort8ieEnUvhkyKb0mPwlcVrB8KA8wlzIhJyWhOl3o9BPOEJspsdEe3DkG2PhHEmzf6fm8s074NFx1zgTAfxhehQ== abcde"
##Use your own SSH key here.
##By default, SSH keys are stored in the $HOME/.ssh directory on Linux.
az vm list -g myResourceGroup -d -o table
Sample Output:
Name            ResourceGroup    PowerState    PublicIps        Fqdns    Location    Zones
--------------  ---------------  ------------  ---------------  -------  ----------  -------
ubuntu-master   myResourceGroup  VM running    104.211.41.74             eastus
ubuntu-worker1  myResourceGroup  VM running    13.92.247.73              eastus
ubuntu-worker2  myResourceGroup  VM running    137.135.126.127           eastus
Here we can see the public IP addresses of each VM. We will need these IP addresses in the next step.
STEP 5
In this step, we will establish an SSH connection to each VM using its public IP address.
Working example:
We will use a bash terminal on an SSH-enabled Linux system and run the ssh command on it to connect to the VMs. We will use 3 instances of this terminal so that we can reach all 3 nodes. The public key used while creating these VMs is $HOME/.ssh/id_rsa.pub, so we pass the matching private key, $HOME/.ssh/id_rsa, to ssh with the -i flag.
ssh -i ~/.ssh/id_rsa master@104.211.41.74
ssh -i ~/.ssh/id_rsa worker1@13.92.247.73
ssh -i ~/.ssh/id_rsa worker2@137.135.126.127
This gives us 3 terminals, one for each of the 3 nodes.
STEP 6
In this step, we will update the apt package lists for all configured repositories and PPAs on all the nodes. Execute the following command on all 3 nodes:
sudo apt-get update
STEP 7
In this step, we will install Docker on all 3 nodes using Ubuntu's docker.io package. We will execute the following command on all 3 nodes:
sudo apt install docker.io -y
We will enable the Docker service on all 3 nodes, so that the daemon starts automatically on boot, with the following command:
sudo systemctl enable docker
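As an optional check that is not part of the original steps, you can confirm on each node that the Docker daemon is installed and running before continuing:
sudo systemctl status docker --no-pager
sudo docker version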
STEP 8
In this step, we are going to add the apt repository for kubeadm. Execute the following commands on all 3 nodes:
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
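After adding the repository, you can optionally refresh the package lists again and confirm that the version pinned in the next step is actually available from the new repository:
sudo apt-get update
apt-cache madison kubeadm | grep 1.16.3-00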
STEP 9
In this step, we are going to install kubeadm, kubectl and kubelet. Execute the following command on all 3 nodes:
sudo apt install -y kubeadm=1.16.3-00 kubectl=1.16.3-00 kubelet=1.16.3-00
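Optionally, and not part of the original steps, you can hold these packages at the pinned version so that a routine apt upgrade does not move the cluster to a different Kubernetes version unexpectedly:
sudo apt-mark hold kubeadm kubectl kubelet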
STEP 10
In this step, we are going to initialize kubeadm. Execute the following command on the master node's shell:
sudo kubeadm init --kubernetes-version=1.16.3
The output will contain the kubeadm join
command. We will need this command to join the worker nodes to the master.
Sample output:
kubeadm join 10.0.0.4:6443 --token wa9zrv.uj0s6esoaulzxs1x \
--discovery-token-ca-cert-hash sha256:342f431eb21e753e7cc14b5a175b882a131824aadbf7694e2dae47a613c1507d
STEP 11
In this step, we will copy the admin.conf file to the user's home directory and change its ownership. This allows us to use the kubectl command without root privileges. Execute the following commands on the master node's terminal:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
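As a quick optional check, kubectl should now work as the non-root user. Note that the master node will report a NotReady status until the CNI plugin is installed in STEP 13:
kubectl get nodes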
STEP 12
In this step, we will execute the kubeadm join
command from the worker nodes. Execute the kubeadm join
command that you received in the output of kubeadm init
. You will need to add sudo
, as it requires root privileges.
Sample command:
sudo kubeadm join 10.0.0.4:6443 --token wa9zrv.uj0s6esoaulzxs1x \
--discovery-token-ca-cert-hash sha256:342f431eb21e753e7cc14b5a175b882a131824aadbf7694e2dae47a613c1507d
Sample output:
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
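If you no longer have the join command printed by kubeadm init (the bootstrap token it contains expires after 24 hours by default), you can generate a fresh one on the master node with:
sudo kubeadm token create --print-join-command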
STEP 13
In this step, we will install the CNI plugin for the cluster. The CNI plugin we will use is Weave Net. Execute the following command in the master node's shell to install the latest version of Weave Net:
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
Sample output:
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.apps/weave-net created
STEP 14
In this step, we will verify that the worker nodes have joined the cluster and that the pods in the kube-system namespace are running. Execute the following commands on the master's terminal:
kubectl get nodes
Sample output:
NAME             STATUS   ROLES    AGE     VERSION
ubuntu-master    Ready    master   4m21s   v1.16.3
ubuntu-worker1   Ready    <none>   73s     v1.16.3
ubuntu-worker2   Ready    <none>   69s     v1.16.3
kubectl get pods -n kube-system
Sample output:
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-5644d7b6d9-57hqv                1/1     Running   0          8m23s
coredns-5644d7b6d9-5b5pk                1/1     Running   0          8m23s
etcd-ubuntu-master                      1/1     Running   0          7m17s
kube-apiserver-ubuntu-master            1/1     Running   0          7m28s
kube-controller-manager-ubuntu-master   1/1     Running   0          7m30s
kube-proxy-24z6h                        1/1     Running   0          5m20s
kube-proxy-4nz6n                        1/1     Running   0          5m25s
kube-proxy-jv99p                        1/1     Running   0          8m23s
kube-scheduler-ubuntu-master            1/1     Running   0          7m16s
weave-net-nzn5g                         2/2     Running   0          5m4s
weave-net-rm9js                         2/2     Running   0          5m4s
weave-net-s49fm                         2/2     Running   0          5m4s
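The ROLES column shows <none> for the worker nodes, which is normal for nodes joined with kubeadm. If you would like a role to be displayed for them, you can optionally label the workers; this is purely cosmetic and not required for the cluster to work:
kubectl label node ubuntu-worker1 node-role.kubernetes.io/worker=worker
kubectl label node ubuntu-worker2 node-role.kubernetes.io/worker=worker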
Note:
- Steps 6-9 are for all the nodes.
- Steps 10, 11, 13 and 14 are to be executed only on the master node.
- Step 12 is to be executed only on the worker nodes.
Note: It is a good idea to execute sudo systemctl enable iscsid && sudo systemctl start iscsid
on all the nodes at the end of the procedure, as iscsid might not be active.