What is TLS?
Transport Layer Security, or TLS, is a widely adopted security protocol designed to facilitate privacy and data security for communications over the Internet. A primary use case of TLS is encrypting the communication between web applications and servers, such as web browsers loading a website. Configuring TLS for Kubera OnPrem therefore secures the communication between users and the Kubera OnPrem setup.
Steps to configure TLS for Kubera OnPrem
NOTE: The steps below assume that a Kubera OnPrem setup is already present on the cluster.
- It is recommended to map your domain first, as this can be time consuming. Mapping means binding the node IP to your domain URL. It can be carried out through various domain registrars such as Namecheap.com, Bluehost.com, HostGator.com, etc. The time required to map depends on the domain registrar used. To get the IP of the node hosting Kubera OnPrem, execute:
kubectl get ing -n <kubera_namespace>
Ping your domain to ensure that the mapping has been carried out properly:
ping <domain name>
Sample output:
64 bytes from 33.35.232.35.bc.googleusercontent.com (35.232.35.33): icmp_seq=2 ttl=44 time=185 ms
64 bytes from 33.35.232.35.bc.googleusercontent.com (35.232.35.33): icmp_seq=3 ttl=44 time=185 ms
64 bytes from 33.35.232.35.bc.googleusercontent.com (35.232.35.33): icmp_seq=4 ttl=44 time=185 ms
...
- Initially, the service type for "ingress-nginx" is NodePort. To verify this, execute the following command:
kubectl get svc -n ingress-nginx
Output:
NAME                   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
default-http-backend   ClusterIP   10.84.6.220    <none>        80/TCP                       8m52s
ingress-nginx          NodePort    10.84.13.145   <none>        80:30380/TCP,443:30381/TCP   8m52s
Next, you need to apply the ingress-service YAML given below, which changes the service type to LoadBalancer. Copy the below code into a YAML file, say, ingress-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
Now, apply the above YAML. To apply, execute:
kubectl apply -f ingress-service.yaml
To ensure that the above steps have been executed correctly, check the service details; the service type should now be LoadBalancer.
To get details of the services, execute:
kubectl get svc -n ingress-nginx
Output:
NAME                   TYPE           CLUSTER-IP     CLUSTER-IP     PORT(S)                      AGE
default-http-backend   ClusterIP      10.0.110.243   <none>         80/TCP                       14m
ingress-nginx          LoadBalancer   10.0.105.208   35.232.35.33   80:30380/TCP,443:30381/TCP   14m
To get details of the ingresses, execute:
kubectl get ing -n <kubera_namespace>
Output:
NAME                         HOSTS   ADDRESS        PORTS     AGE
maya-auth-ingress            *       35.232.35.33   80, 443   9m47s
maya-elasticsearch-ingress   *       35.232.35.33   80, 443   9m47s
maya-nginx-ingress           *       35.232.35.33   80, 443   9m47s
- Now, cert-manager is to be deployed. cert-manager is a Kubernetes add-on that automates the management and issuance of TLS certificates from various issuing sources. It is recommended to deploy cert-manager in the "cert-manager" namespace. If you need to deploy it in another namespace, the issuer.yaml (in step 8) must be customised accordingly.
To create the cert-manager namespace, execute:
kubectl create namespace cert-manager
Next, we need to deploy cert-manager. Before deploying, ensure you have admin access over the cluster. If not, execute the following command (shown here for a GKE cluster):
kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=$(gcloud config get-value core/account)
To deploy cert-manager, execute the following:
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.12.0/cert-manager.yaml
Note: If you are running Kubernetes v1.15 or below, you need to add the --validate=false flag to your kubectl apply command; otherwise you will receive a validation error relating to the x-kubernetes-preserve-unknown-fields field in cert-manager's CustomResourceDefinition resources.
Ensure all the pods in the cert-manager namespace are running. To get the list of pods, execute:
kubectl get pods -n cert-manager
Output:
NAME                                      READY   STATUS    RESTARTS   AGE
cert-manager-7df5c67dfc-tsswf             1/1     Running   0          20s
cert-manager-cainjector-f54c57bf8-r6rgl   1/1     Running   0          21s
cert-manager-webhook-76794c6967-smr4n     1/1     Running   0          19s
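Optionally, you can sanity-check the deployment by issuing a self-signed test certificate, a technique adapted from cert-manager's own verification docs. The sketch below assumes a throwaway namespace cert-manager-test (create it first with kubectl create namespace cert-manager-test); the Issuer and Certificate names and example.com are illustrative placeholders:

```yaml
apiVersion: cert-manager.io/v1alpha2
kind: Issuer
metadata:
  name: test-selfsigned
  namespace: cert-manager-test
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: selfsigned-cert
  namespace: cert-manager-test
spec:
  dnsNames:
  - example.com
  secretName: selfsigned-cert-tls
  issuerRef:
    name: test-selfsigned
```

Apply it with kubectl apply, then run kubectl describe certificate -n cert-manager-test and look for a Ready condition; delete the cert-manager-test namespace afterwards.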
- All the ingresses have to be patched with the hostname. Specify the domain URL that is mapped to the IP of the node where Kubera OnPrem is deployed. To get the list of ingresses, execute:
kubectl get ing -n <kubera_namespace>
Output:
NAME                         HOSTS   ADDRESS        PORTS     AGE
maya-auth-ingress            *       35.232.35.33   80, 443   17m
maya-elasticsearch-ingress   *       35.232.35.33   80, 443   17m
maya-nginx-ingress           *       35.232.35.33   80, 443   17m
All the above ingresses have to be patched with your domain URL (say, director.mayadata.io), as shown below:
spec:
  rules:
  - host: director.mayadata.io
To open an ingress in editable format, execute:
kubectl edit ing -n <kubera_namespace> <name_of_ingress>
Once these changes have been made, execute the following to verify that the host name has been updated:
kubectl get ing -n <kubera_namespace>
The output must now display the host name that has been patched to each ingress. Sample output:
NAME                         HOSTS                     ADDRESS        PORTS     AGE
maya-auth-ingress            bare.mayadatastaging.io   35.232.35.33   80, 443   4h28m
maya-elasticsearch-ingress   bare.mayadatastaging.io   35.232.35.33   80, 443   4h28m
maya-nginx-ingress           bare.mayadatastaging.io   35.232.35.33   80, 443   4h28m
- Next, verify that all the pods in the ingress-nginx namespace are in the Running state. It might take up to a few minutes for these pods to reach the Running state.
To get the list of pods, execute:
kubectl get pods -n ingress-nginx
Output:
NAME                                        READY   STATUS    RESTARTS   AGE
default-http-backend-5d4f569658-f7f62       1/1     Running   0          4h36m
nginx-ingress-controller-84f576597c-rtnxv   1/1     Running   0          4h36m
- Patch each ingress with tls under spec, as shown below:
spec:
  tls:
  - hosts:
    - <host_name>
    secretName: <secret_name>
- Add cert-manager.io/cluster-issuer: letsencrypt-prod under annotations in each ingress.
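Putting the host, tls, and annotation edits together, the relevant portions of a patched ingress might look like the following sketch. The ingress name, domain, and secret name (director-tls) are illustrative placeholders; leave the rest of each ingress unchanged:

```yaml
metadata:
  name: maya-auth-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  rules:
  - host: director.mayadata.io
    # existing http paths for this ingress remain unchanged
  tls:
  - hosts:
    - director.mayadata.io
    secretName: director-tls
```

cert-manager watches ingresses carrying the cluster-issuer annotation and stores the issued certificate in the Secret named by secretName.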
- Next, create an issuer using the below YAML. Copy the below YAML into a file, say, issuer.yaml:
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
  namespace: cert-manager
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: xyz.abc@gmail.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: nginx
To apply the above YAML, execute:
kubectl apply -f <yaml_name>
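Let's Encrypt enforces strict rate limits on its production endpoint, so while you are still testing the setup you may prefer to point at its staging server first. A minimal sketch, assuming the same namespace conventions as above (the name letsencrypt-staging is an illustrative choice; the server URL is Let's Encrypt's documented staging endpoint):

```yaml
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # Let's Encrypt staging endpoint: high rate limits, but certificates
    # issued here are NOT trusted by browsers
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: xyz.abc@gmail.com
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
    - http01:
        ingress:
          class: nginx
```

If you use this, reference letsencrypt-staging in the cert-manager.io/cluster-issuer annotation while testing, and switch back to letsencrypt-prod once the issuance flow is verified.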
- Next, edit the maya and grafana configmaps. To get the list of configmaps, execute:
kubectl get cm -n <kubera_namespace>
Sample output:
NAME                           DATA   AGE
elastalert-config              1      5h24m
elastalert-rules               2      5h24m
elasticsearch-curator-action   1      5h18m
elasticsearch-curator-config   1      5h18m
elasticsearch-internalusers    1      5h24m
maya-config                    1      5h24m
maya-grafana-cfgmap            1      5h24m
mysql                          2      5h24m
From the configmaps displayed above, maya-config and maya-grafana-cfgmap need to be edited.
For maya-config, every place where the IP address appears must be replaced with https://<domain_name>. To open the maya-config configmap in editable format, execute:
kubectl edit cm -n <kubera_namespace> maya-config
Next, edit the grafana configmap. Replace the IP present under grafana.ini with your domain name (e.g. director.mayadata.io). To open it in editable format, execute:
kubectl edit cm -n <kubera_namespace> maya-grafana-cfgmap
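For reference, the IP typically shows up in Grafana's standard [server] settings inside grafana.ini, such as domain and root_url. The fragment below uses the key names from stock Grafana server configuration with illustrative values; the exact contents and paths in maya-grafana-cfgmap may differ:

```ini
[server]
# Replace the node IP with your mapped domain (values are illustrative)
domain = director.mayadata.io
root_url = https://director.mayadata.io/
```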
- To apply the configmap changes made in the previous step, the maya-io and maya-grafana pods have to be restarted. To restart, delete the pods so that they are recreated. Execute the following command for each pod:
kubectl delete pod -n <kubera_namespace> <pod_name>
- Now, you can access your Kubera OnPrem setup using your domain URL (in this example, director.mayadata.io) in your browser.
Users might face issues with Grafana that result in unavailability of metrics on the dashboard; if so, please contact our support.