This is a step-by-step guide for setting up Director OnPrem on a Rancher-managed Kubernetes cluster. We will be using Google's GCP for our nodes.
- This guide works well with Kubernetes v1.16.3, Rancher v2.3.3, OpenEBS Director OnPrem v1.7.0.
- We will be using four Ubuntu 18.04 LTS nodes for the cluster.
Before you begin...
- You will need four nodes with Ubuntu 18.04 installed. We will be using GCP VM instances, which you can create from the GCP Console at cloud.google.com.
- You will need to add firewall rules such that ports 22, 80, and 443 are open for incoming TCP network traffic (see the example after this list).
- You will need to be able to access the VMs via SSH from the GCP Console or any other terminal.
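If you are using GCP, the firewall rule can also be created with the gcloud CLI. The following is a minimal sketch; the rule name (rancher-nodes) and the default network are assumptions, so adjust them to your own project:
gcloud compute firewall-rules create rancher-nodes --network default --direction INGRESS --allow tcp:22,tcp:80,tcp:443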
Installing Rancher server
Follow the steps below to install the Rancher server on one of the nodes.
STEP 1
In this step we will install Docker CE on all of the nodes. Execute the following commands on all four nodes to add the Docker repository.
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
Execute the following command on all of the nodes to update all package lists:
sudo apt-get update
Execute the following command on any one of the nodes to see all the available versions of docker-ce:
apt-cache policy docker-ce
Sample output:
docker-ce:
Installed: (none)
Candidate: 5:19.03.5~3-0~ubuntu-bionic
Version table:
5:19.03.5~3-0~ubuntu-bionic 500
500 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
5:19.03.4~3-0~ubuntu-bionic 500
500 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
5:19.03.3~3-0~ubuntu-bionic 500
500 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
5:19.03.2~3-0~ubuntu-bionic 500
500 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
5:19.03.1~3-0~ubuntu-bionic 500
500 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
5:19.03.0~3-0~ubuntu-bionic 500
500 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
5:18.09.9~3-0~ubuntu-bionic 500
500 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
5:18.09.8~3-0~ubuntu-bionic 500
500 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
5:18.09.7~3-0~ubuntu-bionic 500
500 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
5:18.09.6~3-0~ubuntu-bionic 500
500 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
5:18.09.5~3-0~ubuntu-bionic 500
500 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
5:18.09.4~3-0~ubuntu-bionic 500
500 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
5:18.09.3~3-0~ubuntu-bionic 500
500 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
5:18.09.2~3-0~ubuntu-bionic 500
500 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
5:18.09.1~3-0~ubuntu-bionic 500
500 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
5:18.09.0~3-0~ubuntu-bionic 500
500 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
18.06.3~ce~3-0~ubuntu 500
500 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
18.06.2~ce~3-0~ubuntu 500
500 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
18.06.1~ce~3-0~ubuntu 500
500 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
18.06.0~ce~3-0~ubuntu 500
500 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
18.03.1~ce~3-0~ubuntu 500
500 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
Note: To learn about the supported Docker versions for your version of Rancher 2.x, see the Rancher 2.x support matrix in the Rancher documentation.
Execute the following command on all of the nodes to install docker-ce:
sudo apt-get install -y docker-ce=5:19.03.5~3-0~ubuntu-bionic
To verify that the Docker service is running, execute the following command on all of the nodes:
systemctl status docker.service
Sample output:
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: e
Active: active (running) since Fri 2019-12-27 19:15:46 IST; 26s ago
Docs: https://docs.docker.com
Main PID: 5180 (dockerd)
Tasks: 28
CGroup: /system.slice/docker.service
├─5180 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/contain
├─5434 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 4
└─5448 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 8
Dec 27 19:15:39 niladri-Latitude-3560 dockerd[5180]: time="2019-12-27T19:15:39.4
Dec 27 19:15:39 niladri-Latitude-3560 dockerd[5180]: time="2019-12-27T19:15:39.4
Dec 27 19:15:39 niladri-Latitude-3560 dockerd[5180]: time="2019-12-27T19:15:39.4
Dec 27 19:15:39 niladri-Latitude-3560 dockerd[5180]: time="2019-12-27T19:15:39.4
Dec 27 19:15:40 niladri-Latitude-3560 dockerd[5180]: time="2019-12-27T19:15:40.4
Dec 27 19:15:43 niladri-Latitude-3560 dockerd[5180]: time="2019-12-27T19:15:43.4
Dec 27 19:15:45 niladri-Latitude-3560 dockerd[5180]: time="2019-12-27T19:15:45.0
Dec 27 19:15:45 niladri-Latitude-3560 dockerd[5180]: time="2019-12-27T19:15:45.2
Dec 27 19:15:46 niladri-Latitude-3560 systemd[1]: Started Docker Application Con
Dec 27 19:15:46 niladri-Latitude-3560 dockerd[5180]: time="2019-12-27T19:15:46.1
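Since Rancher supports only specific Docker versions, you may optionally hold the docker-ce package at the installed version so that routine apt upgrades do not move it. This is an optional precaution, not a required step:
sudo apt-mark hold docker-ce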
STEP 2
In this step we will deploy the Rancher server as a Docker container on one of the nodes. We will use this node as the master and etcd node of our Rancher cluster.
Execute the following command on any one of the nodes to run the rancher server container:
sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher:v2.3.3
You can verify that the Rancher server is running by opening your web browser and going to the IP address/domain name of the node.
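Note that the command above keeps Rancher's state inside the container. If you want the server's data to survive the container being removed and re-created, the Rancher single-node install docs suggest bind-mounting a host directory; the host path below is an arbitrary choice:
sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 -v /opt/rancher:/var/lib/rancher rancher/rancher:v2.3.3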

Creating RKE cluster
STEP 1
Click on the 'Add Cluster' option.

STEP 2
Click on the 'From existing nodes (Custom)' option.

STEP 3
Fill in the details in the next page.

STEP 4
Make sure you have SSH sessions open to the nodes. Select the appropriate roles, copy the generated command, and run it in each node's shell. We will be using our master node as the Control Plane and etcd node; the rest of the nodes will get the Worker role.
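For reference, the command generated by the Rancher UI looks roughly like the one below; the server URL, token, and checksum are placeholders that your own Rancher server generates, and only the trailing role flags change per node:
sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.3.3 --server https://<rancher-server-ip> --token <token> --ca-checksum <checksum> --etcd --controlplane
On the worker nodes, the same command ends with --worker instead of --etcd --controlplane.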

Installing OpenEBS
STEP 1
We will need to add some extra binds to the kubelet. Click on the Edit option from the drop-down list as shown.

STEP 2
We will add the following lines under services.kubelet:
extra_binds:
  - /var/openebs/local:/var/openebs/local
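In context, the edited section of the cluster YAML looks like the following (other kubelet settings, if any, stay as they are):
services:
  kubelet:
    extra_binds:
      - /var/openebs/local:/var/openebs/local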

Adding the extra binds...

STEP 3
Execute the following commands as root on the shell of all of the nodes; they load the iscsi_tcp kernel module and make it load again on reboot:
modprobe iscsi_tcp
echo iscsi_tcp >/etc/modules-load.d/iscsi-tcp.conf
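You can confirm that the module is loaded with:
lsmod | grep iscsi_tcp
OpenEBS also relies on the iSCSI initiator on each node. On Ubuntu 18.04 you can make sure it is installed and running with the following (GCP Ubuntu images often ship open-iscsi already, so this may be a no-op):
sudo apt-get install -y open-iscsi
sudo systemctl enable --now iscsid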
STEP 4
Click on the 'Default' project as shown in the image below.

STEP 5
Click on the 'Apps' option.

STEP 6
Click on the 'Launch' option.

STEP 7
Click on the OpenEBS app from among the many available apps.

STEP 8
In the next page, default options for OpenEBS will already be configured. You may proceed using those, or you can make changes according to your preferences.

STEP 9
Once installation completes, you can check if the components have been installed by clicking on Resources --> Workloads.
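If you prefer the command line, the same check can be done with kubectl; the chart installs its components into the openebs namespace by default, so the namespace below assumes you kept that default:
kubectl get pods -n openebs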

Installing Helm
We will be installing Helm v3.x on the master node, since we are going to install Director OnPrem using Helm charts.
We will use the official get-helm-3 script to install the latest version of Helm 3. Execute the following command on the master node's bash shell:
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
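If you would rather inspect the script before piping it to bash, the Helm docs suggest downloading it first and then running it:
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh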
To verify that helm has been installed, execute the following command:
helm version
Sample output:
version.BuildInfo{Version:"v3.0.2", GitCommit:"19e47ee3283ae98139d98460de796c1be1e3975f", GitTreeState:"clean", GoVersion:"go1.13.5"}
Installing kubectl
We will install kubectl on the master node. This makes it easier to install Director OnPrem.
STEP 1
First, we download the version of the kubectl binary that corresponds to the Kubernetes version of our RKE cluster. We have used v1.16.3, so we will be using kubectl v1.16.3. Execute the following command to download the binary:
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.16.3/bin/linux/amd64/kubectl
STEP 2
Next, we will make the kubectl binary executable. Execute the following command:
chmod +x ./kubectl
STEP 3
Next, we move the binary into the /usr/local/bin directory. Execute the following command:
sudo mv ./kubectl /usr/local/bin/kubectl
STEP 4
We will create a directory for the kubeconfig file. Execute the following command:
mkdir -p ~/.kube
We will copy the contents of the cluster's kubeconfig file (available in the Rancher UI via the Kubeconfig File option on the cluster page) and write it to the ~/.kube/config file:
vi ~/.kube/config
We can verify the installation by executing the following command:
kubectl version
Sample output:
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:23:11Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:13:49Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
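You can also confirm that kubectl can reach the RKE cluster by listing the nodes; the names, roles, and ages in the output will reflect your own VMs:
kubectl get nodes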
Installing Director OnPrem
STEP 1
If you are an existing user, log in to the MayaData user portal with your credentials; otherwise, sign up on the MayaData user portal, which walks you through the sign-up steps.
STEP 2
After logging in, you will be redirected to the User Portal dashboard. Click the Download OnPrem button under the Director section of the dashboard (encircled in the image).
STEP 3
You will then receive an email from MayaData Inc containing Docker repository credentials. These will be needed in the later steps of the installation.

STEP 4
We will SSH into the master node; the following commands are executed on the master node's shell.
We will clone the director-charts repo to install Director OnPrem using Helm. Execute the following command:
git clone https://github.com/mayadata-io/director-charts.git
STEP 5
Create a docker secret with the provided credentials using the following command:
kubectl create secret docker-registry <secretname> --docker-server=registry.mayadata.io --docker-username=<username> --docker-password=<password> -n <custom_namespace>
example:
kubectl create ns director
kubectl create secret docker-registry mayadatasecret --docker-server=registry.mayadata.io --docker-username=mayaonprem@mayadata.io --docker-password=qD6EtVNxeol -n director
- secretname is the name of the docker secret; you can choose any name for it.
- username and password are the Docker repository credentials provided by MayaData (received by email in STEP 3, or in the popup when an evaluation version was requested).
- custom_namespace is the namespace in which the docker secret is to be created.
To verify the creation of the secret, execute the following command:
kubectl get secret -n <namespace>
Here, since the namespace is director:
kubectl get secret -n director
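If you want to double-check the registry credentials stored in the secret, you can decode it; the secret name below matches the earlier example:
kubectl get secret mayadatasecret -n director -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d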
[Optional] Only if GitHub credentials are to be used for Director OnPrem:
If you want to use GitHub authentication for Director OnPrem, create a GitHub OAuth application and keep the client ID, client secret, and application ID ready. Registering a GitHub OAuth application is described in GitHub's documentation on creating an OAuth app. For the homepage and authorization callback URLs, provide the URL in http://<Node External IP>:<Port> format (for example, http://35.232.101.174:80).
STEP 6
Ensure that you are inside the director-charts/1.7.0 directory and add details to the required parameters in values.yaml.
STEP 7
Install Director OnPrem using the following command (ensure that you are inside the director-charts/1.7.0/ directory):
helm install <release-name> . --set server.url=<Server IP>:80 --set nginx-ingress.controller.kind=Deployment --set nginx-ingress.controller.service.enabled=false
example:
helm install director . --set server.url=147.75.84.137:30380 --set nginx-ingress.controller.kind=Deployment --set nginx-ingress.controller.service.enabled=false
Note: The helm release must be installed into the same namespace that was used for the docker secret; with Helm 3 you can pass it explicitly with -n <namespace>.
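For instance, to install into the director namespace created earlier, append the namespace flag to the command above (a sketch reusing the example values; adjust the server URL to your own node):
helm install director . -n director --set server.url=147.75.84.137:30380 --set nginx-ingress.controller.kind=Deployment --set nginx-ingress.controller.service.enabled=false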
STEP 8
Verify that the Director OnPrem pods are running using the following command. The pods will be running in the namespace provided at install time.
kubectl get pod -n <namespace>
You can now access the Director OnPrem dashboard at the server URL configured in values.yaml, of the format http://<Node External IP> (for example, http://35.232.101.174:80).
NOTE: If you opted for local-auth, sign in with username/email "Administrator" and the default password "password".
If the configs-db pod fails to start because its Postgres container refuses to run without a password (a behavior of newer postgres images), the following patch sets POSTGRES_HOST_AUTH_METHOD to trust so that the container can start:
kubectl -n <director-namespace> patch deployment configs-db --patch '{"spec": {"template": {"spec": {"containers": [{"name": "configs-db","env": [{"name": "POSTGRES_HOST_AUTH_METHOD","value": "trust"}]}]}}}}'