Install Kubernetes k3s

In a few steps you will have a private k3s Kubernetes cluster up and running.


License and product key

Get your trial Orchesto License

Activate a free-of-charge trial license for the product of your choice at

Required hostname for Orchesto

You will need a hostname for Orchesto.

In our example we are going to use

Required VMs

3 nodes

Our example will run the following configuration:

VM Node Kubernetes Role Example IP addresses
Node 1 Master
Node 2 Agent 1
Node 3 Agent 2

Required IPs

The following network requirements need to be satisfied before starting:

  • IP addresses for the 3 nodes
  • A range of 10 free IP addresses for your Kubernetes services

Our example has the following configuration

VM Node Example IP addresses
Node 1
Node 2
Node 3

IP range for Kubernetes services:

Required client software:


Prepare VMs (amd64) for k3s

  • Install Ubuntu 20.04 or newer on all nodes
    • Get IP addresses for your VMs
    • Get hostnames for your VMs
  • Copy your ssh key to all nodes with: ssh-copy-id < User >@< IP address to VM >

Run the following on all nodes to enable passwordless root access (it copies your user's authorized_keys to root's):

$ ssh < User >@< IP address to VM >
$ sudo su
# cp .ssh/authorized_keys ~/.ssh/
# exit
$ exit

Prepare Raspberry PI 4 (arm64) for k3s

  • Download a tool to flash your SD cards
  • Flash all SD cards using Ubuntu 20.04 or later
  • Copy your ssh key to all Raspberry Pis with: ssh-copy-id < User >@< IP address to RPI > (default user: ubuntu, password: ubuntu)
  • Optional (the file might not exist): run ssh < User >@< IP address to RPI > and use your favorite editor to append cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1 to /boot/firmware/cmdline.txt, then save, on all Raspberry Pis
  • Download and install the following package on all Raspberry Pis: wget .deb .....
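The cmdline.txt edit above can also be scripted. A sketch that appends the cgroup flags idempotently, run here against a local demo file so it is safe to dry-run (on a real Raspberry Pi the file is /boot/firmware/cmdline.txt and the edit needs sudo):

```shell
# Demo stand-in for /boot/firmware/cmdline.txt (a single line of boot options).
FILE="cmdline.txt"
printf 'console=serial0,115200 root=LABEL=writable rootfstype=ext4 fixup\n' > "$FILE"

FLAGS='cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1'
# Only append if the flags are not already present, so re-running is harmless.
grep -q 'cgroup_memory=1' "$FILE" || sed -i "s/\$/ $FLAGS/" "$FILE"
cat "$FILE"
```

Because the append is guarded by the grep, the snippet can be run once per boot-config change without duplicating the flags.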

Deploy k3s on all nodes

Now we are going to install k3s.

Installation of the master node

Set the user used to log in to the nodes. We are using root in the following example.

On your computer

$ export NODEUSER="root"

Set the IP address of node 1:

$ export MASTER=

Note that k3sup will default to using ~/.ssh/id_rsa as your key. If you are using some other key you need to update the --ssh-key flag.

$ k3sup install \
  --ip $MASTER \
  --user $NODEUSER \
  --context zebware_k3s \
  --ssh-key "~/.ssh/id_rsa" \
  --k3s-extra-args '--with-node-id --no-deploy servicelb --no-deploy traefik' 

Configure kubectl on your computer with the correct cluster. From the directory you executed the above command, run:

$ export KUBECONFIG=$(pwd)/kubeconfig

Verify that your kubectl configuration is correct

$ kubectl get nodes

Example output

NAME       STATUS   ROLES    AGE   VERSION
<node-1>   Ready    master   18s   v1.19.4+k3s1

Now we have our kubernetes master up and running on node 1 and kubectl is connected to the correct cluster.

Installation of agent nodes

Now we are going to use node 2 and node 3 as agents in our kubernetes cluster.

VM Node Kubernetes Role Example IP addresses
Node 1 Master
Node 2 Agent 1
Node 3 Agent 2

Set the IP addresses of the agent nodes:

$ export AGENT_1=
$ export AGENT_2=
$ k3sup join --ip $AGENT_1 --server-ip $MASTER --user $NODEUSER
$ k3sup join --ip $AGENT_2 --server-ip $MASTER --user $NODEUSER
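With more agents, the join commands can be generated in a loop. A dry-run sketch with placeholder IPs (it only collects and prints the commands; the real run would execute k3sup directly):

```shell
# Placeholder addresses; substitute your actual master and agent IPs.
MASTER="192.168.1.101"
NODEUSER="root"

CMDS=""
for agent in 192.168.1.102 192.168.1.103; do
  # Build one join command per agent, same shape as the commands above.
  CMDS="$CMDS
k3sup join --ip $agent --server-ip $MASTER --user $NODEUSER"
done
echo "$CMDS"
```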

Now we can verify that the agent nodes are running

$ kubectl get nodes

Example output

NAME       STATUS   ROLES    AGE     VERSION
<node-1>   Ready    master   2m31s   v1.19.4+k3s1
<node-2>   Ready    <none>   60s     v1.19.4+k3s1
<node-3>   Ready    <none>   37s     v1.19.4+k3s1

Install a Loadbalancer in the cluster

Continue by deploying MetalLB in your cluster. MetalLB is a load balancer for bare-metal Kubernetes clusters. More information is available here

$ kubectl apply -f
$ kubectl apply -f
$ kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"

Verify that metallb is running. Wait until all pods are in a running state. This might take a while.

$ kubectl get pods -n metallb-system

Example output

NAME                          READY   STATUS    RESTARTS   AGE
speaker-6xq75                 1/1     Running   0          2m
speaker-jvl5x                 1/1     Running   0          2m
speaker-5hfrj                 1/1     Running   0          2m
controller-65db86ddc6-dstls   1/1     Running   0          2m

Layer 2 Configuration

Configure the IP pool to be used for Layer 2. If you are unsure about the address range, ask your network administrator.

$ export IP_POOL=
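MetalLB accepts the pool either as a from-to range or in CIDR notation. A placeholder example (these addresses are assumptions; use free addresses on your own network):

```shell
# Placeholder range; replace with addresses reserved for cluster services.
export IP_POOL="192.168.1.240-192.168.1.250"
echo "$IP_POOL"
```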

Add a MetalLB configuration with your IP_POOL:

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - $IP_POOL
EOF

Verify the ConfigMap

$ kubectl get configmap --namespace metallb-system

Example output

NAME     DATA   AGE
config   1      39s

Install Cert-manager

Cert-Manager is installed to automate certificate management for applications installed into the cluster.

You can read more about cert-manager on the Cert-Manager website.

Check for the latest cert-manager version. We are using v1.1.0 in this guide, which has been verified to work.

$ export CERT_M_VERSION=v1.1.0
$ kubectl create namespace cert-manager
$ helm repo add jetstack
$ helm repo update
$ helm install cert-manager jetstack/cert-manager --namespace cert-manager --version $CERT_M_VERSION --set installCRDs=true

Verify cert-manager

$ kubectl get pods --namespace cert-manager

Example output

NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-6d87886d5c-8x8lg              1/1     Running   0          34s
cert-manager-cainjector-55db655cd8-9zqsx   1/1     Running   0          34s
cert-manager-webhook-6846f844ff-k2p4l      1/1     Running   0          34s

Install Orchesto with our Helm chart

Preparation is now completed and you should have the following:

  1. A license key and a product key
  2. K3s installed and running
  3. kubectl configured to your cluster
  4. helm available on your command line

Download our Helm chart and unzip it.

Update helm dependencies

Open a terminal in the folder where you unzipped it.

Add the ingress-nginx repo:

$ helm repo add ingress-nginx

Update helm chart dependencies:

$ helm dependency update

Configure helm

Create a namespace for orchesto

$ kubectl create namespace orchesto

Set kubecontext to new namespace

$ kubectl config set-context --current --namespace=orchesto

Update the ingress section in your values.yaml from:

  enabled: true
  annotations: "nginx" "true" "true" "HTTPS"
    - host: localhost #changeme
        path: /
    - hosts:
      - localhost #changeme

to the following, replacing localhost with your Orchesto hostname:

  enabled: true
  annotations: "nginx" "true" "true" "HTTPS"
    - host:
        path: /
    - hosts:

Helm install

Install the helm chart.

$ helm install orchesto . --set license=<license key> --set product=<product key>

Verify that the pods are up and running

$ kubectl get pods

Example output

NAME                                                 READY   STATUS    RESTARTS   AGE
orchesto-analysis-6fb9dc97fc-tbc4w                   1/1     Running   0          9m
orchesto-ingress-nginx-controller-5d4d7655b8-55t2j   1/1     Running   0          9m
orchesto-orchesto-helm-67c8bd6497-5kzlh              1/1     Running   2          9m
orchesto-orchesto-helm-67c8bd6497-7snpk              1/1     Running   2          9m
orchesto-orchesto-helm-67c8bd6497-bc79m              1/1     Running   2          9m
orchesto-postgresql-0                                2/2     Running   0          9m
orchesto-zebcache-helm-0                             1/1     Running   0          9m
orchesto-zebcache-helm-1                             1/1     Running   0          9m
orchesto-zebcache-helm-2                             1/1     Running   0          8m

Configure your dns

You can get your service IP by running

$ kubectl get service

Example output

NAME                                          TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
kubernetes                                    ClusterIP        <none>        443/TCP                      44h
orchesto-analysis                             ClusterIP    <none>        9998/TCP                     12s
orchesto-ingress-nginx-controller             LoadBalancer     80:32205/TCP,443:31906/TCP   12s
orchesto-ingress-nginx-controller-admission   ClusterIP      <none>        443/TCP                      12s
orchesto-orchesto-helm                        ClusterIP    <none>        9090/TCP                     12s
orchesto-postgresql                           ClusterIP     <none>        5432/TCP                     12s
orchesto-postgresql-headless                  ClusterIP      None             <none>        5432/TCP                     12s
orchesto-postgresql-metrics                   ClusterIP   <none>        9187/TCP                     12s
orchesto-zebcache-helm                        ClusterIP     <none>        17110/TCP                    12s

Note the EXTERNAL-IP of the orchesto-ingress-nginx-controller service. Point your hostname to this IP in your DNS. Alternatively, you can update your /etc/hosts file:

$ sudo sh -c 'echo "" >>/etc/hosts'
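The /etc/hosts entry pairs the EXTERNAL-IP with your Orchesto hostname, one mapping per line. A sketch with placeholder values, written to a local demo file so it is safe to dry-run (on a real system the target is /etc/hosts and the write needs sudo):

```shell
# Placeholder IP and hostname; substitute your EXTERNAL-IP and Orchesto hostname.
HOSTS_FILE="hosts.example"   # stand-in for /etc/hosts
echo "192.168.1.240 orchesto.example.com" >> "$HOSTS_FILE"
cat "$HOSTS_FILE"
```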

Initializing Orchesto

Once the deployment is done we can go ahead and initialize Orchesto.

Remember to change the hostname to whatever domain you are using.

$ curl -k -X PUT -H "Content-Type: application/json" -d '{"token":"token"}'

Optional: Install Kubernetes Dashboard

Kubernetes Dashboard is a web-based user interface for Kubernetes, used to get an overview of the resources running on your cluster, such as pods, deployments, services, and CPU and memory usage.

Check for the latest Kubernetes Dashboard version. We are using v2.0.0 in this guide.

Deploy Kubernetes Dashboard

$ export KUBE_DASH_VERSION=v2.0.0
$ kubectl apply -f$KUBE_DASH_VERSION/aio/deploy/recommended.yaml

Run the command below to patch the Kubernetes Dashboard service to use a LoadBalancer:

$ kubectl patch service kubernetes-dashboard -n kubernetes-dashboard --patch '{"spec":{"type": "LoadBalancer"}}'

Verify that the dashboard got an IP:

$ kubectl get service --namespace kubernetes-dashboard

Example output

NAME                        TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP    <none>         8000/TCP        63s
kubernetes-dashboard        LoadBalancer   443:31591/TCP   64s

Configure Kubernetes Dashboard

By default, Kubernetes Dashboard has minimal privileges. To access the dashboard with full permissions, create a service account and a cluster role binding with cluster-admin permission.

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-admin-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard
EOF

Find Kubernetes Dashboard IP address

$ kubectl get service -n kubernetes-dashboard

Example output

NAME                        TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP    <none>         8000/TCP        6m5s
kubernetes-dashboard        LoadBalancer   443:31591/TCP   6m6s

Now you should be able to visit the dashboard on the external IP (https://<EXTERNAL-IP>). Use the IP you got.

To log in we need a token. You can get it by running

$ kubectl describe secrets dashboard-admin -n kubernetes-dashboard
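If you only want the token value itself, it can be parsed out of the describe output. A sketch using mocked describe output (the secret name and token here are placeholders; the real output comes from the kubectl command above):

```shell
# Mocked `kubectl describe secret` output; field layout matches the real command.
DESCRIBE_OUT='Name:  dashboard-admin-token-abcde
Type:  kubernetes.io/service-account-token
token:      eyJhbGciOiJSUzI1NiIs.example.placeholder'

# The token line has the form "token:   <value>"; awk picks the second field.
TOKEN=$(printf '%s\n' "$DESCRIBE_OUT" | awk '/^token:/{print $2}')
echo "$TOKEN"
```

In practice you would pipe the real kubectl output into the same awk filter.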

Copy the token to your clipboard and log in to the Kubernetes Dashboard.

Clean up

K3s is easy to uninstall and reinstall.

Master node

$ /usr/local/bin/

Agent nodes

$ /usr/local/bin/