Install Kubernetes k3s

In a few steps you will have a private k3s Kubernetes cluster up and running.

Requirements

License and product key

Get your trial Orchesto License

Activate a free-of-charge trial license for the product of your choice at https://portal.orchesto.io/user/login

Required hostname for Orchesto

You will need a hostname for Orchesto.

In our example we are going to use orchesto.example.com

Required VMs

3 nodes

Our example will run the following configuration:

VM Node   Kubernetes Role   Example IP address
Node 1    Master            192.168.1.100
Node 2    Agent 1           192.168.1.101
Node 3    Agent 2           192.168.1.102

Required IPs

The following network requirements need to be satisfied before starting:

  • IP addresses for the 3 nodes
  • A range of 10 free IP addresses for your Kubernetes services

Our example has the following configuration:

VM Node   Example IP address
Node 1    192.168.1.100
Node 2    192.168.1.101
Node 3    192.168.1.102

IP range for Kubernetes services; our example will use:

192.168.1.240-192.168.1.250

Required client software:

  • k3sup
  • kubectl
  • helm

Preparation

Prepare VMs (amd64) for k3s

  • Install Ubuntu 20.04 or newer on all nodes
    • Get IP addresses for your VMs
    • Get hostnames for your VMs
  • Copy over your ssh key to all nodes with: ssh-copy-id < User >@< IP address of VM >

Run the following on all nodes for passwordless access to root:

$ ssh < User >@< IP address to VM >
$ sudo su
# mkdir -p ~/.ssh
# cp .ssh/authorized_keys ~/.ssh/
# exit
$ exit

Prepare Raspberry PI 4 (arm64) for k3s

  • Download balenaEtcher from etcher.io to flash your SD cards
  • Flash all SD cards with Ubuntu 20.04 or later (arm64)
  • Copy over your ssh key to all Raspberry Pis with: ssh-copy-id < User >@< IP address to RPI > (default user: ubuntu, password: ubuntu)
  • Optional (the file might not exist): On each Raspberry Pi, run ssh < User >@< IP address to RPI >, use your favorite editor to append cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1 to /boot/firmware/cmdline.txt, and save
  • Download and install the following package on all Raspberry Pis: wget .deb .....
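The cmdline.txt edit above can also be scripted. A minimal sketch, assuming the stock Ubuntu image layout; the helper name `append_cgroup_flags` is ours, and you should still check the file by hand afterwards:

```shell
# append_cgroup_flags FILE — append the cgroup flags k3s needs on arm64
# to the kernel command line in FILE, unless they are already present.
# cmdline.txt must stay a single line, so we append with sed rather than echo.
append_cgroup_flags() {
  flags="cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1"
  grep -q "cgroup_memory=1" "$1" || sed -i "s/\$/ $flags/" "$1"
}

# On each Raspberry Pi (as root):
# append_cgroup_flags /boot/firmware/cmdline.txt
```

Running it twice is safe: the grep guard makes the append a no-op once the flags are in place.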

Deploy k3s on all nodes

Now we are going to install k3s.

Installation of the master node

Set the user used to log in to the nodes. We are using root in the following example.

On your computer

$ export NODEUSER="root"

IP address of node 1. We are using 192.168.1.100 in our example.

$ export MASTER=192.168.1.100

Note that k3sup will default to using ~/.ssh/id_rsa as your key. If you are using some other key you need to update the --ssh-key flag.

$ k3sup install \
  --ip $MASTER \
  --user $NODEUSER \
  --context zebware_k3s \
  --ssh-key "~/.ssh/id_rsa" \
  --k3s-extra-args '--with-node-id --no-deploy servicelb --no-deploy traefik' 

Configure kubectl on your computer with the correct cluster. From the directory you executed the above command, run:

$ export KUBECONFIG=$(pwd)/kubeconfig

Verify that your kubectl configuration is correct

$ kubectl get nodes

Example output

NAME                                   STATUS   ROLES    AGE   VERSION
test-lab-14.int.zebware.com-a3c7ecd6   Ready    master   18s   v1.19.4+k3s1

Now we have our Kubernetes master up and running on node 1, and kubectl is connected to the correct cluster.

Installation of agent nodes

Now we are going to use node 2 and node 3 as agents in our Kubernetes cluster.

VM Node   Kubernetes Role   Example IP address
Node 1    Master            192.168.1.100
Node 2    Agent 1           192.168.1.101
Node 3    Agent 2           192.168.1.102

We are using 192.168.1.101 and 192.168.1.102 for the agents in our example.

$ export AGENT_1=192.168.1.101
$ export AGENT_2=192.168.1.102
$ k3sup join --ip $AGENT_1 --server-ip $MASTER --user $NODEUSER
$ k3sup join --ip $AGENT_2 --server-ip $MASTER --user $NODEUSER

Now we can verify that the agent nodes are running

$ kubectl get nodes

Example output

NAME                                   STATUS   ROLES    AGE     VERSION
test-lab-14.int.zebware.com-a3c7ecd6   Ready    master   2m31s   v1.19.4+k3s1
test-lab-15.int.zebware.com            Ready    <none>   60s     v1.19.4+k3s1
test-lab-16.int.zebware.com            Ready    <none>   37s     v1.19.4+k3s1

Install a Loadbalancer in the cluster

Continue by deploying MetalLB in your cluster. MetalLB is a load-balancer implementation for bare-metal Kubernetes clusters. More information is available at https://metallb.universe.tf

$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/namespace.yaml
$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/metallb.yaml
$ kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"

Verify that MetalLB is running. Wait until all pods are in the Running state; this might take a while.

$ kubectl get pods -n metallb-system

Example output

NAME                          READY   STATUS    RESTARTS   AGE
speaker-6xq75                 1/1     Running   0          2m
speaker-jvl5x                 1/1     Running   0          2m
speaker-5hfrj                 1/1     Running   0          2m
controller-65db86ddc6-dstls   1/1     Running   0          2m

Layer 2 Configuration

Configure your IP pool to be used for Layer 2. If you are unsure about the address range, ask your network administrator. We are using the range 192.168.1.240 to 192.168.1.250 in our example.

$ export IP_POOL=192.168.1.240-192.168.1.250

Add a MetalLB configuration with your IP_POOL

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - $IP_POOL
EOF

Verify the ConfigMap

$ kubectl get configmap --namespace metallb-system

Example output

NAME     DATA   AGE
config   1      39s

Install Cert-manager

cert-manager is installed to automate certificate management for applications installed into the cluster.

You can read more about cert-manager on their website: https://cert-manager.io

Check for the latest cert-manager version. We are using v1.1.0 in this guide, which has been verified to work.

$ export CERT_M_VERSION=v1.1.0
$ kubectl create namespace cert-manager
$ helm repo add jetstack https://charts.jetstack.io
$ helm repo update
$ helm install cert-manager jetstack/cert-manager --namespace cert-manager --version $CERT_M_VERSION --set installCRDs=true

Verify cert-manager

$ kubectl get pods --namespace cert-manager

Example output

NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-6d87886d5c-8x8lg              1/1     Running   0          34s
cert-manager-cainjector-55db655cd8-9zqsx   1/1     Running   0          34s
cert-manager-webhook-6846f844ff-k2p4l      1/1     Running   0          34s

Install Orchesto with our Helm chart

Preparation is now completed and you should have the following:

  1. A license key and a product key
  2. K3s installed and running
  3. kubectl configured to your cluster
  4. helm available on your command line
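A quick way to confirm the client tools this guide relies on are available is to probe for them on your PATH. A small sketch; `check_tools` is just an illustrative helper name:

```shell
# check_tools NAME... — report which of the given commands are on PATH.
check_tools() {
  for tool in "$@"; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "ok: $tool"
    else
      echo "missing: $tool"
    fi
  done
}

# check_tools k3sup kubectl helm
```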

Download our Helm chart and unzip it.

Update helm dependencies

Open a terminal in the folder where you unzipped the chart.

Add the ingress-nginx repo:

$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx

Update helm chart dependencies:

$ helm dependency update

Configure helm

Create a namespace for orchesto

$ kubectl create namespace orchesto

Set your kube context to the new namespace

$ kubectl config set-context --current --namespace=orchesto

Update the ingress section in your values.yaml from:

ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  hosts:
    - host: localhost #changeme
      paths:
        path: /
  tls:
    - hosts:
      - localhost #changeme

to the following. We are using orchesto.example.com here

ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  hosts:
    - host: orchesto.example.com
      paths:
        path: /
  tls:
    - hosts:
      - orchesto.example.com

Helm install

Install the helm chart.

$ helm install orchesto . --set license=<license key> --set product=<product key>

Verify that the pods are up and running

$ kubectl get pods

Example output

NAME                                                 READY   STATUS    RESTARTS   AGE
orchesto-analysis-6fb9dc97fc-tbc4w                   1/1     Running   0          9m
orchesto-ingress-nginx-controller-5d4d7655b8-55t2j   1/1     Running   0          9m
orchesto-orchesto-helm-67c8bd6497-5kzlh              1/1     Running   2          9m
orchesto-orchesto-helm-67c8bd6497-7snpk              1/1     Running   2          9m
orchesto-orchesto-helm-67c8bd6497-bc79m              1/1     Running   2          9m
orchesto-postgresql-0                                2/2     Running   0          9m
orchesto-zebcache-helm-0                             1/1     Running   0          9m
orchesto-zebcache-helm-1                             1/1     Running   0          9m
orchesto-zebcache-helm-2                             1/1     Running   0          8m

Configure your DNS

You can get your service IP by running

$ kubectl get service

Example output

NAME                                          TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
kubernetes                                    ClusterIP      10.96.0.1        <none>        443/TCP                      44h
orchesto-analysis                             ClusterIP      10.111.30.188    <none>        9998/TCP                     12s
orchesto-ingress-nginx-controller             LoadBalancer   10.111.78.167    192.168.1.242 80:32205/TCP,443:31906/TCP   12s
orchesto-ingress-nginx-controller-admission   ClusterIP      10.98.43.47      <none>        443/TCP                      12s
orchesto-orchesto-helm                        ClusterIP      10.110.57.118    <none>        9090/TCP                     12s
orchesto-postgresql                           ClusterIP      10.100.13.97     <none>        5432/TCP                     12s
orchesto-postgresql-headless                  ClusterIP      None             <none>        5432/TCP                     12s
orchesto-postgresql-metrics                   ClusterIP      10.106.242.116   <none>        9187/TCP                     12s
orchesto-zebcache-helm                        ClusterIP      10.99.140.28     <none>        17110/TCP                    12s

We got 192.168.1.242 as our EXTERNAL-IP in our example. Point your hostname to this IP in your DNS. Alternatively, you can update your /etc/hosts file:

$ sudo sh -c 'echo "192.168.1.242 orchesto.example.com" >>/etc/hosts'
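Note that the echo above appends unconditionally, so running it twice leaves duplicate lines. If you script this step, a guarded version is safer; a sketch, where the helper name `add_host_entry` is ours:

```shell
# add_host_entry IP HOSTNAME [FILE] — append "IP HOSTNAME" to FILE
# (default /etc/hosts) unless HOSTNAME already has an entry there.
add_host_entry() {
  file="${3:-/etc/hosts}"
  grep -qw "$2" "$file" || printf '%s %s\n' "$1" "$2" >> "$file"
}

# Usage (as root): add_host_entry 192.168.1.242 orchesto.example.com
```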

Initializing Orchesto

Once the deployment is done we can go ahead and initialize Orchesto.

Remember to change orchesto.example.com to whatever domain you are using.

$ curl -k -X PUT -H "Content-Type: application/json" -d '{"token":"token"}' https://orchesto.example.com/orchesto/api/v1/admin/init

Optional: Install Kubernetes Dashboard

Kubernetes Dashboard is a web-based user interface for Kubernetes, used to get an overview of the resources running on your cluster, such as pods, deployments, services, and CPU and memory usage.

Check for the latest Kubernetes Dashboard version

Deploy Kubernetes Dashboard

$ export KUBE_DASH_VERSION=v2.0.0
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/$KUBE_DASH_VERSION/aio/deploy/recommended.yaml

Run the command below to patch the Kubernetes Dashboard service to use a LoadBalancer

$ kubectl patch service kubernetes-dashboard -n kubernetes-dashboard --patch '{"spec":{"type": "LoadBalancer"}}'

Verify that our dashboard got an IP

$ kubectl get service --namespace kubernetes-dashboard

Example output

NAME                        TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP      10.43.224.95    <none>         8000/TCP        63s
kubernetes-dashboard        LoadBalancer   10.43.255.119   192.168.1.241   443:31591/TCP   64s

Configure Kubernetes Dashboard

By default, Kubernetes Dashboard has minimal privileges. To access the dashboard with full permissions, create a service account and a cluster role binding with the cluster-admin role.

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-admin-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard
EOF

Find Kubernetes Dashboard IP address

$ kubectl get service -n kubernetes-dashboard

Example output

NAME                        TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP      10.43.224.95    <none>         8000/TCP        6m5s
kubernetes-dashboard        LoadBalancer   10.43.255.119   192.168.1.241   443:31591/TCP   6m6s

Now you should be able to visit the dashboard on the external ip. In our example we got 192.168.1.241 so we'll visit https://192.168.1.241. Use the IP you got.

To log in we need a token. You can get it by running

$ kubectl describe secrets dashboard-admin -n kubernetes-dashboard

Copy the token to your clipboard and log in to the Kubernetes Dashboard.

Clean up

K3s is easy to uninstall if you want to start over. Run the scripts below on the respective nodes.

Master node

$ /usr/local/bin/k3s-uninstall.sh

Agent nodes

$ /usr/local/bin/k3s-agent-uninstall.sh