You need a domain, as Gitpod requires three DNS entries: example.com, *.example.com, and *.ws.example.com.
sudo snap install microk8s --classic --channel=1.25/stable

To start the node, run sudo microk8s start.
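Before continuing, it's worth waiting until the node is actually up; a quick check with standard microk8s commands:

# Block until the node reports ready, then confirm its status.
sudo microk8s status --wait-ready
microk8s kubectl get nodes   # the node should show STATUS Ready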
Gitpod also has some requirements for the Kubernetes cluster; see the Gitpod self-hosted documentation for more details.
We can assign those labels to the node. The number of nodes is up to you; in this guide, I use a single microk8s node.
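As a sketch, assuming the workload labels the Gitpod self-hosted docs listed at the time of writing (verify the exact keys for your Gitpod version), labeling a single node could look like:

# Hypothetical label set for a single-node cluster; check the Gitpod docs
# for the labels your version actually requires.
microk8s kubectl label node <your-node-name> \
  gitpod.io/workload_meta=true \
  gitpod.io/workload_ide=true \
  gitpod.io/workload_workspace_services=true \
  gitpod.io/workload_workspace_regular=true \
  gitpod.io/workload_workspace_headless=true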
The generic form is:
microk8s kubectl label nodes <your-node-name> <key=value>

Next, install KOTS and use it to install Gitpod:
curl https://kots.io/install | bash
microk8s kubectl kots install gitpod

The namespace is gitpod by default.
Enter an admin password (remember it!).
Enter your domain (e.g. example.com).
In the cert configuration, if you are familiar with cert-manager, you can use it.
In this guide, I will use a certificate generated manually with certbot:
certbot certonly \
--manual --agree-tos --preferred-challenges=dns \
--email=<your-email> \
-d <example.com> -d <*.example.com> -d <*.ws.example.com>

The certificate files that certbot generates need root privileges to read, but you can sudo cat them and copy the contents to another file. Keep in mind that they must be kept SAFE.
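For example, a minimal way to copy them out while keeping permissions tight (the paths assume certbot's default live directory for your domain):

# Copy the cert and key somewhere readable, with restrictive permissions.
sudo cat /etc/letsencrypt/live/<example.com>/fullchain.pem > fullchain.pem
sudo cat /etc/letsencrypt/live/<example.com>/privkey.pem > privkey.pem
chmod 600 fullchain.pem privkey.pem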
Back in the admin console, untick both Use cert-manager and Use a self-signed TLS certificate; three inputs will appear for uploading your own certificate.
Note: the Certificate file is fullchain.pem, and the Private key is privkey.pem.
In Advanced options, tick Enable advanced options. In the Gitpod config patch (YAML file) input, upload a file with this content:
apiVersion: v1
workspace:
  runtime:
    containerdRuntimeDir: /var/snap/microk8s/common/var/lib/containerd/io.containerd.runtime.v2.task/k8s.io
    containerdSocket: /var/snap/microk8s/common/run/containerd.sock

You can save this file for later use.
Both fields, containerdRuntimeDir and containerdSocket, are configured for a k3s cluster by default, so we need to override them when using a microk8s cluster.
You can find the paths yourself with sudo find / | grep containerd.sock and sudo find / | grep k8s.io.
You may need to try other paths if these don't work.
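Scanning the whole filesystem is slow; since microk8s keeps its state under /var/snap/microk8s (an assumption about the snap layout, consistent with the paths above), a narrower search is usually enough:

# Search only the microk8s snap data instead of the whole filesystem.
sudo find /var/snap/microk8s -name containerd.sock
sudo find /var/snap/microk8s -type d -name k8s.io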
In Components, there is a configuration for the proxy service type. Leave it for now; we have a few solutions depending on your case. By default, it is LoadBalancer.
And finally, Save config.
It will take some time for the preflight checks, image pulls, etc.
When the preflight checks are done, verify that all Kubernetes resources are up with microk8s kubectl get all -n gitpod.
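A quick way to spot anything that isn't ready yet, using standard kubectl flags:

# Watch the pods until everything is Running or Completed:
microk8s kubectl get pods -n gitpod -w
# Or list only pods that are not in the Running phase:
microk8s kubectl get pods -n gitpod --field-selector=status.phase!=Running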
Check whether your public IP is visible on the VPS itself by running ifconfig | grep <your-public-ip>.
If nothing shows up, follow solution 2. Otherwise, follow solution 1 below.
1. If the public IP is attached directly to your VPS.
Remember the default service type? Yes, it is LoadBalancer. When you select the LoadBalancer service type, Kubernetes will try to allocate a public IP for this load balancer. You can check by running microk8s kubectl get services -n gitpod: notice the proxy service of type LoadBalancer, whose external IP column is in the pending state, and stays pending forever if you do nothing :).
Microk8s has an addon named metallb to handle this problem.
You can enable it by running microk8s enable metallb. It will ask you to specify an IP range; you can write <your-public-ip>-<your-public-ip>. This range allows exactly one IP.
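If I remember the addon's syntax correctly, you can also pass the range directly and skip the prompt:

# Non-interactive form: append the address range to the addon name.
microk8s enable metallb:<your-public-ip>-<your-public-ip>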
This action sometimes fails; if it does, retry with microk8s disable metallb followed by microk8s enable metallb.
When it is done, re-run microk8s kubectl get services -n gitpod. If the external IP is now <your-public-ip>, it worked.
2. If the public IP is NATed to your VPS.
FYI, I have a VPS and a public IP. When I run an nginx server bound to port 80 on this VPS, I can use the public IP to reach it, yet ifconfig on the VPS does not list that IP. The magic is that the public IP is owned by another machine, which routes traffic using NAT; NAT translates the public IP to the local IP of my VPS.
The solution is to set up an nginx that forwards the TCP streams to the ClusterIP of the proxy service.
Set up the three DNS records to point to the VPS.
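You can verify the records before continuing; dig comes from the dnsutils/bind-utils package, and any subdomain (here foo) exercises the wildcards:

# All three should print your VPS's public IP.
dig +short example.com
dig +short foo.example.com
dig +short foo.ws.example.com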
I'll use docker-compose to spin up an nginx.
Here is docker-compose.yaml:
version: "3.7"
services:
nginx:
image: nginx:alpine
network_mode: host
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf:ro
restart: alwaysCreate nginx.conf in same folder as docker-compose.yaml
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log debug;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

stream {
    server {
        listen 80;
        proxy_pass <cluster-ip-service-proxy>:80;
    }
    server {
        listen 443;
        proxy_pass <cluster-ip-service-proxy>:443;
    }
}
Remember the admin console? Open it (if you forgot or closed the session, you can re-open it by running microk8s kubectl kots admin-console -n gitpod).
In the Config tab, scroll down to Proxy service type, select ClusterIP, and save the config. It will take some time to preflight the new version; when it is done, remember to deploy the new version.
You can check by running microk8s kubectl get services -n gitpod: the proxy service now has type ClusterIP, and its ClusterIP is the <cluster-ip-service-proxy> you need.
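Or grab it directly with standard kubectl jsonpath output (assuming the service is named proxy, as above):

# Print only the ClusterIP of the proxy service.
microk8s kubectl get service proxy -n gitpod -o jsonpath='{.spec.clusterIP}'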
Replace <cluster-ip-service-proxy> in nginx.conf and re-run docker-compose, or exec into the container shell and run nginx -s reload.
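Concretely, something like the following should do it (ss comes from iproute2; adjust to your host):

# Start (or restart) the nginx container:
docker-compose up -d
# If it is already running, reload nginx in place instead:
docker-compose exec nginx nginx -s reload
# Confirm the host is listening on ports 80 and 443:
sudo ss -tlnp | grep -E ':(80|443) '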
3. If you have two VPSes, one low-resource VPS with a public IP, and another high-resource one without a public IP.
FYI, I have a powerful machine, but no public IP attached to it by default. Of course, there are ways to use the router's public IP, but to keep it simple I don't prefer them. The other machine is a weak one: a free AWS EC2 instance with an Elastic IP attached. The problem is that I can only run the microk8s node on the powerful machine.
The solution is to set up a VPN between the powerful machine and the weak one. There are many ways to do this: OpenVPN, WireGuard, etc. See the sketch below.
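As one of the many options, here is a minimal WireGuard sketch; the keys, the 10.0.0.0/24 range, and port 51820 are placeholders I chose for illustration, so adapt them to your setup (with this layout, <powerful-one-vpn-ip> would be 10.0.0.2):

# On each machine, generate a key pair:
wg genkey | tee privatekey | wg pubkey > publickey

# /etc/wireguard/wg0.conf on the weak one (has the public IP):
#   [Interface]
#   Address = 10.0.0.1/24
#   PrivateKey = <weak-one-private-key>
#   ListenPort = 51820
#   [Peer]
#   PublicKey = <powerful-one-public-key>
#   AllowedIPs = 10.0.0.2/32

# /etc/wireguard/wg0.conf on the powerful one:
#   [Interface]
#   Address = 10.0.0.2/24
#   PrivateKey = <powerful-one-private-key>
#   [Peer]
#   PublicKey = <weak-one-public-key>
#   Endpoint = <weak-one-public-ip>:51820
#   AllowedIPs = 10.0.0.1/32
#   PersistentKeepalive = 25

# Bring the tunnel up on both sides:
sudo wg-quick up wg0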
Run an nginx on the weak one and forward the TCP streams to the powerful one. There is some extra latency, but it still works.
Set up the three DNS records to point to the weak one (you can verify them with the same dig commands as before).
On the weak one, I will use docker-compose to spin up an nginx.
Here is docker-compose.yaml:
version: "3.7"
services:
nginx:
image: nginx:alpine
network_mode: host
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf:ro
restart: alwaysNote: I'm using network_mode: host, so it will bind container to host (docker host), it means that all port opened in container will be directly opened in host, and ip that host can reach, the container can reach too.
Create nginx.conf in same folder as docker-compose.yaml
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log debug;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

stream {
    server {
        listen 80;
        proxy_pass <powerful-one-vpn-ip>:80;
    }
    server {
        listen 443;
        proxy_pass <powerful-one-vpn-ip>:443;
    }
}
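Before moving on, a quick sanity check from the weak one (this assumes the WireGuard sketch above; substitute your own VPN IP):

# The tunnel should be up and the powerful one reachable:
ping -c 3 <powerful-one-vpn-ip>
# Start the forwarding proxy and confirm it is listening:
docker-compose up -d
sudo ss -tlnp | grep -E ':(80|443) '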
The weak one is done here. Let's continue on the powerful one.
The proxy service type is LoadBalancer again, and just like in solution 1, its external IP will stay pending until you enable metallb. Run microk8s enable metallb, but this time specify the range <powerful-one-vpn-ip>-<powerful-one-vpn-ip>; this range allows exactly one IP. If the action fails, retry with microk8s disable metallb followed by microk8s enable metallb.
When it is done, re-run microk8s kubectl get services -n gitpod. If the external IP is now <powerful-one-vpn-ip>, it worked.
Use your browser to access your Gitpod domain and see your successful installation.
One pod may fail to run. Check it with microk8s kubectl describe pod <pod> -n gitpod; in my case, the error pointed at the /var/snap/microk8s/common/var/lib/kubelet/seccomp folder.
The root cause is that the seccomp profiles are not in that folder. I'm not sure why, but on my machine they are located in /var/lib/kubelet/seccomp, so we can create a symbolic link at /var/snap/microk8s/common/var/lib/kubelet/seccomp pointing to the real location by running sudo ln -s /var/lib/kubelet/seccomp /var/snap/microk8s/common/var/lib/kubelet/seccomp
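To confirm the fix took effect (the pod name is a placeholder; its controller will recreate it):

# The link should now resolve to the real seccomp folder:
ls -l /var/snap/microk8s/common/var/lib/kubelet/seccomp
# Delete the failing pod so its controller recreates it with the fix:
microk8s kubectl delete pod <pod> -n gitpod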