K3s - Kubernetes Simplified
Lightweight Kubernetes. Production ready, easy to install, half the memory, all in a binary less than 100 MB.
Great for:
- Edge
- IoT
- CI
- Development
- ARM
- Embedding k8s
- Situations where a PhD in k8s clusterology is infeasible
Extracted from https://github.com/k3s-io/k3s ... YMMV
What makes k3s special is that they have stripped out all the bloatware from upstream Kubernetes that is not needed for small and bare-metal deployments. None of the so-called cloud features are present, and most of the in-tree storage drivers are also gone.
As such, it is a simpler replacement for the kubeadm command, and it does a bit more for you:
- installs Traefik as the default ingress controller -- it works ... especially for a test/dev cluster, but for production I'll use Contour/Envoy
- installs flannel as the CNI networking layer ... I had problems with flannel before, let's see if this works any better
- installs metrics-server to collect node/pod performance data
- provides a 'local path provisioner' ... it dynamically provisions PersistentVolumes as directories on the node's local disk
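A quick way to see these bundled components after an install is to list the system pods (pod names vary by k3s version), and any unwanted one can be skipped at install time with the documented --disable flag:
kubectl get pods -n kube-system
# For example, install without the bundled Traefik ingress:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik" sh -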
Basic Cluster Installation
They weren't kidding ... this is all it takes to create a simple cluster:
curl -sfL https://get.k3s.io | sh -
A kubeconfig file is written to /etc/rancher/k3s/k3s.yaml and the service is automatically started (or restarted). The install script installs K3s along with additional utilities such as kubectl, crictl, k3s-killall.sh, and k3s-uninstall.sh. Verify the node is up with:
sudo kubectl get nodes
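The sudo is needed because /etc/rancher/k3s/k3s.yaml is root-owned by default. To drive the cluster as a regular user with a standalone kubectl, a minimal sketch is to copy the kubeconfig into the usual place:
mkdir -p $HOME/.kube
sudo cat /etc/rancher/k3s/k3s.yaml > $HOME/.kube/config
chmod 600 $HOME/.kube/config
kubectl get nodes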
The K3S_TOKEN value is created at /var/lib/rancher/k3s/server/node-token on your server. To install on worker nodes, pass K3S_URL along with the K3S_TOKEN or K3S_CLUSTER_SECRET environment variable, for example:
curl -sfL https://get.k3s.io | K3S_URL=https://myserver:6443 K3S_TOKEN=XXX sh -
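If you're scripting the join, the token can be pulled straight off the server first -- a sketch, assuming SSH access and passwordless sudo on 'myserver' (hypothetical here):
# On the worker, fetch the token from the server, then join:
K3S_TOKEN=$(ssh myserver sudo cat /var/lib/rancher/k3s/server/node-token)
curl -sfL https://get.k3s.io | K3S_URL=https://myserver:6443 K3S_TOKEN=$K3S_TOKEN sh -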
HA Cluster Installation
One thing that attracted me to this distribution (as it is a distribution, not a fork of Kubernetes) is that it provides a very simple way to set up an HA cluster control plane. There are command options in the deployment that configure a shared etcd datastore to facilitate an HA control plane. Even with that ... it can get a bit cumbersome. Enter k3sup.
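For comparison, the raw installer flow that k3sup streamlines looks roughly like this (a sketch, assuming the embedded-etcd options documented for recent k3s releases):
# First master: initialize the embedded etcd cluster
curl -sfL https://get.k3s.io | sh -s - server --cluster-init
# Each additional master: join the first, using its node-token
curl -sfL https://get.k3s.io | K3S_TOKEN=<token> sh -s - server --server https://<server-ip-1>:6443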
k3sup
k3sup is a wrapper for the k3s installer that enables:
- installation of a complete cluster from a single login session using SSH
- combining multiple k3s parameters into single options to k3sup for simplicity (e.g. --cluster)
External LoadBalancer (or DNS Round Robin)
The one remaining item needed for a fully HA control plane is an external load balancer to provide a single IP address for accessing the control API from inside or outside the cluster. Lacking an F5 BIG-IP or equivalent hardware load balancer, the poor man's approach is round-robin DNS. The problem is that the pfSense firewall doesn't do round robins by default, so a little arm twisting is needed. Put this in the 'custom options' box on the DNS Resolver settings page:
server:
rrset-roundrobin: yes
local-zone: "kube-test.williams.localnet" transparent
local-data: "kube-test.williams.localnet A 10.0.0.70"
local-data: "kube-test.williams.localnet A 10.0.0.71"
local-data: "kube-test.williams.localnet A 10.0.0.72"
Change the name and IP addresses as needed, and then load the changes. This approach somewhat complicates the deployment commands below, so pay attention ...
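To verify the round robin from any host that uses this resolver, query it a few times -- the order of the A records should rotate between runs:
dig +short kube-test.williams.localnet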
Get k3sup
Not surprisingly, there are multiple ways to do this. The author wants you to download it via his CLI marketplace tool (arkade):
curl -sLS https://dl.get-arkade.dev | sh
# Install the binary using the command given as output, such as:
sudo cp arkade-darwin /usr/local/bin/arkade
# Or on Linux:
sudo cp arkade /usr/local/bin/arkade
arkade get k3sup
... or you can just download it directly ...
curl -sLS https://get.k3sup.dev | sh
sudo install k3sup /usr/local/bin/
You just need to put it somewhere in your path (or use a full path reference).
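Either way, a quick check that the binary is on the path and working:
k3sup version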
Install the master nodes
Install the first master node with these commands. This will put it on the node the commands are issued from ... if that is not the desire, remove the '--local' parameter.
mkdir -p $HOME/.kube
k3sup install --cluster --local --ip <server-ip-1> --k3s-channel latest --k3s-extra-args "--disable servicelb" --local-path $HOME/.kube/config --tls-san <rr-dns-name>
Explanations:
- --cluster sets this up for a HA control plane
- --local says to do it on this system, instead of using SSH to the node -- but you still need to provide the IP address, otherwise it will put 127.0.0.1 as the API server address
- '--k3s-channel latest' gets the latest version available from k3s
- '--k3s-extra-args "--disable servicelb"' turns off the simplified k3s loadbalancer ... it really is a crowbar solution and doesn't let you request specific IPs
- '--local-path' tells k3sup where to put the kubeconfig file ... this is the standard place, so we will put it there. The directory has to exist -- hence the 'mkdir' command ...
- '--tls-san' adds the DNS round robin name to the API server's certificate so TLS connections to that name pass muster (see the check after this list)
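A sketch of that TLS check, assuming openssl is installed on the workstation -- the round robin name should appear in the Subject Alternative Name list:
openssl s_client -connect <server-ip-1>:6443 </dev/null 2>/dev/null | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'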
Subsequent master nodes can be installed next:
k3sup join --server --ip <server-ip-2> --k3s-channel latest --server-ip <server-ip-1>
Explanations:
- join as another server, not an agent/worker
- provide the IP address of the node to be joined
- specify the IP of the first master created
Repeat this for all the masters (minimum 3), providing the IP address of the first master each time. Now that you have all the masters online, rewrite the kubeconfig file (~/.kube/config) to use the DNS RR name instead of the IP of the first master:
sed -i -e 's/<server-ip-1>/<rr-dns-name>/g' $HOME/.kube/config
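After the rewrite, confirm that kubectl still reaches the API through the round robin name and that all the masters report Ready:
kubectl get nodes -o wide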
At this point, you should have an operational cluster. k3s enables masters to be used for normal pods by default, so this is the minimum operational cluster. Next, install the agent/worker nodes:
k3sup join --ip <agent-node-x> --k3s-channel latest --server-host <rr-dns-name>
Explanations:
- Joining as an agent, so no '--server' this time
- provide each agent/worker IP in turn
- instead of the '--server-ip' parameter, we are using the '--server-host' parameter so the join goes through the DNS RR, ensuring the nodes can always reach a master
Finishing The Job
These instructions will create a simple or HA kubernetes cluster. The HA cluster can survive a single master failure, but losing two of the three masters breaks etcd quorum and takes the cluster down.
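A simple way to convince yourself of that -- a sketch; run the systemctl commands on one master and the kubectl from your workstation (the first query may need a retry while connections fail over to another A record):
# On one master, simulate a failure:
sudo systemctl stop k3s
# From the workstation, the API should still answer via the other masters:
kubectl get nodes
# Bring the failed master back:
sudo systemctl start k3s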
At this point you jump back into the rest of the instructions for installing a Kubernetes Cluster, remembering that the network layer and metrics server have already been installed.