== HA Cluster Installation ==

One thing that attracted me to this distribution (as it is a distribution, not a fork of Kubernetes) is that it provides a very simple way to set up an HA cluster control plane. The installer has command options for configuring an embedded etcd datastore shared by the server nodes, which is what makes an HA control plane possible. Even with that ... it can get a bit cumbersome. Enter [https://github.com/alexellis/k3sup k3sup].

=== k3sup ===

k3sup is a wrapper for the k3s installer that enables:

* installation of a complete cluster from a single login session using SSH
* combining multiple k3s parameters into single options to k3sup for simplicity (e.g. '''--cluster''')

=== External LoadBalancer/Reverse Proxy ===

The one remaining item needed for a fully HA control plane is an external load balancer or reverse proxy to provide a single IP address for accessing the control API from inside or outside the cluster. Lacking a BigIron F5 load balancer or something equivalent, the poor man's approach is to use a separate reverse proxy. This can be done with HAProxy or NGINX. HAProxy can itself be made highly available if you need that level of redundancy (a sketch appears at the end of this section), or a simple NGINX configuration like this will work:

<pre>
stream {
  upstream k3s-servers {
    server 10.0.0.10:6443;
    server 10.0.0.74:6443;
    server 10.0.0.64:6443;
  }

  server {
    listen 6443;
    proxy_pass k3s-servers;
  }
}
</pre>

This needs to be at the top level of the NGINX configuration -- either in /etc/nginx/nginx.conf itself, or in a file included at the top level. The process is described more completely in the k3s documentation: https://docs.k3s.io/datastore/cluster-loadbalancer

If setting up a reverse proxy is not an option, you can set up a round robin in the DNS. The problem is that the pfSense firewall doesn't do round robins by default, so a little arm twisting is needed. Put this in the 'Custom options' box on the DNS Resolver settings page:

<pre>
server:
rrset-roundrobin: yes
local-zone: "kube-test.williams.localnet" transparent
local-data: "kube-test.williams.localnet A 10.0.0.10"
local-data: "kube-test.williams.localnet A 10.0.0.64"
local-data: "kube-test.williams.localnet A 10.0.0.74"
</pre>

Change the name and IP addresses as needed, and then load the changes.
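For comparison, here is the rough HAProxy equivalent of the NGINX stream block above. This is a minimal sketch assuming the same three master IPs; the frontend/backend names are illustrative, not taken from a running configuration:

<pre>
# TCP passthrough to the k3s API servers (sketch -- names are illustrative)
frontend k3s-api
    bind *:6443
    mode tcp
    default_backend k3s-servers

backend k3s-servers
    mode tcp
    balance roundrobin
    server master-1 10.0.0.10:6443 check
    server master-2 10.0.0.74:6443 check
    server master-3 10.0.0.64:6443 check
</pre>

The 'check' keyword enables simple TCP health checks, so a master that goes down is pulled out of the rotation automatically.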
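Whichever approach you pick, it's worth confirming that the common name behaves before leaning on it in the install commands below. A couple of quick checks, using the round-robin name from the example above (substitute your own):

<pre>
# Should list all three master A records; with rrset-roundrobin the
# order rotates on repeated queries
dig +short kube-test.williams.localnet

# Once at least one master (or the reverse proxy) is up, the API port
# should answer through the common name
nc -zv kube-test.williams.localnet 6443
</pre>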
Using either approach somewhat complicates the deployment commands below, so pay attention ...

=== Get k3sup ===

Not surprisingly, there are multiple ways to do this. The author wants you to download it from a marketplace app (arkade):

<pre>
curl -sLS https://dl.get-arkade.dev | sh

# Install the binary using the command given as output, such as:
sudo cp arkade-darwin /usr/local/bin/arkade
# Or on Linux:
sudo cp arkade /usr/local/bin/arkade

arkade get k3sup
</pre>

... or you can just download it directly ...

<pre>
curl -sLS https://get.k3sup.dev | sh
sudo install k3sup /usr/local/bin/
</pre>

You just need to put it somewhere in your path (or use a full path reference).

=== Install the master nodes ===

Install the first master node with these commands. This will put it on the node that the command is issued from; if that is not the desire, remove the '--local' parameter and point k3sup at the target host over SSH.

<pre>
mkdir -p $HOME/.kube

export EXTRA_ARGS="--disable servicelb --disable traefik --disable local-storage --write-kubeconfig-mode=644 --flannel-backend host-gw"

k3sup install --cluster --local --ip <server-ip-1> --k3s-channel stable --k3s-extra-args "$EXTRA_ARGS" \
  --local-path $HOME/.kube/config --tls-san <rr-dns-name>
</pre>

Explanations:

* '--cluster' sets this up for an HA control plane
* '--local' says to do it on this system, instead of using SSH to the node -- '''but you still need to provide the IP address''', otherwise 127.0.0.1 will be written as the API server address
* '--k3s-channel stable' gets the latest stable version available from k3s (use 'latest' to get bleeding-edge updates)
* '--disable servicelb' turns off the simplified k3s load balancer ... it really is a crowbar solution and doesn't provide the ability to request a specific IP
* '--disable traefik' turns off the default ingress controller, as we really want something better
* '--disable local-storage' turns off the local storage driver (it's a pain if you're installing another storage driver)
* '--write-kubeconfig-mode=644' makes the k3s.yaml file readable by the pods ... it really should be by default
* '--flannel-backend host-gw' is a temporary workaround that forces the flannel backend into 'host-gw' mode, since 'vxlan' mode seems to be broken with Debian 11
* '--local-path' tells k3sup where to put the kubeconfig file ... this is the standard place, so we will put it there; the directory has to exist -- hence the 'mkdir' command
* '--tls-san' provides the name of the DNS round robin, so connections to that name pass muster on TLS

Subsequent master nodes can be installed next:

<pre>
k3sup join --server --host <new-master-server> --k3s-channel stable --server-host <original-master-server> --user ewilliam --k3s-extra-args "$EXTRA_ARGS"
</pre>

Explanations:

* '--server' joins this node as another server, not an agent/worker
* '--host' provides the hostname of the new master node to be joined
* '--server-host' specifies the hostname of the first master created
* use the same EXTRA_ARGS provided to the first server

Repeat this for all the masters (minimum of 3 for HA), always pointing '--server-host' at the first master. Note that if you later need to add or remove k3s service parameters, you need to do it on all server nodes -- edit the k3s.service file and restart one server at a time.

Now that you have all the masters online, reset the kubeconfig file (~/.kube/config) to use the common name through the reverse proxy or DNS round robin instead of the IP of the first master:

<pre>
sed -i -e 's/<server-ip-1>/<common-name>/g' $HOME/.kube/config
</pre>

=== Install Worker Nodes ===

At this point, you should have an operational cluster. k3s allows masters to run normal pods by default, so this is the minimum operational cluster. Next, install the agent/worker nodes:

<pre>
k3sup join --ip <agent-node-x> --k3s-channel stable --server-host <common-name>
</pre>

Explanations:

* joining as an agent, so no '--server' this time
* '--ip' provides each agent/worker IP in turn
* instead of the '--server-ip' parameter, we are using '--server-host' so the DNS RR (or reverse proxy) ensures the nodes can always reach a master