Kubernetes Controller

The Kubernetes controller installation builds on the prerequisites in the Kubernetes Cluster Installation page.

Install the Kubernetes repo

CentOS 7

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg 
exclude=kube*
EOF 

... or just copy it from an already-installed Kubernetes node ...
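For example, copying from an existing node (the hostname here is illustrative):

scp k8s-node1:/etc/yum.repos.d/kubernetes.repo /etc/yum.repos.d/kubernetes.repo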

Install the kubeadm tooling (kubelet, kubectl, kubeadm) on all nodes

yum install -y kubelet kubectl kubeadm --disableexcludes=kubernetes
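If the cluster is already running a specific release, the packages can be pinned to match. A sketch, assuming the 1.15.1 release implied by the kube-proxy image referenced later on this page:

yum install -y kubelet-1.15.1 kubeadm-1.15.1 kubectl-1.15.1 --disableexcludes=kubernetes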

Debian 10

apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
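On either distribution, confirm the tools installed correctly:

kubeadm version -o short
kubectl version --client --short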

Install the master

Make sure that /etc/sysconfig/kubelet (or /etc/default/kubelet for Debian) has the following line:

KUBELET_EXTRA_ARGS=--authentication-token-webhook --fail-swap-on=false --feature-gates=DevicePlugins=true --kubelet-cgroups=/systemd/system.slice

Then start and enable the kubelet:

systemctl start kubelet
systemctl enable kubelet
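The kubelet will crash-loop at this point because kubeadm init has not yet generated its configuration; that is expected. To watch its state:

systemctl status kubelet
journalctl -u kubelet -f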

Initialize the master with the pod network CIDR parameter (as root)

kubeadm init --pod-network-cidr=10.244.0.0/16 --token-ttl 0  --ignore-preflight-errors Swap --node-name `hostname -s`
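The init output ends with a kubeadm join command for the worker nodes. Since --token-ttl 0 makes the token non-expiring it can be reused indefinitely, and if it is misplaced it can be regenerated:

kubeadm token create --print-join-command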

If you do not want the master to use the default network interface for inter-node communication, add the following parameter to the kubeadm init command:

--apiserver-advertise-address A.B.C.D
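Putting that together with the init command above (the address is illustrative, reusing the secondary-network address from the --node-ip example further down this page):

kubeadm init --pod-network-cidr=10.244.0.0/16 --token-ttl 0 --ignore-preflight-errors Swap --node-name `hostname -s` --apiserver-advertise-address 10.1.0.99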

Initialize authentication for kubectl

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config 
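At this point kubectl should be able to talk to the cluster; the node will report NotReady until the pod network below is installed:

kubectl get nodes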

Install pod network

Pod networking is provided by an external plugin implementing the CNI specification; this section describes the deployment of the kube-router plugin. Instructions on how to deploy the flannel plugin can be found here.

Full instructions on how to install and use this plugin can be found at https://github.com/cloudnativelabs/kube-router. This plugin implements both the pod network and the proxy functions -- so the kube-proxy daemonset will be deleted as part of the installation.

Since the default parameter set needs to be adjusted, download the yaml manifest:

wget https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter-all-features.yaml

Edit the file to remove the

--run-firewall=true

parameter, then apply it to the cluster:

kubectl apply -f kubeadm-kuberouter-all-features.yaml
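For repeatable installs, the --run-firewall edit can be scripted rather than done by hand; a minimal sketch using sed to drop the flag line from the manifest (inspect the result before applying):

sed -i '/--run-firewall=true/d' kubeadm-kuberouter-all-features.yaml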

Also, as mentioned above, we need to shut down and clean up the kube-proxy pods:

kubectl -n kube-system delete ds kube-proxy
docker run --privileged -v /lib/modules:/lib/modules --net=host k8s.gcr.io/kube-proxy-amd64:v1.15.1 kube-proxy --cleanup
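To verify that kube-proxy is gone and the kube-router pods are running (k8s-app=kube-router is the label the stock manifest uses; confirm against your edited copy):

kubectl -n kube-system get ds
kubectl -n kube-system get pods -l k8s-app=kube-router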

This will implement all routing on the network interface that carries the default route for the host systems. If, however, a secondary network is targeted for cluster communications, we need to do one more thing: change the default IP address for the node -- at least as far as the kubelet daemon is concerned. The brute force way of doing this is to edit

/var/lib/kubelet/kubeadm-flags.env

and add one parameter inside the quoted string:

 KUBELET_KUBEADM_ARGS=" ..... --node-ip=10.1.0.99"

Then restart kubelet and it will be happy, and (eventually) everything else will be happy too -- though some pods may need to be manually restarted (such as the kube-router pods that get started as soon as the node is joined to the cluster).  Ideally we need to find a way to set this during the kubeadm join process ... stay tuned ...
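For reference, the restart commands (the kube-router pod delete uses the label assumed above; the daemonset recreates the pod automatically):

systemctl restart kubelet
kubectl -n kube-system delete pod -l k8s-app=kube-router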

Enable pods to run on master node if desired

kubectl taint nodes --all node-role.kubernetes.io/master-
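To confirm the taint was removed:

kubectl describe nodes | grep -i taints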

Deploy the NVIDIA device plugin

If the nodes have GPUs that are to be accessible and scheduled by Kubernetes (i.e. the dev cluster), then install the nvidia-device-plugin:

kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/1.0.0-beta4/nvidia-device-plugin.yml

Complete instructions can be found at https://github.com/NVIDIA/k8s-device-plugin
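To smoke-test the plugin, run a pod that requests a GPU through the nvidia.com/gpu resource the plugin advertises; a sketch (the pod name and CUDA image tag are illustrative):

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test
spec:
  restartPolicy: Never
  containers:
  - name: cuda
    image: nvidia/cuda:10.0-base
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1
EOF

Once the pod completes, the nvidia-smi output should show up in its log:

kubectl logs gpu-smoke-test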