Kubernetes Controller

The Kubernetes controller installation builds on the prerequisites described on the Kubernetes Cluster Installation page.

Install the Kubernetes repo

CentOS 7

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg 
exclude=kube*
EOF 

... or just copy it from an existing Kubernetes node ...
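
For example, copying it over SSH (k8s-node1 is a placeholder for whatever existing node you have):

scp root@k8s-node1:/etc/yum.repos.d/kubernetes.repo /etc/yum.repos.d/kubernetes.repo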

Install the kubeadm components on all nodes

yum install -y kubelet kubectl kubeadm --disableexcludes=kubernetes

Debian 10

apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl

Install the master

Make sure that /etc/sysconfig/kubelet (or /etc/default/kubelet for Debian) has the following line:

KUBELET_EXTRA_ARGS=--authentication-token-webhook --fail-swap-on=false --feature-gates=DevicePlugins=true --kubelet-cgroups=/systemd/system.slice

Then start and enable the kubelet:

systemctl start kubelet
systemctl enable kubelet

See the pod network section below for one more parameter (--node-ip) that could go on the end of this line now to save some time later.

Initialize the master node with the CIDR parameter to support pod networking (as root):

kubeadm init --pod-network-cidr=10.244.0.0/16 --token-ttl 0  --ignore-preflight-errors Swap --node-name `hostname -s` --skip-phases=addon/kube-proxy

If you do not want the master to use the default network interface for API communication, specify the following parameter on the kubeadm init command (per this page):

--apiserver-advertise-address A.B.C.D
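
For example, the full init command from above with the advertise address appended (10.1.0.99 is a placeholder; use the master's address on the desired interface):

kubeadm init --pod-network-cidr=10.244.0.0/16 --token-ttl 0 --ignore-preflight-errors Swap --node-name `hostname -s` --skip-phases=addon/kube-proxy --apiserver-advertise-address 10.1.0.99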

Initialize authentication for kubectl

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config 
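
A quick sanity check that kubectl can reach the API server (the node will show NotReady until the pod network is installed below):

kubectl get nodes
kubectl cluster-info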

Install pod network

Pod networking is provided by an external plugin implementing the CNI specification; this section describes the deployment of the kube-router plugin. Instructions on how to deploy the flannel plugin can be found here.

Full instructions on how to install and use this plugin can be found at https://github.com/cloudnativelabs/kube-router/blob/master/docs/kubeadm.md. This plugin implements both the pod network and the proxy functions -- so the kube-proxy daemonset is no longer needed.

Grab and install the manifest straight from their GitHub site:

KUBECONFIG=$HOME/.kube/config kubectl apply -f https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter.yaml
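
To watch the rollout, list the kube-router pods (the label selector assumes the one used in that manifest):

kubectl -n kube-system get pods -l k8s-app=kube-router -o wide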

This will implement all routing on the network interface that carries the host's default route. If, however, a secondary network is targeted for cluster communications, we need to do one more thing: change the default IP address for the node -- at least as far as the kubelet daemon is concerned. The brute-force way of doing this is to modify the /etc/sysconfig/kubelet file (or /etc/default/kubelet on Debian), adding one parameter to the end of the KUBELET_EXTRA_ARGS line:

KUBELET_EXTRA_ARGS= ..... --node-ip=10.1.0.99

This should be done before the kubeadm init command above, in which case the node's InternalIP address is set correctly from the start. If you do it after kubeadm init, restart the kubelet now and it will be happy, and (eventually) everything else will be happy too -- though some pods may need to be restarted manually (such as the kube-router pods, which start as soon as the node joins the cluster). Ideally we need to find a way to set this during the kubeadm join process ... stay tuned ...
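
To verify which address the kubelet registered, check the node's InternalIP (10.1.0.99 here is just the example address from above):

kubectl get nodes -o wide
kubectl describe node `hostname -s` | grep InternalIP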

Enable pods to run on the master node if desired

kubectl taint nodes --all node-role.kubernetes.io/master-
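
To confirm the taint was removed (the Taints line should read <none>):

kubectl describe node `hostname -s` | grep -i taints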

Deploy the NVIDIA device controller

If the nodes have GPUs that are to be made accessible and schedulable by Kubernetes (i.e. the dev cluster), then install the nvidia-device-plugin:

kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/1.0.0-beta4/nvidia-device-plugin.yml

Complete instructions can be found at https://github.com/NVIDIA/k8s-device-plugin
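
Once the plugin is running, a pod can request a GPU through the nvidia.com/gpu resource. A minimal test pod sketch (the image tag is illustrative; any CUDA-capable image should do):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test
spec:
  restartPolicy: Never
  containers:
  - name: cuda
    # illustrative image tag; substitute whatever CUDA image you use
    image: nvidia/cuda:10.2-base
    command: ["nvidia-smi"]
    resources:
      limits:
        # request one GPU from the device plugin
        nvidia.com/gpu: 1
EOF

If scheduling succeeds, kubectl logs gpu-test should print the familiar nvidia-smi table.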