Kubernetes Controller

The kubernetes controller install is based on the prerequisites in the Kubernetes Cluster Installation page.

Install the kubernetes repo

CentOS 7

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg 
exclude=kube*
EOF 

... or just copy it from an already installed kubernetes node ...
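
For example, assuming an existing node named k8s-node1 (a hypothetical hostname):

scp root@k8s-node1:/etc/yum.repos.d/kubernetes.repo /etc/yum.repos.d/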

Install the kubeadm components on all nodes:

yum install -y kubelet kubectl kubeadm --disableexcludes=kubernetes

Debian 10

apt-get update && apt-get install -y apt-transport-https curl
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
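
To confirm the packages installed and which version the cluster will run, a quick check:

kubeadm version
kubectl version --client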

Install the master

Make sure that /etc/sysconfig/kubelet (or /etc/default/kubelet for Debian) has the following line:

KUBELET_EXTRA_ARGS=--authentication-token-webhook --fail-swap-on=false --feature-gates=DevicePlugins=true --kubelet-cgroups=/systemd/system.slice
systemctl start kubelet
systemctl enable kubelet
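
Note that kubelet will restart in a crash loop until kubeadm init (below) gives it a configuration; that is expected. Its status can be checked with:

systemctl status kubelet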

See the pod network section below for one more parameter (--node-ip) that could go on the end of this line now to save some time later.

Initialize the master node with the CIDR parameter to support pod networking (as root):

kubeadm init --pod-network-cidr=10.244.0.0/16 --token-ttl 0 --ignore-preflight-errors Swap --node-name `hostname -s` --skip-phases=addon/kube-proxy

If you do not want the master to use the default network interface for API communication, specify the following parameter on the kubeadm init command (per https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/):

--apiserver-advertise-address A.B.C.D
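
For example, combining it with the init command above (10.1.0.10 being a placeholder address on the desired interface):

kubeadm init --pod-network-cidr=10.244.0.0/16 --token-ttl 0 --ignore-preflight-errors Swap --node-name `hostname -s` --skip-phases=addon/kube-proxy --apiserver-advertise-address 10.1.0.10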

Initialize authentication for kubectl

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config 
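
A quick sanity check that kubectl can now reach the API server (the node will show NotReady until the pod network below is installed):

kubectl get nodes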

Install pod network

Pod networking is provided by an external plugin implementing the CNI specification; this section describes the deployment of the kube-router plugin. Instructions on how to deploy the flannel plugin can be found on the Kubernetes Flannel Network Plugin page.

Full instructions on how to install and use this plugin can be found at https://github.com/cloudnativelabs/kube-router/blob/master/docs/kubeadm.md. This plugin implements both the pod network and the proxy functions -- so the kube-proxy daemonset is not needed, which is why the kubeadm init command above skips the addon/kube-proxy phase.

Grab the manifest straight from their GitHub site:

wget https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter.yaml

Edit the manifest to remove the '--run-firewall=true' option, set the '--run-service-proxy' option to 'true', and add one more option:

--kubeconfig=/var/lib/kube-router/kubeconfig/config
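
After those edits, the kube-router container's args in the daemonset should look roughly like this (a sketch -- the upstream manifest may carry other flags, and --run-router=true is assumed from the kube-router defaults):

- --run-router=true
- --run-service-proxy=true
- --kubeconfig=/var/lib/kube-router/kubeconfig/config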

Then you will need to create that directory and populate it:

sudo mkdir -p /var/lib/kube-router/kubeconfig
sudo cp /etc/kubernetes/admin.conf /var/lib/kube-router/kubeconfig/config
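
With the kubeconfig in place, apply the manifest downloaded and edited above:

kubectl apply -f kubeadm-kuberouter.yaml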

Not sure yet if this needs to be done for all nodes, or just the master ...

This routes all cluster traffic over the network interface that carries the host's default route. If, however, a secondary network is targeted for cluster communications, we need to do one more thing: change the node's IP address -- at least as far as the kubelet daemon is concerned. The brute-force way of doing this is to modify the /etc/sysconfig/kubelet file (/etc/default/kubelet on Debian), adding one parameter to the end of the line:

KUBELET_EXTRA_ARGS= ..... --node-ip=10.1.0.99
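
Combined with the arguments from the install section above, the full line would look something like this (10.1.0.99 being the node's address on the secondary network):

KUBELET_EXTRA_ARGS=--authentication-token-webhook --fail-swap-on=false --feature-gates=DevicePlugins=true --kubelet-cgroups=/systemd/system.slice --node-ip=10.1.0.99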

This should be done before the kubeadm init command above, in which case the node's InternalIP address is set correctly from the start. If you do it after kubeadm init, restart kubelet now; it will pick up the new address, and (eventually) everything else will follow -- though some pods may need to be restarted manually (such as the kube-router pods that start as soon as the node joins the cluster). Ideally we need to find a way to set this during the kubeadm join process ... stay tuned ...
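
If done after the fact, something like:

sudo systemctl restart kubelet
kubectl get nodes -o wide    # the INTERNAL-IP column should now show the secondary address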

Enable pods to run on master node if desired

kubectl taint nodes --all node-role.kubernetes.io/master-
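
To verify the taint is gone (the Taints line should read "none"):

kubectl describe node `hostname -s` | grep Taints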

Install kubectl plugins

See the Kubectl Plugins and Krew page -- just do it ... you will thank me later

Deploy the NVIDIA device controller

If the nodes have GPUs that are to be made accessible and scheduled by kubernetes (e.g. the dev cluster), then install the nvidia-device-plugin:

kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/1.0.0-beta4/nvidia-device-plugin.yml
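
Once the plugin pod is running, GPUs should appear as a schedulable resource on the node; a quick check (nvidia.com/gpu is the resource name the plugin registers):

kubectl describe node `hostname -s` | grep nvidia.com/gpu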

Complete instructions can be found at https://github.com/NVIDIA/k8s-device-plugin

If using containerd as the runtime, see https://josephb.org/blog/containerd-nvidia/