Kubernetes Controller

The Kubernetes controller install is based on the prerequisites in the Kubernetes Cluster Installation page.

Install the Kubernetes repo

CentOS 7

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg 
exclude=kube*
EOF 

... or just copy it from an already installed Kubernetes node.
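
For example, assuming an existing node reachable as k8s-node1 (a hypothetical hostname), the file can be copied with scp:

# copy the repo definition from an existing node (hostname is an example)
scp root@k8s-node1:/etc/yum.repos.d/kubernetes.repo /etc/yum.repos.d/kubernetes.repo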

Install the kubeadm packages (kubelet, kubectl, kubeadm) on all nodes. The repo file's exclude=kube* line prevents routine yum updates from upgrading these packages unexpectedly; the --disableexcludes=kubernetes flag overrides the exclusion for this explicit install.

yum install -y kubelet kubectl kubeadm --disableexcludes=kubernetes

Debian 10

apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
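
The apt-mark hold pins the package versions so a routine apt-get upgrade cannot move them, analogous to the exclude=kube* line in the CentOS repo file. On either distribution, the install can be sanity-checked by printing the component versions:

kubeadm version
kubectl version --client
kubelet --version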

Install the master

Make sure that /etc/sysconfig/kubelet (or /etc/default/kubelet for Debian) has the following line:

KUBELET_EXTRA_ARGS=--authentication-token-webhook --fail-swap-on=false --feature-gates=DevicePlugins=true --kubelet-cgroups=/systemd/system.slice

Then start and enable the kubelet:

systemctl start kubelet
systemctl enable kubelet
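
Note that until kubeadm init has run, the kubelet has no cluster configuration and will restart in a loop; that is expected. One way to confirm it is running and picking up the extra arguments:

systemctl status kubelet
journalctl -u kubelet --no-pager | tail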

Initialize the master with the parameter required for the flannel network (as root)

kubeadm init --pod-network-cidr=10.244.0.0/16 --token-ttl 0  --ignore-preflight-errors Swap --node-name `hostname -s`
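
The tail of the kubeadm init output contains the kubeadm join command (with a bootstrap token and CA cert hash) that worker nodes use to join the cluster; save it. It looks roughly like this (values are illustrative):

kubeadm join 10.0.0.1:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:<hash>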

If the master should not use the default network interface for inter-node communication, add the following parameter to the kubeadm init command (per this page):

--apiserver-advertise-address A.B.C.D
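
For example, assuming the desired interface on the master has the address 10.1.0.1 (substitute the real address), the full command would be:

kubeadm init --pod-network-cidr=10.244.0.0/16 --token-ttl 0 --ignore-preflight-errors Swap --node-name `hostname -s` --apiserver-advertise-address 10.1.0.1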

Initialize authentication for kubectl

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config 
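
kubectl should now be able to reach the API server; a quick check:

kubectl get nodes

The master will report NotReady until the pod network is installed in the next step.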

Install pod network

Pod networking is provided by an external plugin; this section describes the deployment of the flannel plugin.

The Flannel pod networking can be installed directly from the repository using this command:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

On Kubernetes v1.16+, if there are issues (specifically, nodes don't reach the Ready state), try this specific version:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
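
Either way, the flannel daemonset pods (kube-flannel-ds-*) should come up in the kube-system namespace and the node should transition to Ready; one way to watch for that:

kubectl -n kube-system get pods -o wide
kubectl get nodes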

By default, flannel uses the primary network interface (eth0 on 10.0.0.0/24). To push the pod traffic onto a secondary interface instead (such as for the development cluster), download the YAML manifest and add one parameter to the argument list at about line 190, under spec -> template -> spec -> containers (the container named kube-flannel-ds-amd64) -> args. The result should look like this:

...
args:
- --ip-masq
- --kube-subnet-mgr
- --iface=eth1
resources:
...
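
The unmodified manifest can be fetched for editing with, for example:

curl -LO https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml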

Substitute the secondary interface name for eth1 if it differs. Then apply the manifest:

kubectl apply -f kube-flannel.yml

Other configuration options can be passed to flannel per this page.

Enable pods to run on the master node if desired

kubectl taint nodes --all node-role.kubernetes.io/master-
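
To confirm the taint was removed:

kubectl describe nodes | grep -i taints

The master should now report Taints: <none>.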

Deploy the NVIDIA device plugin

If the nodes have GPUs that should be accessible to and schedulable by Kubernetes (e.g. the dev cluster), install the nvidia-device-plugin:

kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/1.0.0-beta4/nvidia-device-plugin.yml

Complete instructions can be found at https://github.com/NVIDIA/k8s-device-plugin
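
Once the plugin daemonset is running, GPU nodes should advertise an nvidia.com/gpu resource under Capacity and Allocatable; a quick check:

kubectl describe nodes | grep nvidia.com/gpu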