Kubernetes Controller

The Kubernetes controller installation is based on the prerequisites in the Kubernetes Cluster Installation page.

Install the Kubernetes repo

CentOS 7

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg 
exclude=kube*
EOF 

... or just copy it from an already installed kubernetes node ...
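
For example, assuming an existing node named kube1 (a hypothetical hostname), the repo file can be copied over with scp:

scp root@kube1:/etc/yum.repos.d/kubernetes.repo /etc/yum.repos.d/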

Install the kubeadm components on all nodes

yum install -y kubelet kubectl kubeadm --disableexcludes=kubernetes
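
As a quick sanity check, confirm the tools installed and report consistent versions (these commands work the same on either distro):

kubeadm version
kubectl version --client
kubelet --version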

Debian 10

apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
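
To confirm the hold took effect (so a routine apt-get upgrade will not move these packages), list the held packages:

apt-mark showhold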

Install the master

Make sure that /etc/sysconfig/kubelet (or /etc/default/kubelet for Debian) has the following line:

KUBELET_EXTRA_ARGS=--authentication-token-webhook --fail-swap-on=false --feature-gates=DevicePlugins=true --kubelet-cgroups=/systemd/system.slice
systemctl start kubelet
systemctl enable kubelet
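
Until kubeadm init has run, kubelet will restart in a loop waiting for its configuration; this is normal and can be watched with:

systemctl status kubelet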

Initialize the master, passing the pod network CIDR expected by the network plugin (as root)

kubeadm init --pod-network-cidr=10.244.0.0/16 --token-ttl 0  --ignore-preflight-errors Swap --node-name `hostname -s`
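
The tail of the init output contains the kubeadm join command that the worker nodes will need; it looks roughly like this (the address, token, and hash here are placeholders):

kubeadm join 10.0.0.10:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>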

If you want the master to not use the default network interface for inter-node communication, specify the following parameter on the kubeadm init command (per this page):

--apiserver-advertise-address A.B.C.D
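
For example, to advertise the API server on the secondary 10.1.0.0/24 network used later on this page (the address is illustrative), the full init command would look like:

kubeadm init --pod-network-cidr=10.244.0.0/16 --token-ttl 0 --ignore-preflight-errors Swap --node-name `hostname -s` --apiserver-advertise-address 10.1.0.99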

Initialize authentication for kubectl

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config 
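
At this point kubectl should be able to talk to the cluster; the master will report NotReady until the pod network is installed in the next section:

kubectl get nodes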

Install pod network

Pod networking is provided by an external plugin implementing the CNI specification; this section describes the deployment of the kube-router plugin. Instructions on how to deploy the flannel plugin can be found here.

Full instructions on how to install and use this plugin can be found at https://github.com/cloudnativelabs/kube-router. This plugin implements both the pod network and the proxy functions -- so the kube-proxy daemonset will be deleted as part of the installation.

Since the default parameter set needs to be adjusted, download the yaml manifest:

wget https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter-all-features.yaml

Edit the file to remove the --run-firewall=true parameter.
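
One way to make that edit, assuming the flag sits on its own line in the container args list (as it does in the upstream manifest), is a sed one-liner:

sed -i '/--run-firewall=true/d' kubeadm-kuberouter-all-features.yaml

Then apply the manifest to the cluster: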

kubectl apply -f kubeadm-kuberouter-all-features.yaml

Also, as mentioned above, we need to shut down and clean up the kube-proxy pods:

kubectl -n kube-system delete ds kube-proxy
docker run --privileged -v /lib/modules:/lib/modules --net=host k8s.gcr.io/kube-proxy-amd64:v1.15.1 kube-proxy --cleanup
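
Afterwards, confirm that the kube-proxy daemonset is gone and the kube-router pods are running (k8s-app=kube-router is the label used by the upstream manifest):

kubectl -n kube-system get ds
kubectl -n kube-system get pods -l k8s-app=kube-router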

This will implement all routing on the network interface that carries the default route for the host systems. If, however, there is a secondary network that is targeted for cluster communications, we need to do one more thing: change the default IP address for the node -- at least as far as the kubelet daemon is concerned. The brute-force way of doing this is to modify /var/lib/kubelet/kubeadm-flags.env, adding one parameter inside the quoted string:

KUBELET_KUBEADM_ARGS=" ..... --node-ip=10.1.0.99"

Then restart kubelet and it will be happy, and (eventually) everything else will be happy too -- though some pods may need to be manually restarted (such as the kube-router pods, which start as soon as the node joins the cluster). Ideally we need to find a way to set this during the kubeadm join process ... stay tuned ...
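
The restart, plus a kick for the kube-router pods (the daemonset recreates anything deleted, now using the new node IP), looks like this:

systemctl restart kubelet
kubectl -n kube-system delete pod -l k8s-app=kube-router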

Enable pods to run on master node if desired

kubectl taint nodes --all node-role.kubernetes.io/master-
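
To confirm the taint is gone (substitute the real node name for the hypothetical kube1):

kubectl describe node kube1 | grep Taints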

Deploy the NVIDIA device controller

If the nodes have GPUs that are to be accessible and scheduled by Kubernetes (e.g. the dev cluster), then install the nvidia-device-plugin:

kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/1.0.0-beta4/nvidia-device-plugin.yml

Complete instructions can be found at https://github.com/NVIDIA/k8s-device-plugin
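
Once the plugin pods are running, a pod can request a GPU through the nvidia.com/gpu resource. A minimal sketch (the pod name and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: gpu-test
spec:
  restartPolicy: Never
  containers:
  - name: cuda
    image: nvidia/cuda:10.0-base
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1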