IP Controller

Revision as of 00:56, 24 March 2020

In order for Kubernetes services to be accessible outside the cluster, they need externally routable IP addresses. While this seems like a basic need that Kubernetes itself should provide, the implementation depends heavily on the underlying platform -- and is therefore left to the provider to implement.

We are using the bare-metal load balancer MetalLB. It is a full-featured load balancer and IP controller with multiple modes of operation. As deployed here, it uses simple Layer 2 operation, advertising IP addresses over the existing network interfaces.

The basic load balancer is deployed with a simple manifest:

kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.9.3/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.9.3/manifests/metallb.yaml
# On first install only
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"

At this point, the controller itself is active, but it has nothing to do -- it doesn't have any IP addresses to hand out. The controller uses a ConfigMap to define address pools, allocation methods, and routing protocols. There are multiple definitions in the k8s-admin repository on GitLab; use the one appropriate to the system:

filename                   IP range
metallb-config-dev.yaml    10.0.0.200 - 10.0.0.240
metallb-config-prod.yaml   10.0.0.110 - 10.0.0.127
metallb-config-test.yaml   10.0.0.190 - 10.0.0.197
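
For reference, these files use the MetalLB v0.9 ConfigMap format. A minimal Layer 2 sketch is shown below; the pool name and address range here mirror the dev entry above, but the files in the k8s-admin repository are authoritative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2        # Layer 2 mode, as deployed here
      addresses:
      - 10.0.0.200-10.0.0.240 # range handed out to LoadBalancer services
```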

Apply the correct pool definition to get things working:

kubectl apply -f <filename>

This load balancer implementation supports requesting a specific IP address through the loadBalancerIP parameter in the service definition -- if the requested address is available in the pool, it will be allocated. The externalTrafficPolicy parameter is also respected.
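
As an illustration, a Service requesting a fixed address from the pool might look like the sketch below; the service name, selector, and ports are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web            # hypothetical service name
spec:
  type: LoadBalancer
  loadBalancerIP: 10.0.0.210   # must be free within the configured pool
  externalTrafficPolicy: Local # also honored by MetalLB; preserves client source IPs
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
```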

See the home page for the MetalLB package for complete installation and configuration options.

