IP Controller

For Kubernetes services to be accessible from outside the cluster, they need externally routable IP addresses. While this might seem like a basic capability that Kubernetes itself should provide, the implementation is actually highly dependent on the underlying platform -- and is therefore left to the provider to implement.

We are using the bare-metal load balancer MetalLB. It is a full-featured load balancer and IP controller with multiple modes of operation. As deployed here, it runs in simple Layer 2 mode, advertising IP addresses over the existing network interfaces.

The basic load balancer is deployed with a simple manifest:

 kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.3/config/manifests/metallb-native.yaml
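
The manifest installs everything into the metallb-system namespace; a quick check that the controller deployment and speaker daemonset came up:

 kubectl get pods -n metallb-system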

At this point, the controller itself is active, but it has nothing to do -- it doesn't have any IP addresses to hand out. The controller is configured through custom resources that define address pools, allocation methods, and routing protocols. There are multiple definitions in the [https://gitlab.williams-net.org/k8s/k8s-admin k8s-admin repository on GitLab]; use the one appropriate to the system:

{| class="wikitable"
! filename !! IP range
|-
| ipaddresspool-prod.yaml || 10.0.0.110 - 10.0.0.127
|}
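
The file in the repository is the source of truth, but as a rough sketch, an IPAddressPool plus a matching L2Advertisement covering the prod range would look something like this (the resource names prod-pool and prod-l2 are illustrative, not necessarily what the repository uses):

 apiVersion: metallb.io/v1beta1
 kind: IPAddressPool
 metadata:
   name: prod-pool               # illustrative name
   namespace: metallb-system
 spec:
   addresses:
   - 10.0.0.110-10.0.0.127       # prod range from the table above
 ---
 apiVersion: metallb.io/v1beta1
 kind: L2Advertisement
 metadata:
   name: prod-l2                 # illustrative name
   namespace: metallb-system
 spec:
   ipAddressPools:
   - prod-pool                   # advertise the addresses from the pool above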

Apply the correct pool definition to get things working:

 kubectl apply -f <filename>
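
For example, on the production cluster:

 kubectl apply -f ipaddresspool-prod.yaml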

This load balancer implementation supports requesting a specific IP address through the loadBalancerIP parameter on the service -- if the address is available, MetalLB will allocate it. The externalTrafficPolicy parameter is also respected.
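
As a sketch, a Service requesting a specific address from the prod pool might look like this (the name, selector, ports, and the particular address are placeholders for illustration):

 apiVersion: v1
 kind: Service
 metadata:
   name: example-web              # placeholder name
 spec:
   type: LoadBalancer
   loadBalancerIP: 10.0.0.115     # must fall inside the configured pool
   externalTrafficPolicy: Local   # preserve client source IPs
   selector:
     app: example-web             # placeholder selector
   ports:
   - port: 80
     targetPort: 8080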

See the home page for the MetalLB package for complete installation and configuration options.

