
k3d + MetalLB homelab k8s cluster

homelab kubernetes k3d metalLB
Marko Slipogor

After finishing this guide you will have a fully working homelab setup: a multi-node Kubernetes cluster running in k3d, capable of exposing services of type LoadBalancer via MetalLB.

Without MetalLB or a similar solution (we are talking only software solutions here), the External IP of any newly created LoadBalancer service in Kubernetes will stay in the pending state indefinitely. MetalLB's purpose is to cover this deficit by offering a network load-balancer implementation that integrates with standard network equipment, so that external services on bare-metal clusters work in a similar way to their equivalents in IaaS platforms.

This guide complements the MetalLB installation docs and sets up MetalLB using the layer 2 protocol. With Docker on Linux, you can send traffic directly to the load balancer's external IP, provided the IP falls within the docker network's address space.

On macOS and Windows, Docker does not expose the docker network to the host. Because of this limitation, containers (including k3d nodes) are only reachable from the host via port-forwards; other containers and pods, however, can reach everything running in docker, including the load balancers.

Prerequisites: #

You need the tools used throughout this guide installed before proceeding: Docker, kubectl, jq, and Homebrew (used below to install k3d).

Step-by-step setup #

K3D #

First, let's install k3d. You can install it by running:

brew install k3d

Using this command we will create our local k8s cluster:

k3d cluster create k3d-k8s \
  --api-port 6550 \
  --agents 3 \
  --no-lb \
  --k3s-arg '--disable=traefik@server:*'

Let’s break down that command:

k3d cluster create k3d-k8s:

  • k3d is the command-line tool for managing K3s clusters.

  • cluster create is the subcommand used to create a new K3s cluster.

  • k3d-k8s is the name given to the newly created cluster. You can choose any name you prefer for your cluster.

  • --api-port 6550:

    • --api-port specifies the port on which the Kubernetes API server will listen. In this case, it’s set to port 6550. By default, the Kubernetes API server listens on port 6443, but you can specify a different port if needed.
  • --agents 3:

    • --agents specifies the number of agent nodes in the cluster. In this case, it’s set to 3, which means that the cluster will have 3 agent (worker) nodes in addition to the server (control-plane) node.
  • --no-lb indicates that you do not want to create an external load balancer for the cluster. By default, K3d creates a load balancer to expose the Kubernetes API server externally. This flag disables that feature.

  • --k3s-arg '--disable=traefik@server:*':

    • --k3s-arg allows you to pass additional arguments to the K3s runtime during cluster creation.
    • --disable=traefik@server:* is an argument passed to K3s to disable the Traefik ingress controller. Traefik is a popular Kubernetes ingress controller, and in this case, it’s being disabled for this cluster. The @server:* part means disabling Traefik for all server nodes.

MetalLB #

MetalLB fills the role of a network load balancer in Kubernetes environments where traditional cloud-based load balancers are not applicable. It manages IP address allocation, ARP announcements, health checks, and load balancing for services of type “LoadBalancer” within your cluster, making it possible for external traffic to reach your Kubernetes applications running on-premises or in bare-metal setups.

Let’s install MetalLB by running the following command against our k3d-k8s cluster:

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.11/config/manifests/metallb-native.yaml

We are going to use Layer 2 configuration. Layer 2 mode is the simplest to configure and we are going to need only a range of IP addresses. As explained in the official documentation:

Layer 2 mode does not require the IPs to be bound to the network interfaces of your worker nodes. It works by responding to ARP requests on your local network directly, to give the machine’s MAC address to clients.

Run the following command to get the subnet of our docker network:

docker network inspect k3d-k8s | jq -r '.[0].IPAM.Config[0].Subnet'

The output will contain a CIDR such as 172.19.0.0/16. We want our load-balancer IP range to come from this subnet. We can configure MetalLB, for instance, to use 172.19.255.1 to 172.19.255.250 by creating an IPAddressPool and the related L2Advertisement.
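If you want to script this step, the range can be derived from the subnet with plain shell string handling. This sketch assumes the usual docker `/16` subnet shape; substitute the subnet you actually got from the inspect command:

```shell
# Build a MetalLB address range from the docker subnet. Assumes a /16
# subnet such as 172.19.0.0/16 (docker's default shape).
SUBNET="172.19.0.0/16"               # replace with your inspect output
PREFIX="${SUBNET%.0.0/16}"           # keep the first two octets: 172.19
RANGE="${PREFIX}.255.1-${PREFIX}.255.250"
echo "$RANGE"                        # 172.19.255.1-172.19.255.250
```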

Let’s add that IP range to our config file (copy the following code):

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: ip-pool
  namespace: metallb-system
spec:
  addresses:
  - 172.19.255.1-172.19.255.250 # replace with your output
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2advertisement
  namespace: metallb-system
spec:
  ipAddressPools:
  - ip-pool

Save this code to a file, e.g. ip-pool.yaml. Now let’s deploy the manifest:

kubectl apply -f ip-pool.yaml

When MetalLB sets the external IP address of a LoadBalancer service, the corresponding entries are created in the iptables NAT table, and the node holding the selected IP address starts responding to HTTP requests on the ports configured in the Service.

Test #

It’s time we test our workload. Let’s create a deployment of Nginx:

kubectl create deployment nginx --image=nginx

Expose the deployment using a LoadBalancer:

kubectl expose deployment nginx --port=80 --type=LoadBalancer

If everything is fine we should see an external IP assigned to our service!
