k3d — Kubernetes Up and Running Quickly

Brett Mostert
Dec 20, 2020 · 4 min read

If you have ever set up Kubernetes the Hard Way (and you totally should, if only for the learning experience), you will know that it is a pain, and you do not want to do that every time you need a cluster for testing or development…

Disclaimer: I am still learning the ins and outs of Kubernetes; I write these articles to learn and, hopefully, to help others as well. Comments and constructive criticism are welcome.

TL;DR

There are plenty of tools; k3d is awesome and lightweight because it simply wraps k3s in Docker.

Install

curl -s https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash

Create Cluster (default)

k3d cluster create mycluster -p 9080:80@loadbalancer

Delete Cluster

k3d cluster delete mycluster

For more, carry on reading…

Enter the tooling

There are plenty of tools to make our lives easier: minikube, kind, microk8s, and of course k3d, to name a few.

Now, I am not going to tell you about each one of them (they do a pretty damn good job of that themselves); rather, I am going to tell you why I chose k3d.

Why I chose k3d

I chose k3d for the following reasons…

  • I can run it pretty much anywhere docker runs, including in WSL2
  • I don’t need bloated VMs
  • I can run a more realistic environment (i.e. multi-node)
  • It can be used for local dev/testing environments
  • It’s just a wrapper for k3s, which means if I want to move over to k3s it's not a far leap (or at least that’s what I choose to believe)
  • k3s is simple and lightweight as it's built for IoT and Edge Computing
  • Most importantly, k3d is dead simple to use

Getting Started

For alternative and more detailed instructions, check out the documentation at k3d.io

Requirements

  • Docker (Yup, that’s it)

Installation

curl -s https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash
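You can verify the installation by checking the version:

k3d version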

Setting Up a Single Server Node (defaults)

k3d cluster create mycluster -p 9080:80@loadbalancer

The -p 9080:80@loadbalancer flag maps port 9080 on your host to port 80 on the k3d load balancer container, which in turn forwards traffic to the cluster.

Let’s check the cluster list

k3d cluster list

which will result in the following

NAME        SERVERS   AGENTS   LOADBALANCER
mycluster   1/1       0/0      true

Managing KubeConfigs

To get the kubeconfig you can run the following

k3d kubeconfig write mycluster

The above will result in the kubeconfig being exported to $HOME/.k3d/kubeconfig-mycluster.yaml.

This is super useful, as you can copy the file to another workstation and manage the cluster remotely from there.

Setting the current context

export KUBECONFIG=$HOME/.k3d/kubeconfig-mycluster.yaml
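To double-check that you are pointing at the right cluster, check the current context; k3d prefixes its context names with k3d-, so for this cluster you should see the following:

kubectl config current-context
# k3d-mycluster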

Checking out the Cluster

Run the following to check the cluster information

kubectl cluster-info

which will result in something like this…

Kubernetes control plane is running at https://0.0.0.0:42713
CoreDNS is running at https://0.0.0.0:42713/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://0.0.0.0:42713/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

Run the following to see a list of nodes

kubectl get nodes

which will result in the following…

NAME                     STATUS   ROLES    AGE   VERSION
k3d-mycluster-server-0   Ready    master   13m   v1.19.4+k3s1

So what gets created, besides the k3s cluster?

Well, in Docker you will see two containers: the load balancer (sitting in front of the server nodes) and the k3s server node. Run the command below to check it out.

docker ps

The load balancer allows access to the cluster from your local and external network addresses by “load balancing” requests across the nodes. Nifty, huh?
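For example, listing just the container names should give you something along these lines (the names follow k3d’s <cluster>-serverlb and <cluster>-server-N convention, so yours should match if you used the defaults above):

docker ps --format '{{.Names}}'
# k3d-mycluster-serverlb
# k3d-mycluster-server-0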

Let’s Deploy Something!

Right, let’s deploy something other than Nginx; let’s deploy “echoserver”.

Deployment

Create a deployment.yaml file as per below…
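Something like the following minimal sketch will do (the image and its port are my assumptions: k8s.gcr.io/echoserver:1.4 listens on 8080, and the 2 replicas match the deployment output further down).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: echoserver
  labels:
    app: echoserver
spec:
  replicas: 2
  selector:
    matchLabels:
      app: echoserver
  template:
    metadata:
      labels:
        app: echoserver
    spec:
      containers:
        - name: echoserver
          image: k8s.gcr.io/echoserver:1.4 # assumed image; swap in your own echo server if you prefer
          ports:
            - containerPort: 8080 # echoserver:1.4 serves HTTP on 8080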

and then run

kubectl apply -f ./deployment.yaml

to deploy it. Once deployed, run

kubectl get deployments

and you should see something like this

NAME         READY   UP-TO-DATE   AVAILABLE   AGE
echoserver   0/2     2            0           32s

run the same command until you get…

NAME         READY   UP-TO-DATE   AVAILABLE   AGE
echoserver   2/2     2            2           2m20s

as you can see, mine took around 2 minutes to complete. You can also check the ReplicaSet and Pods by running the following…

kubectl get rs
kubectl get pods --show-labels
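Tip: instead of re-running kubectl get deployments, you can let kubectl block until the rollout finishes:

kubectl rollout status deployment/echoserver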

Service, let's expose it

Create a service.yaml file as per below…
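A minimal sketch again: it selects the pods from the Deployment above and exposes them inside the cluster on port 80 (targetPort 8080 matches the echoserver image assumed earlier).

apiVersion: v1
kind: Service
metadata:
  name: echoserver
spec:
  type: ClusterIP # the default, shown for clarity
  selector:
    app: echoserver # matches the Deployment's pod labels
  ports:
    - port: 80         # service port inside the cluster
      targetPort: 8080 # the container port assumed in the Deployment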

and then run

kubectl apply -f service.yaml

to deploy the service. Once deployed, run

kubectl get services

and you should get the following…

NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.43.0.1       <none>        443/TCP   127m
echoserver   ClusterIP   10.43.233.143   <none>        80/TCP    9s

Right, now that it’s exposed internally in the cluster, let’s expose it externally.

Ingress, expose this thing externally

Create an ingress.yaml file as per below…
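One more minimal sketch: it routes all HTTP traffic to the echoserver service on port 80. I am using the networking.k8s.io/v1beta1 API here, which matches the k3s v1.19 / Traefik v1 vintage of this cluster; on newer clusters you would use networking.k8s.io/v1 instead.

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: echoserver
spec:
  rules:
    - http:
        paths:
          - path: / # route everything to the echoserver service
            backend:
              serviceName: echoserver
              servicePort: 80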

and then run

kubectl apply -f ingress.yaml

once completed, run the following

curl http://127.0.0.1:9080

and you should get something like this…

CLIENT VALUES:
client_address=('10.42.0.6', 36572) (10.42.0.6)
command=GET
path=/
real path=/
query=
request_version=HTTP/1.1
SERVER VALUES:
server_version=BaseHTTP/0.6
sys_version=Python/3.5.0
protocol_version=HTTP/1.0
HEADERS RECEIVED:
Accept=*/*
Accept-Encoding=gzip
Host=127.0.0.1:9080
User-Agent=curl/7.69.1
X-Forwarded-For=10.42.0.1
X-Forwarded-Host=127.0.0.1:9080
X-Forwarded-Port=9080
X-Forwarded-Proto=http
X-Forwarded-Server=traefik-5dd496474-5lcdg
X-Real-Ip=10.42.0.1

Deleting the Cluster

k3d cluster delete mycluster

or, to delete all clusters (if you have multiple clusters)

k3d cluster delete --all

Something to try

If you want to try running a multi-node cluster, try replacing the k3d cluster create command with

k3d cluster create little-monster -s 1 -a 2 --port 8080:80@loadbalancer

which will create a cluster with 1 server node and 2 agent nodes.
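Once it’s up, kubectl get nodes should list all three nodes, named according to k3d’s convention (a sketch; your ages and versions will differ):

kubectl get nodes
# NAME                          STATUS   ROLES    AGE   VERSION
# k3d-little-monster-server-0   Ready    master   1m    v1.19.4+k3s1
# k3d-little-monster-agent-0    Ready    <none>   1m    v1.19.4+k3s1
# k3d-little-monster-agent-1    Ready    <none>   1m    v1.19.4+k3s1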

Wrapping it up

I think that's about enough for this article… In other articles, I will be replacing the default Traefik Ingress Controller (v1) with Istio.


Brett Mostert

Software Engineer — I solve problems, mostly. Interested in AWS, GCP, NodeJs, TypeScript, Golang, Python, the whole #! (sha-bang) of Software Engineering.