Running a Kubernetes Cluster at Home | Part I

When you think about a Kubernetes cluster with multiple master nodes and worker nodes, a picture like this might come to mind:

This setup is typical for cloud vendors or companies running on-premises servers to host their own Kubernetes clusters. But maybe this image didn’t even cross your mind if you’ve interacted with Kubernetes through a cloud provider. And that’s perfectly fine; after all, the real magic of Kubernetes lies in its ability to abstract away the underlying hardware. With just a few YAML definitions, you can create services, persistent volumes, and more without worrying about the machines themselves.
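To make that concrete, here is a minimal, purely illustrative sketch of what such YAML definitions look like. The names (`my-app`, `my-app-data`), ports, and storage size are hypothetical placeholders, not anything from this cluster:

```yaml
# Hypothetical Service: routes in-cluster traffic on port 80
# to pods labeled app=my-app listening on port 8080.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
---
# Hypothetical PersistentVolumeClaim: asks the cluster for
# 1 GiB of storage without naming any specific disk or node.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

Nowhere in these manifests does a machine, disk, or IP address appear; that is exactly the abstraction at work.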
There’s nothing wrong with that approach—it’s what makes Kubernetes so powerful! But for me, curiosity often keeps me up at night. I started wondering: what would it take to build and run a highly available (HA) Kubernetes cluster at home? Not on a cloud provider, but on my own hardware.
So, I decided to dive in and embrace the challenge. My goal? To set up a Kubernetes HA cluster at home, iterate on the process, and document every step along the way. This isn’t about meeting a deadline or delivering a perfect product—it’s about exploring, learning, and sharing my approach to solving engineering problems.
Planning for Success: Defining the Workload
Before jumping into the hardware setup, I wanted to get a clear sense of what I planned to run on my cluster. After all, this is more than an exercise in hardware tinkering—I wanted to deploy applications, perform experiments, and simulate real-world scenarios like monitoring SLAs and troubleshooting outages. Here’s a shortlist of what I envisioned for my cluster:
- Prometheus: For system monitoring and metrics collection, tracking everything from node resource usage to pod health.
- Grafana: To visualize data from Prometheus and create dashboards for an at-a-glance understanding of the cluster’s status. (Sure, I could write PromQL queries all day, but as Slavoj Žižek might say: "I prefer not to.")
- FluxCD: A lightweight GitOps tool for automating deployments. Having used ArgoCD before, I wanted to explore Flux as a simpler alternative.
- Linkding: A minimal, fast, and self-hosted bookmark manager.
- Pi-Hole: A network-wide ad-blocker.
- Librum: A self-hosted e-book reader with AI features and annotation capabilities.
- Ghost: An open-source publishing platform. Ideally, you’d be reading this post on my self-hosted Ghost instance, but… baby steps.
- Databases: MySQL, MariaDB, and others for managing application backends.
- External-dns: To automate DNS record updates for cluster-exposed services.
- Cert-manager: To manage and automate SSL/TLS certificates.
Given this lineup, the resource requirements weren’t enormous. Even an old laptop could serve as a single-node Kubernetes setup and handle some of these applications. However, I wanted to push the boundaries and aim for a multi-node setup that simulates a more realistic HA cluster. Besides, I’m sure I’ll end up deploying extra workloads over time, even if I can’t say yet what those applications will be.
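Since FluxCD is the piece that ties the rest of the list together, here is a hedged sketch of how a Flux-managed deployment of one of these apps might be declared. The chart name, repository URL, and values are hypothetical, and the exact `apiVersion` strings depend on your Flux release:

```yaml
# Hypothetical HelmRepository source for Flux to pull charts from.
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: example-charts
  namespace: flux-system
spec:
  interval: 1h
  url: https://example.github.io/charts   # placeholder repo URL
---
# Hypothetical HelmRelease: Flux reconciles this chart on a loop,
# so a git commit changing this file is all a deployment takes.
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: linkding
  namespace: default
spec:
  interval: 10m
  chart:
    spec:
      chart: linkding                      # placeholder chart name
      sourceRef:
        kind: HelmRepository
        name: example-charts
        namespace: flux-system
  values:
    persistence:
      enabled: true
```

With manifests like these committed to a repository, Flux keeps the cluster converged on whatever the repo declares, which is the GitOps workflow I want to explore.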
The Hardware: Finding the Right Setup
For my home lab, I started with an HP Z900 workstation, which features:
- Two Intel Xeon X5650 processors (12 cores total)
- 12 GiB of RAM
While this was a solid starting point, I wanted more capacity and redundancy. After some browsing on Facebook Marketplace, I found affordable mini-PCs with these specs:
- Intel Core i5 (8th gen, 6 cores)
- 16 GiB of RAM
- 256 GB of storage
With three of these mini-PCs combined with my workstation, the total cluster resources added up to:
- 60 GiB of RAM
- 30 CPU cores
This setup strikes a balance between cost, performance, and energy efficiency, making it perfect for a home Kubernetes cluster.
What does the cluster look like? Well, here’s a picture:

Why Bother with a Home Kubernetes Cluster?
You might wonder: Why go through all this effort when cloud providers already offer managed Kubernetes solutions? For me, it’s all about the learning experience. Building a cluster from scratch—designing, deploying, monitoring, and troubleshooting it—offers insights that cloud abstraction often hides. It’s a hands-on way to deepen my understanding of Kubernetes and improve my skills as a systems engineer.
In future posts, I’ll cover the installation, configuration, and optimization of my home Kubernetes cluster, including the challenges I faced and the lessons I learned. If you’re curious about Kubernetes or just love tinkering with hardware, stay tuned—I’ll share everything I learn along the way.