K8s Home Lab - GitOps with FluxCD

Welcome to Part III of my K8s home lab blog series! At this point, we have a fully functioning Kubernetes cluster capable of running workloads. Now, instead of manually deploying Helm charts or Kubernetes manifests into our cluster, we’re going to adopt a more streamlined and efficient method: the GitOps approach.
What is GitOps?
GitOps is an operational framework that leverages DevOps practices such as Continuous Integration (CI), Continuous Delivery (CD), and version control. The core idea revolves around maintaining the desired state of your infrastructure in a Git repository, enabling changes to be applied simply by pushing updates to the repository.
In our case, this means having a dedicated Git repository to store and manage our cluster configuration. This repository will define everything from the applications running on the cluster to monitoring tools and infrastructure management. By adopting GitOps, any changes to the cluster—whether it’s adding new resources, updating existing ones, or removing unused components—can be applied automatically by pushing the updated configurations to the Git repository.
This approach eliminates the need for manual deployment and resource management, making the process more efficient, reliable, and consistent. With GitOps, your infrastructure is managed declaratively, providing a single source of truth that ensures the cluster’s state matches the repository’s configuration.
GitOps in Kubernetes
When it comes to GitOps, we could use a variety of tools to make this happen. Among the most popular are ArgoCD and FluxCD. ArgoCD has gained significant traction over the years due to its user-friendly UI that allows you to monitor application health, logs, and details. However, for this guide, we’ll explore FluxCD due to its simplicity, lightweight nature, and adaptability to our use case.
With FluxCD, we can use a public GitHub repository to store the desired state of our cluster. This not only enables transparency but also demonstrates what we’re hosting and how we’re managing it.
FluxCD Setup
According to the official FluxCD documentation:
Flux is constructed with the GitOps Toolkit components, which is a set of:
- Specialized tools and Flux controllers
- Composable APIs
- Reusable Go packages for GitOps under the fluxcd GitHub organization
for building Continuous Delivery on top of Kubernetes.
When we bootstrap FluxCD in our cluster, these components are installed and configured to sync the cluster state with a repository, such as GitHub.
Bootstrapping FluxCD
Before starting, ensure you have the following:
- A functional Kubernetes cluster: This serves as the platform for your workloads.
- GitHub Access Token: Required for FluxCD to interact with your repository.
- FluxCD CLI: Install the CLI by following the FluxCD installation guide.
Step 1: Export GitHub Credentials
Export your GitHub credentials to authenticate FluxCD with GitHub:
export GITHUB_TOKEN=<your-token>
export GITHUB_USER=<your-username>
Step 2: Structuring the Repository
To maintain a clean and scalable setup, organize your repository as a monorepo. This structure consolidates all configurations and resources into a single location, making it easier to manage multiple clusters. My GitHub repository, homelab, follows this structure, as recommended by the FluxCD documentation:
├── apps
│   ├── base
│   ├── production
│   └── staging
├── infrastructure
│   ├── base
│   ├── production
│   └── staging
└── clusters
    ├── production
    └── staging
This structure helps separate general configurations, infrastructure, and cluster-specific configurations for better management.
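The layout above can be scaffolded in one go. The directory names come straight from the tree shown; the .gitkeep files are just a common trick, since Git won't track empty directories:

```shell
# Create the monorepo layout recommended by the FluxCD docs
mkdir -p apps/base apps/production apps/staging \
         infrastructure/base infrastructure/production infrastructure/staging \
         clusters/production clusters/staging

# Git ignores empty directories, so add a placeholder file to each leaf
touch apps/base/.gitkeep apps/production/.gitkeep apps/staging/.gitkeep \
      infrastructure/base/.gitkeep infrastructure/production/.gitkeep infrastructure/staging/.gitkeep \
      clusters/production/.gitkeep clusters/staging/.gitkeep
```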
Step 3: Pre-Check with FluxCD
Run the following command to ensure all prerequisites are met:
flux check --pre
If the output resembles this, you’re ready:
► checking prerequisites
✔ Kubernetes 1.31.4+k3s1 >=1.28.0-0
✔ prerequisites checks passed
Step 4: Bootstrapping FluxCD
Bootstrap FluxCD with the following command:
flux bootstrap github \
  --owner=$GITHUB_USER \
  --repository=homelab \
  --branch=main \
  --path=./clusters/kthulu \
  --personal
This command:
- Installs FluxCD controllers into your Kubernetes cluster.
- Creates a Git repository named homelab under your GitHub account.
- Adds FluxCD component manifests to the repository.
- Configures FluxCD to track the ./clusters/kthulu directory.
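Once the bootstrap finishes, it's worth confirming that the controllers actually came up before moving on. Pod names and counts will vary with your Flux version:

```shell
# List the Flux controllers installed by the bootstrap
kubectl get pods -n flux-system

# Or let Flux run its own health checks
flux check
```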
Deploying Your First Application
With FluxCD operational, let's deploy a sample application: linkding, a self-hosted bookmark manager.
Application Directory Structure
Organize your repository as follows:
├── apps
│   ├── base
│   └── kthulu
Each application will have its own directory under apps/base, while environment-specific settings will reside under apps/kthulu.
Defining Manifests
Namespace
Create the namespace for the application:
apps/base/linkding/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: linkding
Deployment
Define the deployment for linkding with a Persistent Volume Claim:
apps/base/linkding/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: linkding
spec:
  replicas: 1
  selector:
    matchLabels:
      app: linkding
  template:
    metadata:
      labels:
        app: linkding
    spec:
      securityContext:
        runAsUser: 33
        runAsGroup: 33
        fsGroup: 33
      volumes:
        - name: linkding-data-pvc
          persistentVolumeClaim:
            claimName: linkding-data-pvc
      containers:
        - name: linkding
          image: sissbruecker/linkding:1.31.0
          securityContext:
            allowPrivilegeEscalation: false
          volumeMounts:
            - name: linkding-data-pvc
              mountPath: /etc/linkding/data
          ports:
            - containerPort: 9090
Persistent Volume Claim
Allocate 1Gi of storage:
apps/base/linkding/persistent-volume-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: linkding-data-pvc
spec:
  resources:
    requests:
      storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
Service
Expose the application internally:
apps/base/linkding/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: linkding
spec:
  selector:
    app: linkding
  ports:
    - port: 9090
  type: ClusterIP
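Since the service is ClusterIP-only, it isn't reachable from outside the cluster yet. Until an ingress exists, a quick way to try the app from your workstation is a temporary port-forward (the namespace, service name, and port are the ones defined in the manifests above):

```shell
# Forward the linkding service to localhost; Ctrl+C to stop
kubectl port-forward -n linkding svc/linkding 9090:9090

# Then browse to http://localhost:9090
```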
Kustomization
Aggregate all manifests in a kustomization.yaml file; this tells FluxCD which resources to create in the cluster:
apps/base/linkding/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: linkding
resources:
  - namespace.yaml
  - deployment.yaml
  - persistent-volume-claim.yaml
  - service.yaml
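Before committing, you can render the kustomization locally and review exactly what Flux will apply. Assuming you run this from the repository root with a reasonably recent kubectl (which bundles Kustomize):

```shell
# Print the fully rendered manifests without touching the cluster
kubectl kustomize apps/base/linkding
```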
Cluster-Specific Configuration
Add environment-specific configurations under apps/kthulu. In the next blog post, this is where we'll add an ingress resource specific to this cluster:
apps/kthulu/linkding/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: linkding
resources:
  - ../../base/linkding
Finally, reference this in the cluster configuration. This is a one-time step: every new app defined under apps/kthulu/ will be synced by this file from now on:
clusters/kthulu/apps.yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m0s
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./apps/kthulu
  prune: true
  wait: true
  timeout: 5m0s
Deploy and Verify
Commit and push the changes:
git add .
git commit -m "feat: Adding Linkding"
git push origin main
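Flux polls the repository on the interval configured in the Kustomization (10 minutes here), so the change may not apply instantly. If you don't want to wait, you can trigger a sync by hand:

```shell
# Fetch the latest commit and reconcile the apps Kustomization immediately
flux reconcile kustomization apps --with-source
```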
Verify the sync status:
➜ flux get kustomizations
NAME REVISION SUSPENDED READY MESSAGE
apps main@sha1:1e2a851a False True Applied revision: main@sha1:1e2a851a
flux-system main@sha1:1e2a851a False True Applied revision: main@sha1:1e2a851a
This means the resources were synced to the cluster. Let's verify that:
➜ kubectl get all -n linkding
NAME READY STATUS RESTARTS AGE
pod/linkding-75474cfb98-79j7g 1/1 Running 0 23h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/linkding ClusterIP 10.43.192.99 <none> 9090/TCP 23h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/linkding 1/1 1 1 23h
NAME DESIRED CURRENT READY AGE
replicaset.apps/linkding-75474cfb98 1 1 1 23h
And just like that, we’ve deployed our first application using FluxCD! The beauty of this approach lies in its simplicity and security. Developers or teams deploying applications no longer need direct access to the cluster itself. Instead, they simply push configuration changes to the designated Git repository. This not only ensures stricter access controls but also streamlines the entire deployment process.
Additionally, migrations and disaster recovery become effortless. For example, if I were to lose my cluster today, I could simply reinstall k3s, bootstrap it with my existing repository, and voilà—my entire cluster setup would be back online in no time. It’s a game-changer, and one of the many reasons I’m passionate about Kubernetes.
Conclusion
We’ve successfully set up GitOps with FluxCD and deployed our first application. GitOps brings declarative, automated, and version-controlled cluster management to life. By leveraging FluxCD, we’ve achieved consistent, reliable deployments while reducing complexity.
Looking ahead, our next blog post will focus on securely exposing services from your Kubernetes cluster using Cloudflared tunnels. We’ll dive into how to keep your services accessible while maintaining robust security. Stay tuned—it’s going to be an exciting exploration!