KubeCon 2024 showcased a range of intriguing new features, tools, and mechanisms, with a particular emphasis on AI. But the one thing in this big AI jungle that stood out to me, and has nothing to do with AI, is CloudNativePG.
If you have ever attempted to set up a database within your Kubernetes cluster, you are likely familiar with the challenges and complexities involved, and with the pain and struggle that come with it. Luckily, today we can rely on operators that make our lives much easier.
CloudNativePG is an open-source Level 5 operator for managing PostgreSQL workloads on any Kubernetes cluster. Originally developed by EDB, it has been open source since 2022. With CloudNativePG, the deployment, management, and recovery of your PostgreSQL instances on Kubernetes are simplified. It leverages Kubernetes Custom Resource Definitions (CRDs), making cluster deployment as straightforward as applying a .yaml file to your Kubernetes cluster.
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: my-example-cluster
  namespace: cnpg
spec:
  instances: 3
  storage:
    size: 1Gi

.yaml of Cluster Custom Resource
Once you have installed CloudNativePG, applying this .yaml file creates a "Cluster" resource in your cluster.
Since we specified 3 instances, we will see that 3 pods are running and that 3 services (with the suffixes "-r", "-ro", and "-rw") have been created in the namespace "cnpg".
After the Cluster resource is created, the operator spins up the specified number of instances as Pods. One of these becomes the primary of your Postgres cluster. The service with the "-rw" suffix is attached to this Pod by making use of the "role=primary" label.
The other Pods are replicas; they are attached to the read-only service.
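To illustrate the label mechanism, the read-write Service generated by the operator looks roughly like the following sketch (abbreviated; recent CloudNativePG releases label the primary with "cnpg.io/instanceRole: primary", while older releases used "role: primary"):

```yaml
# Sketch of the operator-generated "-rw" Service (abbreviated).
apiVersion: v1
kind: Service
metadata:
  name: my-example-cluster-rw
  namespace: cnpg
spec:
  ports:
    - port: 5432
      targetPort: 5432
  selector:
    cnpg.io/cluster: my-example-cluster
    cnpg.io/instanceRole: primary   # older releases: role: primary
```

When a failover happens, the operator simply relabels the Pods, and the Service follows the new primary automatically.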
If a Pod or worker node fails, Kubernetes detects this within seconds. The operator then promotes one of the synchronous standby Pods to become the new primary. Should the failed Pod come back online, it rejoins as a replica and will not regain its primary status unless the new primary also fails.
CNPG does its backups in two ways: as base backups with continuous WAL archiving to an object store (via Barman Cloud), or as Kubernetes volume snapshots.
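As a sketch, enabling object-store backups is a matter of extending the Cluster spec. The bucket path and the "backup-creds" Secret below are assumptions for illustration:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: my-example-cluster
  namespace: cnpg
spec:
  instances: 3
  storage:
    size: 1Gi
  backup:
    retentionPolicy: "30d"
    barmanObjectStore:
      # Hypothetical bucket; credentials come from a Secret named "backup-creds"
      destinationPath: s3://my-backup-bucket/
      s3Credentials:
        accessKeyId:
          name: backup-creds
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: backup-creds
          key: SECRET_ACCESS_KEY
```

With this in place, the operator continuously archives WAL files, and base backups can be triggered on demand or on a schedule.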
If you need to recover your database, you create a new cluster whose bootstrap section points to your backup. CloudNativePG then runs the Postgres restore process, and the newly created cluster promotes itself once the recovery target is reached.
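A minimal sketch of such a recovery cluster, assuming the backups live in an object store; the bucket path, Secret name, and target time are placeholders:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: my-restored-cluster
  namespace: cnpg
spec:
  instances: 3
  storage:
    size: 1Gi
  bootstrap:
    recovery:
      source: my-example-cluster
      recoveryTarget:
        targetTime: "2024-04-30 10:00:00+00"   # hypothetical point-in-time target
  externalClusters:
    - name: my-example-cluster
      barmanObjectStore:
        # Hypothetical bucket holding the base backups and WAL archive
        destinationPath: s3://my-backup-bucket/
        s3Credentials:
          accessKeyId:
            name: backup-creds
            key: ACCESS_KEY_ID
          secretAccessKey:
            name: backup-creds
            key: SECRET_ACCESS_KEY
```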
The cool thing here is that you can do this across multiple availability zones and even across data centers. As a result, the earth becomes your single point of failure.
I know, this article just scratches the surface of CNPG and its capabilities. If it has sparked your interest, I encourage you to explore further and take a closer look at it. In my opinion, the simplicity it offers is unbeatable. And if you have a background as a database admin who wants to migrate to Kubernetes, you will be VERY happy using CloudNativePG!