CloudNativePG – Postgres on Kubernetes with ease

Marcel Glaser
10. May 2024
Reading time: 3 min

KubeCon 2024 showcased a range of intriguing new features, tools, and mechanisms, with a particular emphasis on AI. But the one thing in this big AI jungle that stood out to me has nothing to do with AI at all: CloudNativePG.

If you have ever attempted to set up a database within your Kubernetes cluster, you are likely familiar with the challenges and complexities involved, and you know the pain and struggle that come with it. Luckily, today we can rely on operators that make our lives much easier.

CloudNativePG is a Level 5 operator for managing PostgreSQL workloads on any Kubernetes cluster. It has been open source and under the auspices of the Cloud Native Computing Foundation (CNCF) since 2022. CloudNativePG simplifies the deployment, management, and recovery of your PostgreSQL instances on Kubernetes: it leverages Kubernetes Custom Resource Definitions (CRDs), making cluster deployment as straightforward as applying a YAML file to your cluster.

apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: my-example-cluster
  namespace: cnpg
spec:
  instances: 3
  
  storage:
    size: 1Gi

YAML of the Cluster custom resource

Once you have installed CloudNativePG, applying this YAML file creates a “Cluster” resource in your cluster.

The created cluster 

Since we specified 3 instances, we can see that 3 pods are running and 3 services have been created in the namespace “cnpg”:

Running Pods
New services created 

How does CNPG work? 

After you create a Cluster resource, the operator spins up the specified number of instances as Pods. One of these becomes the primary of your Postgres cluster. The service with the “-rw” suffix is attached to this Pod by means of the “role=primary” label.

The other Pods are replicas; they are attached to the read-only service.

RW-service is only connected to primary
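
To make this more concrete, here is a simplified sketch of what the generated “-rw” Service roughly looks like. The exact labels and fields can differ between CloudNativePG versions, so treat this as an illustration rather than the literal generated manifest:

apiVersion: v1
kind: Service
metadata:
  name: my-example-cluster-rw
  namespace: cnpg
spec:
  type: ClusterIP
  selector:
    cnpg.io/cluster: my-example-cluster
    # only the Pod currently acting as primary carries this label,
    # so read/write traffic always reaches the primary
    role: primary
  ports:
    - name: postgres
      port: 5432
      targetPort: 5432

When a different Pod becomes primary, the operator simply moves the label, and the Service follows without any change on the client side.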

What happens when my primary fails?

If a pod or worker node fails, Kubernetes detects this within seconds. The operator then promotes one of the synchronous standby pods to become the new primary. Should the failed pod come back online, it rejoins as a replica and will not regain its primary status unless the new primary also fails.
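
Whether your standbys are actually synchronous is something you opt into on the Cluster resource. Here is a minimal sketch, assuming the minSyncReplicas/maxSyncReplicas fields of the Cluster spec (these are not shown in the example above):

apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: my-example-cluster
  namespace: cnpg
spec:
  instances: 3
  # require at least one standby to confirm each transaction
  # before the primary acknowledges the commit
  minSyncReplicas: 1
  maxSyncReplicas: 1
  storage:
    size: 1Gi

With synchronous replication in place, a promoted standby holds every committed transaction; with asynchronous replication, a failover may lose the last few transactions.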

How does backup and recovery work? 

CNPG performs its backups in two ways (a configuration sketch follows below): 

  • Continuous archiving of the WAL files, at most 5 minutes apart 
  • Base backups (as Kubernetes volume snapshots or to an object store) 
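
Both mechanisms are configured on the Cluster resource itself, typically together with a ScheduledBackup resource for regular base backups. The bucket path and the Secret name below are placeholders made up for illustration, not values from this article:

apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: my-example-cluster
  namespace: cnpg
spec:
  instances: 3
  storage:
    size: 1Gi
  backup:
    retentionPolicy: "30d"
    barmanObjectStore:
      # WAL files and base backups are shipped to this object store
      destinationPath: s3://my-backup-bucket/my-example-cluster
      s3Credentials:
        accessKeyId:
          name: backup-credentials   # hypothetical Secret holding the S3 keys
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: backup-credentials
          key: ACCESS_SECRET_KEY
---
apiVersion: postgresql.cnpg.io/v1
kind: ScheduledBackup
metadata:
  name: my-example-backup
  namespace: cnpg
spec:
  # cron schedule with a leading seconds field: a base backup every night
  schedule: "0 0 0 * * *"
  cluster:
    name: my-example-cluster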

If you need to recover your database, you create a new cluster and point it at your backup. The newly created cluster runs the Postgres restore process and promotes itself once the recovery target is reached.

Recovery with a second cluster
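
Here is a simplified sketch of such a second cluster, bootstrapped from the backups of the original one; it reuses the hypothetical object-store path and Secret from the backup example above:

apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: my-example-cluster-restore
  namespace: cnpg
spec:
  instances: 3
  storage:
    size: 1Gi
  bootstrap:
    recovery:
      # bootstrap from the external cluster defined below
      source: my-example-cluster
      # optionally stop at a point in time instead of replaying all WAL:
      # recoveryTarget:
      #   targetTime: "2024-05-10 12:00:00+00"
  externalClusters:
    - name: my-example-cluster
      barmanObjectStore:
        destinationPath: s3://my-backup-bucket/my-example-cluster
        s3Credentials:
          accessKeyId:
            name: backup-credentials
            key: ACCESS_KEY_ID
          secretAccessKey:
            name: backup-credentials
            key: ACCESS_SECRET_KEY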

The cool thing here is: you can do this across multiple availability zones and across data centers. As a result, you will make the earth your single point of failure. 

I know this article only scratches the surface of CNPG and its capabilities. If it has sparked your interest, I encourage you to explore further and take a closer look. In my opinion, the simplicity it offers is unbeatable. And if you are a database admin looking to migrate to Kubernetes, you will be VERY happy using CloudNativePG!