Launching a Postgres database
Symbiosis currently doesn't offer managed databases. Instead, we suggest using a Kubernetes operator to manage the database for you.
CloudNativePG is an operator for creating low-maintenance, production-ready Postgres clusters in Kubernetes. CloudNativePG supports operations such as:
- Automatic point-in-time backups
- Built-in monitoring
- Rolling updates
This page will show you how to install CloudNativePG and how to launch a highly available Postgres cluster with just a short and sweet custom resource.
CloudNativePG offers a wide range of features and options that we do not cover in this article; you can find them in the official documentation.
Install the CloudNativePG operator and custom resource definitions using Helm:
```shell
helm repo add cnpg https://cloudnative-pg.github.io/charts
helm upgrade --install cnpg \
  --namespace cnpg-system \
  --create-namespace \
  cnpg/cloudnative-pg
```
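Before creating any clusters, you can confirm the operator is running and its custom resource definitions are registered. This is a quick sanity check; the deployment name below assumes the Helm release name cnpg used above:

```shell
# Wait for the operator deployment to become ready
kubectl -n cnpg-system rollout status deployment/cnpg-cloudnative-pg

# The CRDs (Cluster, Backup, ScheduledBackup, ...) should now be registered
kubectl get crds | grep cnpg.io
```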
Creating a Postgres cluster
Once the Helm chart is installed, create a Postgres database with the manifest below:
```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: example-postgres
  namespace: default
spec:
  instances: 3
  storage:
    size: 10Gi
```
This will provision a primary database with two read replicas, each with 10GiB of persistent storage.
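Provisioning takes a minute or two. You can watch the cluster come up with kubectl (the cnpg.io/cluster label is set by the operator on every instance pod):

```shell
# Watch the cluster resource until all instances are online
kubectl get cluster example-postgres -w

# Inspect the pods belonging to the cluster
kubectl get pods -l cnpg.io/cluster=example-postgres
```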
Accessing your cluster
CloudNativePG will create a database called app by default, with credentials to access it stored in the secret example-postgres-app (named <CLUSTER_NAME>-app).
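For instance, you can read the generated credentials out of that secret with kubectl (the secret name assumes the example-postgres cluster created above):

```shell
kubectl get secret example-postgres-app \
  -o jsonpath='{.data.username}' | base64 -d; echo
kubectl get secret example-postgres-app \
  -o jsonpath='{.data.password}' | base64 -d; echo
```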
Four different services are created to connect to Postgres:
- <CLUSTER_NAME>-any - load balances between all Postgres instances in the cluster (both readable and writeable)
- <CLUSTER_NAME>-r - selects all readable instances, including the primary
- <CLUSTER_NAME>-rw - points to the primary instance, the only one that accepts writes
- <CLUSTER_NAME>-ro - selects the replicas, which are readable but not writeable
These services are not publicly exposed, i.e. they are only accessible from within your Kubernetes cluster.
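As a quick connectivity check, you can run a throwaway psql pod inside the cluster against the read-write service. This is a sketch; the service, secret, and namespace names assume the example-postgres cluster in the default namespace:

```shell
# Fetch the generated password for the default "app" user
PGPASSWORD=$(kubectl get secret example-postgres-app \
  -o jsonpath='{.data.password}' | base64 -d)

# Run a one-off psql client pod against the primary
kubectl run psql-client --rm -it --image=postgres:16 \
  --env="PGPASSWORD=$PGPASSWORD" -- \
  psql -h example-postgres-rw.default.svc -U app -d app -c 'SELECT version();'
```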
You can then launch Pgweb, a minimal web UI for accessing Postgres, by creating a deployment that uses the read-write service with the credentials stored in the example-postgres-app secret:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgweb
  labels:
    app: pgweb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pgweb
  template:
    metadata:
      labels:
        app: pgweb
    spec:
      containers:
        - name: pgweb
          image: sosedoff/pgweb
          ports:
            - containerPort: 8081
          env:
            - name: PSQL_USERNAME
              valueFrom:
                secretKeyRef:
                  name: example-postgres-app
                  key: username
            - name: PSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: example-postgres-app
                  key: password
            - name: DATABASE_URL
              value: postgres://$(PSQL_USERNAME):$(PSQL_PASSWORD)@example-postgres-rw:5432/app?sslmode=disable
```
We can access the UI by port-forwarding to the pod, since it doesn't have an Ingress or LoadBalancer configured:
kubectl port-forward deployment/pgweb 8081:8081
The interface should now be visible at localhost:8081 and connected to your newly created Postgres cluster.
Voilà! We've set up a highly-available Postgres cluster and connected to the primary DB using the provided credentials.
If you are using the kube-prometheus stack in your cluster, you can easily export Postgres metrics:
```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: postgresql-storage-class
spec:
  [...]
  monitoring:
    enablePodMonitor: true
```
The PodMonitor will pull I/O, transaction, storage, and various other metrics directly into your Prometheus instance.
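Once monitoring is enabled, you can verify that the operator created a PodMonitor resource (the PodMonitor CRD comes from the prometheus-operator, so this only works if the kube-prometheus stack is installed):

```shell
# List PodMonitors across all namespaces
kubectl get podmonitors -A
```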
CloudNativePG can be configured to store point-in-time (PIT) backups in various sinks.
Backups require S3-compatible storage, which you can get, for example, by launching a MinIO instance in your cluster.
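The backup configuration references credentials stored in a plain Kubernetes secret. Assuming the secret and key names used in the manifest (aws-creds, ACCESS_KEY_ID, ACCESS_SECRET_KEY), one way to create it is:

```shell
# Create the credentials secret referenced by s3Credentials
kubectl create secret generic aws-creds \
  --from-literal=ACCESS_KEY_ID='<your access key id>' \
  --from-literal=ACCESS_SECRET_KEY='<your secret access key>'
```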
Configure backups with your access key and storage key:
```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: postgresql-storage-class
spec:
  backup:
    barmanObjectStore:
      destinationPath: "<destination path here>"
      s3Credentials:
        accessKeyId:
          name: aws-creds
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: aws-creds
          key: ACCESS_SECRET_KEY
```
Backups can be scheduled by creating a ScheduledBackup resource that points to your cluster. The below configuration will create a backup every midnight UTC.
```yaml
apiVersion: postgresql.cnpg.io/v1
kind: ScheduledBackup
metadata:
  name: backup-example
spec:
  schedule: "0 0 0 * * *"
  backupOwnerReference: self
  cluster:
    name: example-cluster
```
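You can also take a one-off backup at any time. With the cnpg kubectl plugin installed, a sketch would be:

```shell
# Trigger an immediate backup of the cluster (requires the cnpg plugin)
kubectl cnpg backup example-cluster

# List completed and pending backups
kubectl get backups
```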
See the official documentation for how to restore your cluster from a backup.
CloudNativePG is a simple operator for running a production-ready Postgres database. However, there are many other alternatives, such as the ones below: