
Why run PostgreSQL on Kubernetes?

More and more teams are exploring how to run stateful applications, especially databases, on Kubernetes, and there's good reason for the interest. Our recent hands-on workshop on running PostgreSQL on Kubernetes filled up fast, confirming just how hot the topic has become. So what's driving this shift, and what does it take to run PostgreSQL on Kubernetes the right way?

To find out, we speak with two specialists: Guy Gyles, Senior DBA and Tech Lead at Zebanza, and Dennis Grigaliunas, Technical Sales Engineer at our partner Piros, a specialist in open-source platforms such as OpenShift.

In this blog, you’ll discover:

  • Why Kubernetes has become a strong, production-ready platform for running stateful databases like PostgreSQL
  • How operators such as CloudNativePG act as a “robot DBA,” automating deployment, failover, and lifecycle management
  • How automation is reshaping the DBA role, shifting focus from repetitive tasks to high-value expertise

Why is the shift happening now?

For years, databases and containers didn’t mix well. “Databases must store data persistently,” Guy explains. “You can’t treat them like stateless apps that can be shut down and restarted at will. The tooling simply wasn’t mature enough for production.”

That has changed. Kubernetes, and especially the rise of database operators, has evolved to the point where enterprises can run databases without compromising reliability or performance.

Dennis adds that modern applications are driving the shift: “We’ve moved beyond monolithic architectures. Today’s cloud-native apps are built for portability and scalable growth, and their databases need to keep up. When you decouple your database from your application, each layer can scale independently.”

This new architecture offers two major benefits:

  1. High availability and self-healing: If a container fails, Kubernetes automatically brings up a replacement.
  2. Portability: Platforms like OpenShift allow the same configuration across on-premise, AWS, Azure, and more, helping avoid vendor lock-in and enabling true workload mobility.

What do Kubernetes operators do?

Operators are the key to running production-grade databases on Kubernetes. Think of an operator as a highly skilled, automated DBA embedded in your cluster.

Our preferred choice is CloudNativePG, an open-source operator created by EDB.

“You don’t tell an operator what to do step-by-step,” Guy notes. “You give it a blueprint, a YAML file describing your production-ready PostgreSQL cluster, and it handles the implementation.”
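To make the "blueprint" idea concrete, here is a minimal sketch of what such a declarative cluster definition looks like with CloudNativePG. The cluster name, image tag, and storage size are illustrative placeholders, not recommendations:

```yaml
# A minimal CloudNativePG Cluster manifest (illustrative values).
# You declare the desired state; the operator creates and manages
# the primary, the replicas, and their storage.
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: pg-example        # placeholder name
spec:
  instances: 3            # one primary, two replicas
  imageName: ghcr.io/cloudnative-pg/postgresql:16
  storage:
    size: 20Gi            # persistent volume per instance
```

Applying this with `kubectl apply -f` is all it takes; the operator handles initialization, replication, and ongoing reconciliation from there.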

But the real power of the operator is what happens afterward: If the primary database experiences an issue, the operator automatically triggers failover to an up-to-date replica—often in under a minute—with no data loss and minimal user impact.


What does “production-ready” actually mean?

Running PostgreSQL in production means meeting demanding SLAs, maintaining security, and ensuring consistent reliability.

For mission-critical workloads, especially in industries like finance or healthcare, the architecture must be able to guarantee zero data loss. “You can’t lose a single transaction,” Guy emphasizes. “That means synchronous replication and at least three nodes: one primary and two replicas.”
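In CloudNativePG, that three-node, zero-data-loss architecture can be expressed declaratively. The sketch below assumes the quorum-based synchronous replication fields (`minSyncReplicas`/`maxSyncReplicas`); the cluster name and storage size are placeholders:

```yaml
# Sketch of a synchronous-replication cluster (illustrative values).
# With one synchronous replica required, a commit is only acknowledged
# once it is safely written to a standby as well as the primary.
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: pg-ha             # placeholder name
spec:
  instances: 3            # one primary and two replicas
  minSyncReplicas: 1      # never acknowledge without a sync standby
  maxSyncReplicas: 1      # quorum size for synchronous commit
  storage:
    size: 50Gi
```

The trade-off is latency: each transaction waits for a replica acknowledgment, which is exactly the price mission-critical workloads pay for guaranteed zero data loss.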

But architecture alone isn’t enough. Dennis explains: “When a critical bug hits, you can’t rely on community forums. Real SLAs require enterprise-grade support.”

That’s why we recommend combining:

  • Red Hat OpenShift (enterprise Kubernetes), and
  • EDB Postgres (enterprise-supported PostgreSQL)

This ensures the entire stack, from infrastructure to database, is backed by experts.

Will this replace DBAs?

Short answer: No.

“The DBA role isn’t disappearing, it’s evolving,” Guy says. Automation eliminates repetitive tasks, allowing DBAs to focus on design, tuning, performance optimization, and solving complex data challenges.

Some challenges, however, are organizational rather than technical. Dennis notes: “The Kubernetes knowledge gap is still large, and internal culture can be a bigger blocker than technology. Teams need a DevOps mindset where sysadmins, developers, and DBAs collaborate closely.”

Database expertise meets platform power

This is exactly why the partnership between Zebanza and Piros works so well.

“We’re two sides of the same open-source coin,” Dennis explains. “We deliver cloud-ready platforms like OpenShift, but applications need strong, reliable databases.” Guy agrees: “We understand databases inside and out, but they depend on solid infrastructure. Working together lets us deliver full, end-to-end solutions.”

Ready to get started?

If you’re considering PostgreSQL on Kubernetes, both Guy and Dennis offer simple advice: Learn the fundamentals, and don’t wait too long to begin.

“Get to know the concepts first,” Guy suggests. “And be open to the new technology,” Dennis adds. “Containerization is where the industry is headed. Sooner or later, your most important applications will run this way, and you’ll want to be prepared.”

Ready to explore how PostgreSQL on Kubernetes can transform your data infrastructure? Let’s take the next step together.

Do you want to know more about our services? Discover them here.

