Andrzej Górski

Originally published at andrzejgor.ski

Big PostgreSQL Problems: Introduction

This is the first post in a series about problems I ran into while working with rather big PostgreSQL databases. In it, I'll lay out my assumptions about database size and quote a few facts from the PostgreSQL documentation.

How big does a table or database have to be to be considered really “big”?

There’s no hard definition here, so what follows are purely my own assumptions.

For a single table, I’d say it can be considered big when it approaches 100 million rows. But of course, that depends on the row size itself. Things will be completely different for a table with two or three simple integers compared with, say, twenty text columns loaded with heavy data.
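
If you’re wondering where your own tables stand, here’s a minimal sketch of how I’d check (the `orders` table name is just a placeholder):

```sql
-- Approximate row count from the planner's statistics
-- (fast, no full table scan; 'orders' is a placeholder name).
SELECT reltuples::bigint AS estimated_rows
FROM pg_class
WHERE relname = 'orders';

-- On-disk size of the table, including its indexes and TOAST data.
SELECT pg_size_pretty(pg_total_relation_size('orders')) AS total_size;
```

`reltuples` is only an estimate, refreshed by `VACUUM` and `ANALYZE`, but for tables of this size an exact `COUNT(*)` would be painfully slow anyway.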

On the database level – in my opinion, 100 GB is quite a large database. But, again, it depends on how many tables there are and how heavy single rows are.
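
This one is easy to check for yourself, since PostgreSQL ships the size functions out of the box:

```sql
-- Size of the database you're connected to.
SELECT pg_size_pretty(pg_database_size(current_database()));

-- Or all databases in the cluster, largest first.
SELECT datname,
       pg_size_pretty(pg_database_size(datname)) AS size
FROM pg_database
ORDER BY pg_database_size(datname) DESC;
```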

What are the PostgreSQL limits?

Now we’re in a much better position, as the limits are documented in the official PostgreSQL FAQ. So, quoting the FAQ (with a quick sanity check of the table-size figure after the list):

  1. Maximum size for a database? unlimited (32 TB databases exist)
  2. Maximum size for a table? 32 TB
  3. Maximum size for a row? 400 GB
  4. Maximum size for a field? 1 GB
  5. Maximum number of rows in a table? unlimited
  6. Maximum number of columns in a table? 250-1600, depending on column types
  7. Maximum number of indexes on a table? unlimited
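
As a side note, the 32 TB table limit isn’t arbitrary: a table is addressed as up to about 2³² pages of `block_size` bytes each, and the default block size is 8 kB, which multiplies out to 32 TB. You can verify both numbers on your own server (this assumes a stock build with the default 8 kB pages):

```sql
-- Page size the server binary was compiled with (8192 bytes by default).
SHOW block_size;

-- 2^32 pages * 8 kB per page = 32 TB, matching the FAQ figure above.
SELECT pg_size_pretty(pow(2, 32)::numeric * 8192) AS max_table_size;
```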

That’s all for today

It wasn’t a very long post, was it? But no worries, the next articles in the series will be more substantial (at least I hope so 🙂). This one is mainly meant as a common reference point for the posts that follow.
