Joel Ndoh

How to prevent data loss when a Postgres container is killed or shut down.

I don't like to make my blog posts longer than they need to be, so I'll go straight to the point.

The short answer to this problem is a Docker volume.

Why Docker Volume

The main concept behind using Docker volumes is to keep the data outside the container's writable layer. The volume lives on the host and is mounted into the container's file system, so the container reads and writes that data directly on the host. Even if the container is killed, shut down, or deleted, the data in the volume stays intact, and any new container that mounts the same volume picks it up immediately.

How To Create Docker Volume for PostgreSQL

  1. To create a Docker volume for PostgreSQL, use the following command:
 docker volume create pgdata

This will create a new Docker volume called pgdata that you can use to store the data for your PostgreSQL container.
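You can confirm that the volume exists and see where Docker keeps it on the host:

docker volume ls
docker volume inspect pgdata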

  2. Start a new PostgreSQL container using the following command (the official postgres image requires a superuser password via the POSTGRES_PASSWORD environment variable; mysecretpassword below is just a placeholder):
docker run -d --name postgresql -e POSTGRES_PASSWORD=mysecretpassword -v pgdata:/var/lib/postgresql/data postgres:latest

This command starts a new PostgreSQL container called postgresql and mounts the pgdata volume at the /var/lib/postgresql/data directory in the container, which is where Postgres keeps its data. The data therefore lives in the pgdata volume, outside the container itself, and will persist even if the container is deleted or restarted.
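To see the persistence in action, you can create some test data once the container has finished starting (psql connects as the default postgres superuser; the demo table is just an example):

docker exec -it postgresql psql -U postgres -c "CREATE TABLE demo (id int); INSERT INTO demo VALUES (1);"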

If the container is deleted or restarted, you can simply start a new container and mount the pgdata volume at /var/lib/postgresql/data again, and your data will still be there. You do not need to create backups of your data or copy it back into the container manually.
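For example, you can remove the container completely and start a fresh one with the same volume, and the test data from above will still be there (POSTGRES_PASSWORD is only used when the data directory is first initialized, so the existing data is untouched):

docker rm -f postgresql
docker run -d --name postgresql -e POSTGRES_PASSWORD=mysecretpassword -v pgdata:/var/lib/postgresql/data postgres:latest
sleep 5  # give Postgres a moment to start accepting connections
docker exec -it postgresql psql -U postgres -c "SELECT * FROM demo;"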

Also, you can check the Docker docs for more information on volumes.
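If you run your services with Docker Compose, the same idea carries over; here is a minimal sketch (the db service name and the password are just examples):

cat > docker-compose.yml <<'EOF'
services:
  db:
    image: postgres:latest
    environment:
      POSTGRES_PASSWORD: mysecretpassword
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
EOF
docker compose up -d

Declaring pgdata under the top-level volumes key makes it a named volume, so it survives docker compose down just like the volume created manually above (unless you pass the -v flag, which deletes named volumes).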

I usually use Docker volumes in development, but I haven't really considered how important they are in production, since I wouldn't be running Postgres in a container there. What about you? Tell me what you think in the comment section.

Also, if you found this post helpful, please consider giving it a like or sharing it with others who may be facing a similar issue. Thanks for reading!

Top comments (4)

Joel Ndoh

Having to re-insert data every time you restart the Postgres container can be annoying.

Docker Volume is very important here.

Also, while using a Kubernetes pod, you may find it interesting to know that there is something called a Persistent Volume (PV) and a Persistent Volume Claim (PVC). They serve the same purpose as Docker volumes.

The PVC-PV relationship is a way for users to request storage resources from the administrator and for Kubernetes to manage those resources in a way that is scalable, flexible, and reliable.
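A minimal sketch of a PVC, assuming the cluster has a default StorageClass (the name and size are just examples):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pgdata-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF

The pod spec then references the claim and mounts it at /var/lib/postgresql/data, the same way the Docker volume is mounted above.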

Sergiy

Why don't you use an RDS cluster or something similar from another cloud provider?

Joel Ndoh

Yes, in production it is always advisable to use a managed Postgres instance from a cloud provider.

However, in development it is easier to use a containerized Postgres instance. It makes testing and setup in Docker very easy.

Sergiy

Thanks for sharing your knowledge. I prefer using containerized Postgres only locally or in the CI/CD process for running tests. For non-production environments like QA/staging, I prefer using the same infrastructure as in production but at a smaller scale. RDS databases on t3.small instances are pretty cheap. That lets us get pre-deploy testing results close to production.