
nabbisen

Originally published at scqr.net

PostgreSQL 14: Database migration with pg_dump

Summary

PostgreSQL offers native ways to export a database as a backup from an existing server and to import it on another one to restore it.

This post shows how to do it. Thanks to these built-in tools, it is really simple.

It consists of a pair of key commands.

Export (backup)

pg_dump dbname > dumpfile

Import (restoration)

psql dbname < dumpfile

You must create the database you are migrating to beforehand.
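
For orientation, here is the whole flow at a glance. The host names, role name, and database name below (old-db.example.com, postgres, app_owner, appdb) are made-up example values, not part of the steps above; substitute your own.

$ pg_dump -U app_owner -h old-db.example.com appdb > appdb.sql    # export from the old server
$ createdb -U postgres --owner=app_owner appdb                    # on the new server (role app_owner must exist)
$ psql -U app_owner -d appdb -f appdb.sql                         # restore into the new database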

Environment

PostgreSQL 14

Tutorial

Create database on new server

Create user

The user name must be the same as the owner of the existing database. It is shown as {DB_OWNER} below.

$ createuser -U {DB_ADMIN} --port {DB_PORT} --pwprompt {DB_OWNER}

You will be asked for the password of the new user (role):

Enter password for new role: 
Enter it again: 

Then the password of {DB_ADMIN} (the database administrator such as postgres) will be required:

Password:
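
To confirm the role exists before going on, you can list it with psql. This check is an extra step of mine, not required by the migration itself:

$ psql -U {DB_ADMIN} --port {DB_PORT} -c "\du {DB_OWNER}"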

Create database

In the command below, {DB_OWNER} is the user you created above, and {DB_NAME} is the database you are migrating.
Run:

$ createdb -U {DB_ADMIN} --port {DB_PORT} --encoding=UTF8 --locale=C --template=template0 --owner={DB_OWNER} {DB_NAME}

Then the password of {DB_ADMIN} will be required again:

Password:

Now your new database is ready.
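
If you want to double-check, listing the databases should now show the new one with {DB_OWNER} as its owner:

$ psql -U {DB_ADMIN} --port {DB_PORT} -l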

Migrate

Export

Get the data from {DB_NAME} on {OLD_SERVER} and write it to {EXPORTED_FILE}.sql:

$ pg_dump -U {DB_OWNER} -h {OLD_SERVER} {DB_NAME} > {EXPORTED_FILE}.sql
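
Before moving anything, a quick sanity check that the dump completed and looks like plain SQL doesn't hurt:

$ ls -lh {EXPORTED_FILE}.sql     # the size should be plausible for your data
$ head -n 5 {EXPORTED_FILE}.sql  # a plain-format dump starts with pg_dump's header comments
$ tail -n 5 {EXPORTED_FILE}.sql  # and ends with a "database dump complete" comment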

Next, I compressed the exported file and moved it to the new server, rather than importing it directly (see the alternative below).

(Alternative) Direct import from the old server

Alternatively, you might import it into the new server directly with the -h option:

$ psql -U {DB_OWNER} -h {NEW_SERVER} {DB_NAME} < {EXPORTED_FILE}.sql
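
Going one step further, if one machine can reach both servers, you can skip the intermediate file entirely and pipe pg_dump straight into psql (this is the pattern shown in the official docs; a ~/.pgpass file helps so that password prompts don't interrupt the pipe):

$ pg_dump -U {DB_OWNER} -h {OLD_SERVER} {DB_NAME} | psql -U {DB_OWNER} -h {NEW_SERVER} {DB_NAME}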

Transfer

If the file is large, it is better to compress it before transferring it.

$ gzip {EXPORTED_FILE}.sql

Then deliver it to the new server with scp or another transfer tool.

$ scp {EXPORTED_FILE}.sql.gz {NEW_SERVER}:~/
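
Optionally, you can verify that the file arrived intact by comparing checksums on both sides. This is a generic integrity check, nothing PostgreSQL-specific; use sha256sum or your platform's equivalent:

$ sha256sum {EXPORTED_FILE}.sql.gz   # run on the old server, then again on the new server; the hashes must match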

Once the transfer has succeeded, log in to the new server and decompress the file:

$ gunzip {EXPORTED_FILE}.sql.gz
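
As a side note, decompressing first is not strictly required: gunzip can stream the dump straight into psql, which saves disk space on the new server.

$ gunzip -c {EXPORTED_FILE}.sql.gz | psql -U {DB_OWNER} -d {DB_NAME}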

Import

It's the final step!! Supposing you are on the new server, run:

$ psql -U {DB_OWNER} -d {DB_NAME} -f {EXPORTED_FILE}.sql
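
By the way, psql keeps going after an error by default. For a one-shot migration you may prefer to stop at the first failure and wrap the whole restore in a single transaction:

$ psql -U {DB_OWNER} -d {DB_NAME} --set ON_ERROR_STOP=on --single-transaction -f {EXPORTED_FILE}.sql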

The password will be required:

Password for user {DB_OWNER}: 

In my case, the output was:

SET
(...)
 set_config 
------------

(1 row)

SET
CREATE FUNCTION
(...)
CREATE TABLE
CREATE SEQUENCE
ALTER TABLE
(...)
COPY 20
(...)
 setval 
--------
   6820
(1 row)
(...)

ALTER TABLE
(...)
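
After the restore finishes, a quick check that the data actually landed never hurts. For example (some_table is just an illustrative name):

$ psql -U {DB_OWNER} -d {DB_NAME} -c "\dt"                                 # list the restored tables
$ psql -U {DB_OWNER} -d {DB_NAME} -c "SELECT count(*) FROM some_table;"    # compare row counts with the old server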

Happy storing 🕊

Latest comments (3)

ROSHIT RAJAN

Hi.
Thanks for the post. Will this approach work if the on-premise data is in the terabyte range? Approx. 1.5 TB.
We are looking at migrating on-premise data to Azure Postgres running in a container on AKS. As the data is huge, we are looking for a data migration strategy. Let me know if you can provide some insight.

nabbisen

Hi. Unfortunately, I have never tried it with such a large database.

Well, this section in the official docs may help you:

Handling Large Databases

Some operating systems have maximum file size limits that cause problems when creating large pg_dump output files. Fortunately, pg_dump can write to the standard output, so you can use standard Unix tools to work around this potential problem. There are several possible methods:

Use compressed dumps. You can use your favorite compression program, for example gzip:
pg_dump dbname | gzip > filename.gz

Use split. The split command allows you to split the output into smaller files that are acceptable in size to the underlying file system. For example, to make 2 gigabyte chunks:
pg_dump dbname | split -b 2G - filename

Use pg_dump's custom dump format. If PostgreSQL was built on a system with the zlib compression library installed, the custom dump format will compress data as it writes it to the output file. This will produce dump file sizes similar to using gzip, but it has the added advantage that tables can be restored selectively. The following command dumps a database using the custom dump format:
pg_dump -Fc dbname > filename

Use pg_dump's parallel dump feature. To speed up the dump of a large database, you can use pg_dump's parallel mode. This will dump multiple tables at the same time. You can control the degree of parallelism with the -j parameter. Parallel dumps are only supported for the "directory" archive format.
pg_dump -j num -F d -f out.dir dbname
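
For completeness, dumps taken with the custom or directory format above are restored with pg_restore rather than psql, for example:

pg_restore -d dbname filename        # restore a custom-format dump
pg_restore -j num -d dbname out.dir  # restore a directory-format dump in parallel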

ROSHIT RAJAN

Thanks a lot. This is really helpful.