Docker/DB - Isn't it better to set up a shared directory between the host and the DB container?

While maintaining an application after development, an idea occurred to me:

"Wouldn't it be better to set up a shared directory between the host and the DB container?"

For example:


    services:
      mysql:
        image: mysql:8.0.29
        container_name: mysql
        environment:
          MYSQL_DATABASE: myapp01
          MYSQL_USER: user
          MYSQL_PASSWORD: password
          MYSQL_ROOT_PASSWORD: password
        ports:
          - "3306:3306"
        volumes:
          - mysql-data:/var/lib/mysql
          - ./shared_db:/shared_db  # shared directory between host and DB container

    volumes:
      mysql-data:

The reason: after launching the service, this makes it easy to import a dump file from the production environment into the local environment.

For example, there are many cases where you want to test with the same data as production, because some errors cannot be reproduced in the local environment and only occur under specific conditions.

In such cases, you have to import a dump file that was exported from the production environment or produced by a periodic batch job.
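As a sketch, the dump on the production side might be produced with something like the following (the database name and credentials are placeholders, not from a real environment):

```shell
# Export the production database to a dump file
# (database name and user are examples)
mysqldump -u user -p myapp01 > prod_dump.sql
```

This file is what you then need to get into your local environment somehow.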

But sometimes this task can be very tedious and complicated.

These days, it is common to let developers use a variety of operating systems such as macOS, Windows, or Linux, rather than mandating a specific machine for development.
Container technology like Docker helps make this possible.

In that case, to execute the import command inside the container (MySQL, PostgreSQL, or another DB), the dump file must be stored somewhere the container can access.
(There might be a better approach that reads the dump file directly from the host, but I couldn't find one.)

To get the file from the host into the container, you need some transfer mechanism or shared storage that both sides can access.

Another approach is to execute the import command from the host, but the host's shell will drive you crazy.

PowerShell in particular is special in many ways. For example, you can't use the "<" character for input redirection because it is reserved.
So you have to find a way around this problem.
(And even after I solved it, the file still had to be stored somewhere the container could access. It was a waste of time.)
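For reference, here are two workarounds I'm aware of (not from the original article); the container name, database name, credentials, and file name are all placeholders:

```shell
# Option 1: route the "<" redirection through cmd.exe,
# which does not reserve that character (Windows host)
cmd /c "docker exec -i mysql mysql -u user -ppassword myapp01 < dump.sql"

# Option 2: in PowerShell, pipe the file content instead of redirecting.
# Note: piping through PowerShell may change the character encoding,
# which can cause exactly the encoding errors described below.
#   Get-Content .\dump.sql | docker exec -i mysql mysql -u user -ppassword myapp01
```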

Another option is to use a DB client application such as MySQL Workbench. But there are differences between operating systems, and even the same application behaves differently on each OS. Sharing this knowledge with your team then becomes tedious.

In the first place, even though we use virtualization software such as Docker precisely so we can ignore each developer's host environment, it is not efficient to have operations differ depending on each developer's OS.

Also, even if you successfully execute the import command, you may then struggle with character-encoding errors.

I wrote about my research, including failed attempts:

MySQL/Windows/Docker - How to import a dump file to MySQL container

The clean solution is to "set up a shared directory between the host and the DB container," as I suggested at the beginning.

This makes your explanation clearer and shorter when you need to teach developers how to import a dump file into their local environment.
You only have to tell them: "Store the file in the shared directory between the DB container and the host, and execute the import command from inside the container."
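Concretely, with a shared directory like the one in the compose file above, the workflow might look like this (the file name, database name, and credentials are examples):

```shell
# 1. On the host: drop the dump into the shared directory
cp prod_dump.sql ./shared_db/

# 2. Run the import from inside the container, where the shared
#    directory is mounted at /shared_db, so the shell redirection
#    happens in the container's shell, not the host's
docker compose exec mysql sh -c \
  'mysql -u user -ppassword myapp01 < /shared_db/prod_dump.sql'
```

Because the redirection runs inside the container's `sh`, the host shell (PowerShell, bash, zsh, etc.) never touches the file contents.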

I believe this is the best way: it is not affected by shell differences or the host OS.

In addition, no matter how much developers edit docker-compose.yml, it affects only their local environment.
Problems can be resolved without any impact on the production or staging environment.

However, I have never seen anyone else make this claim, and if I put the above setting in docker-compose.yml, I would probably be asked, "What is this setting for?"
So I decided to write this post to explain how useful it is, and how many problems I ran into when I didn't include it.
