DEV Community


Using Docker in development the right way

Leandro Proença on June 29, 2021

If you are not proficient in Docker, if topics like containers and virtual machines are still a bit "fuzzy", or if you have problems working with Docker...
Raghavan alias Saravanan Muthu

Hi, thanks for sharing the good insights from your experience on the hottest topic in the industry these days! This page dev.to/leandronsp/thinking-like-co... throws a 404. Please check that.

Leandro Proença

thanks for the feedback! just fixed it, cheers!

Raghavan alias Saravanan Muthu

Great, thank you for the quick action and confirmation.

Venkatesh KL

That's an interesting perspective @leandronsp.
We've used Docker primarily for keeping the same database version across different engineers without any setup dilemma. That worked pretty well for us; however, we recently started noticing too much friction when onboarding a new engineer. So this looks pretty good for our case.
Can you share a sample on how you've achieved complete development in docker?

Giovanni Lenoci

We use Docker + docker-compose in our team (PHP + db + nginx).

We use a Makefile to automate the setup and be up and running with a few commands.
No friction for new dev team members (maybe a little if you're starting from scratch with Docker, but today it's a required skill).
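As an illustration (this is a hypothetical sketch, not the team's actual Makefile — target names and the `php` service name are assumptions), such a Makefile wrapper around docker-compose might look like:

```makefile
# Hypothetical Makefile sketch: wraps docker-compose so a new
# team member only needs `make up` to get a working environment.

.PHONY: build up down logs shell

build:   ## build all service images
	docker-compose build

up:      ## start php + db + nginx in the background
	docker-compose up -d

down:    ## stop and remove the containers
	docker-compose down

logs:    ## follow the logs of all services
	docker-compose logs -f

shell:   ## open a shell inside the php container
	docker-compose exec php sh
```

The point is that the Makefile becomes the single documented entry point, so nobody has to remember compose flags.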

We start from a base Dockerfile that we extend for development (adding, for example, Xdebug support).
We deploy a production image through k8s derived from the same base image.
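A minimal sketch of that base-image pattern (image names and the Xdebug install steps here are assumptions, not the team's actual files): a shared base Dockerfile, and a dev Dockerfile layered on top of it, while production is built from the same base.

```dockerfile
# Dockerfile.base -- shared by dev and prod
# build with: docker build -f Dockerfile.base -t myapp-base .
FROM php:8.2-fpm-alpine
WORKDIR /app
COPY . .

# Dockerfile.dev -- extends the base image with debug tooling
# build with: docker build -f Dockerfile.dev -t myapp-dev .
FROM myapp-base:latest
RUN apk add --no-cache $PHPIZE_DEPS linux-headers \
    && pecl install xdebug \
    && docker-php-ext-enable xdebug
```

Because dev only adds layers on top of the base, the production image deployed to k8s is byte-for-byte the same base the developers run locally.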

We literally deploy the same local environment to prod.
For me it's a life-changing way to work.

Venkatesh KL

Ohh great. I'm still a little curious about a few things, mainly this question: does the dev server also run within Docker, in real time? Don't mind if it's a bad question.
I'm very curious because we have lots of microservices, each holding its own micro front-end, which is handled by a top-level front-end script.
However, it has its own limitations, as we can't run multiple microservices together due to port conflicts in our scripts. So this would be an ideal solution for us, where we'd have multiple Docker containers running in isolation.

Please let me know if you've a working example. Thanks

Giovanni Lenoci • Edited

If you have many microfrontends in prod, I assume each one will be accessible on a different hostname.

You can do the same on your local machine by mapping hostnames in your /etc/hosts (or its equivalent under Windows); every address in the loopback range 127.0.0.1 -> 127.255.255.254 maps to your local machine:

```
127.0.0.2 microfrontend1.app
127.0.0.3 microfrontend2.app
```

In your docker-compose file you can expose every microfrontend this way:

```yaml
node1:
  ports:
    - "127.0.0.2:80:8080"

node2:
  ports:
    - "127.0.0.3:80:8080"
```

node1 will respond at microfrontend1.app in your local browser, and node2 at microfrontend2.app.

In this configuration you get rid of the "port already in use" limitation, because the two services sit on two different host addresses.

Hope this helps
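Putting the two snippets together, a minimal docker-compose.yml for this setup could look like the following sketch (the image names are placeholders, and it assumes the /etc/hosts entries above are in place):

```yaml
# Hypothetical docker-compose.yml: two micro front-ends, each bound
# to its own loopback address so both can listen on port 80.
services:
  node1:
    image: microfrontend1:dev   # placeholder image name
    ports:
      - "127.0.0.2:80:8080"     # browse to http://microfrontend1.app
  node2:
    image: microfrontend2:dev   # placeholder image name
    ports:
      - "127.0.0.3:80:8080"     # browse to http://microfrontend2.app
```

Each container still listens on 8080 internally; only the host-side binding differs, which is what removes the port conflict.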

Venkatesh KL

That makes sense. Thanks a ton 👏

Venkatesh KL

The only difference is that we have a proxy service at the top level which hides all the micro apps from the external world, so that's something I'll have to consider while trying it out.
I'll give it a shot, thanks for the motivation 👏
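For what it's worth, that top-level proxy can itself be one more container on the same compose network; a hypothetical nginx config routing the two hostnames to the app containers (service names `node1`/`node2` as in the compose example above are assumptions) might look like:

```nginx
# Hypothetical nginx.conf fragment for a proxy container that is the
# only service exposed to the outside world.
server {
    listen 80;
    server_name microfrontend1.app;
    location / {
        proxy_pass http://node1:8080;   # compose service name as upstream
        proxy_set_header Host $host;
    }
}

server {
    listen 80;
    server_name microfrontend2.app;
    location / {
        proxy_pass http://node2:8080;
        proxy_set_header Host $host;
    }
}
```

With this, only the proxy publishes a host port and the micro apps stay internal, matching the production topology.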

Rob Higgins

I've been working on my own version of Docker dev environments in my very limited free time (there's no documentation): github.com/freshstacks/home. This is basically where I keep my evolving dev environment, with vim/tmux/vscode/zsh configuration and shortcuts baked in. It's the first time I've shared it. Notable lessons learned:

1. Create a Docker volume first and run all operations inside that volume (I use a custom git clone command that saves repos by owner/project inside the volume); dockerize services and force them to also run inside this volume. HUGE speed increase compared to a host bind mount.
2. To use Docker-in-Docker as a non-root user, attach the Docker socket at a different path (/var/run/host.docker.sock); then, after you start the dev container, run a docker exec as root and use socat to proxy the socket to the normal location with the "nonroot" user's permissions.
3. You can use VSCode to attach to the dev container, or, when feeling old school, fire up tmux and vim with all my plugins and config.
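A rough command-line sketch of lesson 2 (the image name `my-dev-image`, the container name, and the user name "nonroot" are placeholders taken from the comment; exact flags depend on your setup, so treat this as an assumption-laden outline rather than the repo's actual scripts):

```sh
# Start the dev container with the host's Docker socket mounted at a
# non-standard path, leaving the default path free.
docker run -d --name devbox \
  -v /var/run/docker.sock:/var/run/host.docker.sock \
  my-dev-image

# Then, as root inside the container, use socat to re-expose the socket
# at the usual path, owned by the unprivileged "nonroot" user.
docker exec -u root -d devbox \
  socat "UNIX-LISTEN:/var/run/docker.sock,fork,user=nonroot,group=nonroot" \
        "UNIX-CONNECT:/var/run/host.docker.sock"
```

After that, the nonroot user inside the container can run the docker CLI against /var/run/docker.sock without being in the root group.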

Venkatesh KL

Let me check it out
Thanks

Lucas Macedo

Great post. Thank you!