
Discussion on: Docker everywhere is not a good thing.

ohffs

We're in the process of moving over to docker swarm for our prod stuff - we've been using it for QA for about a year.

We mostly started using it because we have quite a few apps (around 30) spread across PHP 5.3, PHP 7.2, Python 2.x and 3.x, Go, and every Node version under the sun, and it was becoming a real hassle to keep them all updated and working with the various OS releases. We also wanted to get away from our creaky VM infrastructure, which had been sprouting VMs like crazy just to avoid dealing with version conflicts.
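
Each app carrying its own runtime is what makes that manageable - a stack file ends up looking roughly like this (service names, image tags and the command are illustrative, not our actual apps):

```yaml
version: "3.8"

services:
  # Legacy app pinned to an old PHP runtime; genuinely ancient versions
  # like 5.3 need a custom base image, the tag here is just an example.
  legacy-reports:
    image: php:7.2-apache
    networks:
      - internal

  # Newer Python app on its own interpreter version, in the same swarm.
  scheduler:
    image: python:3.11-slim
    command: ["python", "-m", "http.server", "8000"]
    networks:
      - internal

networks:
  internal:
    driver: overlay
```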

To answer your specific questions: on the data side we're fairly traditional - an RDBMS with decent passwords, reachable over our internal LAN only. Storage for the newer apps goes to a local Minio cluster, and some older ones are still on NFS (though we're looking at REX-Ray to see if that works out).
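
For the NFS-backed apps, a plain local-driver volume with NFS options does the job under swarm - a rough sketch, with made-up hostnames and paths:

```yaml
version: "3.8"

services:
  some-app:
    image: registry.internal.example/some-app:latest
    volumes:
      - app-uploads:/var/www/storage
    environment:
      # Newer apps skip the mount entirely and talk S3 to the Minio
      # cluster; the variable name depends on the app's S3 client.
      S3_ENDPOINT: "http://minio.internal.example:9000"

volumes:
  app-uploads:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=nfs.internal.example,rw,soft"
      device: ":/exports/app-uploads"
```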

The DB setup we've ended up with is a swarm overlay network with mysql-router in it - all of the apps just talk to that as if it were 'the real' DB, and mysql-router takes care of passing the traffic out to an 'actually real' traditional MySQL cluster.
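
In stack-file terms that looks roughly like this - the image tag, MYSQL_* variable names and ports are how I remember the router image being configured, so treat it as a sketch rather than our exact config:

```yaml
version: "3.8"

networks:
  db:
    driver: overlay

services:
  mysql-router:
    image: mysql/mysql-router:8.0
    environment:
      MYSQL_HOST: mysql-primary.internal.example  # the 'actually real' cluster outside the swarm
      MYSQL_PORT: "3306"
      MYSQL_USER: routerbootstrap
      MYSQL_PASSWORD: "changeme"  # in practice this would come from a Docker secret
    networks:
      - db

  an-app:
    image: registry.internal.example/an-app:latest
    environment:
      DB_HOST: mysql-router  # apps treat the router as 'the real' DB
      DB_PORT: "6446"        # MySQL Router's default read-write port
    networks:
      - db
```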

We've only just started looking at automatic container scanning - but compared to our old 'just install it and leave it running for five years' approach, almost everything we do now is an improvement!

This is pretty much all behind the corporate firewall, with just a few web services exposed to the outside world via Traefik, and we monitor those like we would any other internet-facing app - nothing special has changed for us just because they're in containers.
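
Exposing one of them is basically just a matter of labels on the service (Traefik v2-style labels here; the router name, entrypoint and hostname are placeholders):

```yaml
version: "3.8"

services:
  public-thing:
    image: registry.internal.example/public-thing:latest
    networks:
      - traefik-public
    deploy:
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.public-thing.rule=Host(`thing.example.org`)"
        - "traefik.http.routers.public-thing.entrypoints=websecure"
        - "traefik.http.routers.public-thing.tls=true"
        # Required in swarm mode so Traefik knows which container port to hit.
        - "traefik.http.services.public-thing.loadbalancer.server.port=8080"

networks:
  traefik-public:
    external: true
```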

Performance seems fine so far without us really doing any tuning - if anything it's better without the overhead of full VMs.