
4 Ways Docker Changed the Way Software Engineers Work in Past Half Decade

Geshan Manandhar · Oct 15 '18 · Originally published at geshan.com.np · 4 min read

Ten years ago, it was Git that transformed the way software engineers work. Half a decade ago, it was Docker that brought containers to the masses. Before Docker, containers were like a sacred secret inside companies like Google and Heroku. Docker is both a piece of software and a company. It tried to build a broader ecosystem, but Kubernetes stole the thunder along the way, keeping Swarm at bay. This post is not about how some Docker tools failed to catch on; it is about how Docker has changed the way we work in the past 5 years.

Whale image on [Unsplash](https://unsplash.com/photos/PO0UHx-5mHo)

TLDR;

With Docker, you ship the whole stack, not only your code. Allocate the minimum required resources to containers, then scale them horizontally. With containers, a baseline of security generally comes baked in. With Docker and Kubernetes you can get zero-downtime, faster deployments that translate into business profit.

Changed the ways

If you want the technical details of what a container is and why to use Docker, Google it :).

Docker has also made many of the configuration management tools partially obsolete.

This post is about how Docker has changed the way we work since its release in March 2013. Below are some of the ways it has advanced how we work:

Ship the whole stack, not just code

With containers, and Docker in particular, you always ship the whole stack in each version. The whole image gets rebuilt every time. It includes the precise OS and version, a specific version of the language, and the dependencies such as the framework and other libraries (their versions depend on how you pin them). It also includes the code you have written, and this results in a significant advantage: if the image builds correctly on your machine, it will almost certainly build on the server too. Once it runs, it is the same environment on dev, staging, testing, and even production.

This works because you didn't ship only the code; you shipped your code + vendor code + a specific language version + a precise OS version too.
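As a rough illustration, a minimal Dockerfile for a Node.js app might pin the whole stack like this (the image tag, port, and file names are assumptions for the sketch, not from the original post):

```dockerfile
# Pin the exact OS and language version instead of a floating "latest" tag
FROM node:8.12-alpine

WORKDIR /app

# Install pinned dependencies first so this layer is cached between builds
COPY package.json package-lock.json ./
RUN npm install --production

# Your own code goes on top of the pinned stack
COPY . .

EXPOSE 3000
CMD ["node", "index.js"]
```

Because every layer is pinned, the image that builds on your laptop is the same image that runs in production.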

Allocate only needed resources to the application and scale horizontally

With each Docker container, you can be specific about how many resources you want to allocate to that particular container. Using software like Kubernetes, it becomes much easier to scale your application: under high load the number of containers can expand, and under low load it can shrink again. With this mechanism, each container (or pod, in Kubernetes) can be allocated the minimum resources it needs and scaled horizontally as demand requires.

For example, a simple Node.js app container/pod can run with around 128 MB of memory and 0.25 CPU. As load increases, run 5 pods in place of 2; the sketch below shows how.
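Here is a hedged sketch of how that allocation could look in a Kubernetes Deployment (the app name and image are illustrative assumptions; the resource syntax is standard Kubernetes):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-app                # illustrative name
spec:
  replicas: 2                   # baseline; scale out under load
  selector:
    matchLabels:
      app: node-app
  template:
    metadata:
      labels:
        app: node-app
    spec:
      containers:
        - name: node-app
          image: example/node-app:1.0.0   # illustrative image
          resources:
            requests:
              memory: "128Mi"   # the minimum the scheduler reserves
              cpu: "250m"       # 0.25 CPU
            limits:
              memory: "128Mi"
              cpu: "250m"
```

Scaling out is then a one-liner: `kubectl scale deployment node-app --replicas=5`.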

This requires the application to be built with horizontal scalability in mind, which essentially means storing no data on the file system. Treating containers like cattle, not pets, helps you scale horizontally. It also makes high availability of the application an achievable task.

Security is baked in

By using a container rather than a virtual machine, the attack surface is already reduced. Following container security best practices improves your posture further. Of course, if your application has holes like SQL injection, that is a different story. Still, with smaller, security-focused images like Alpine, it is easier to get the basics right.

Security is always about keeping the attack surface small, and with containers and Docker, closing more doors becomes easier.

A container should have access to only what it needs. As the file system is ephemeral for containers, that can be a security boon as well as a security-auditing challenge.
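As one illustrative sketch (the image name is an assumption; the flags are standard Docker options), a container can be locked down at run time like this:

```bash
# Run as a non-root user, drop all Linux capabilities,
# make the root file system read-only, forbid privilege
# escalation, and cap memory
docker run \
  --user 1000:1000 \
  --cap-drop=ALL \
  --read-only \
  --security-opt no-new-privileges \
  --memory=128m \
  example/node-app:1.0.0
```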

Deploy faster with zero downtime

Deploying Docker containers always means shipping the whole stack, so there is no chance of one file not syncing or one server missing the latest changes. And since a successful image build is always required before deployment, most problems surface in the build process.

With software like Kubernetes and Helm, orchestrating and deploying containers becomes straightforward. With high availability (HA) in place and proper load balancing, deployments can be zero downtime.
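As a hedged sketch extending the Deployment above, a rolling update can be tuned so no serving pod is taken down before its replacement is ready (the probe path and new image tag are illustrative):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0     # never remove a serving pod before its replacement is ready
      maxSurge: 1           # bring up one extra pod at a time
  template:
    spec:
      containers:
        - name: node-app
          image: example/node-app:1.0.1   # the new version being rolled out
          readinessProbe:                 # traffic is only routed once this passes
            httpGet:
              path: /health               # illustrative endpoint
              port: 3000
```

Kubernetes only shifts traffic to a new pod once its readiness probe passes, so users never hit a half-started container.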

Easier and faster deployment means you can ship smaller changes. Smaller changes done well let you respond to market needs quickly. To sum up, use the right tools to deploy your containers in a way that turns them into a business advantage.

Conclusion

The past 5 years have seen rapid adoption of Docker. With tools like Kubernetes, deploying and scaling applications takes far less effort than it did a few years ago.

Don't worship your virtual machines; use the resources they provide efficiently. Get on the Docker and containers train and reap its benefits.


Originally published at geshan.com.np.


Discussion


I think these objections really are based on strange premises.

Docker has become the deployment format for applications because packaging at distribution level sucks arse. This has nothing to do with messaging, or do I miss the point here?

The point with scalability is that it becomes modular: scaling only parts of the application is possible, so you can use your resources more efficiently if you have to.

Security is baked in, which means you can restrict privileges at the application level.
Second: after a year of patches for the whole Spectre/Meltdown crap, how is "a virtual machine is more secure than x" an argument at all? Virtual machines weren't any better off. The only "secure" separation, if that makes any sense, is physical separation with only one application running.

The only thing I tend to agree with is the zero-downtime deployments.

I'm going through last week's Stuff The Internet Says On Scalability :-)

Yeah, it's pretty cool.

 

What this completely ignores is the fact that without proper development of what you want to "contain", Docker, Kubernetes and the like will be useless.

I'd rather see an article leaving out the marketing blurbs and including the harsh reality.

To this day, even companies that provide "the cloud" or container services do not have a clear picture.

Containers do not solve anything; they are just a technical approach intended to get people thinking about what the right approach could be.

 

I can surely agree to disagree.

 

Not sure that Git has transformed anything. Before Git I used SVN and it also worked for me. Regarding Docker, it's not a security tool at all, and zero downtime was always available with a couple of servers and a load balancer. Strange article, but OK.

 

I have used CVS, SVN, and Git. If Git hasn't transformed version control, what has?

Yes, zero-downtime deployment has been around for a decade, but Docker and k8s made it easier.

 

This new technology released as open source by Amazon seems interesting: Firecracker micro VM

 

Thanks for your insights. I agree with some of it.

 

Just use a scalable sharded database 👌

 

It is like: get on the hype train or die under it.

Containers do indeed help, though; the adoption can be seen everywhere, from application production to software development tools and automation.

 

So is it a good thing or bad?