Think of "containers" for shipping goods: If you don't have those, you need to care about what you are supposed to ship. Technical devices will need to be handled in a different manner than, say, books or food.
With "containers", this all pretty much disappears. You do have a standard sized box that can extremely easy be stacked, carried around, liffted on a truck, shipped all around the world - and only the one filling the container and the one unpacking actually need to care what's inside.
With software containers, the idea is the same. Running a Java application is considerably different from, for example, running a Node.js or Ruby on Rails server. Likewise, a Red Hat Linux server is subtly different from an Ubuntu or Debian installation. There are a bunch of things an application (directly or indirectly) depends upon, and these things lead to an almost traditional "clash" between developers (who build code) and operations teams (who keep code running on production systems): the application crashes on a production server, the developer says "works on my machine" - and everyone has a hard time figuring out what exactly went wrong.
Enter containers: they establish a standardized way of packaging an application, including most (best case: all) of its required dependencies, and give the operations team a small set of operations (start a container, stop a container, update the container to a newer version) that is enough to fully operate the application without having to care much about which technology it was built with or which operating system the developer used to build it.
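To make that a bit more concrete, here's roughly what that limited operations interface looks like with Docker - the registry, image name, and ports are made up for the example:

```sh
# Hypothetical operations view of a containerized app ("myapp" and the registry are assumptions).
# The same handful of commands work no matter what language or OS the image was built from.
docker pull registry.example.com/myapp:1.4                                # fetch the packaged app + dependencies
docker run -d --name myapp -p 8080:8080 registry.example.com/myapp:1.4    # start it
docker stop myapp && docker rm myapp                                      # stop and remove it
docker run -d --name myapp -p 8080:8080 registry.example.com/myapp:1.5    # "update" = start the newer image
```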
So from that point of view, containers add a bit more standardization compared to running a virtual machine - and make the process actually practical. You could do the same with a VM, but to achieve what containers achieve, you would have to hand your operations people not just an application to run on a VM, but a completely built and configured VM template they can import into whatever environment they use (VMware, ...) and start without a second thought.
There's a lot more to containers, of course, but that should be the essence, I guess...
Man, this is amazing!! Thank you so much for taking the time to reply.. I think you killed it!
Do you prefer any container platform to start with? For learning purposes for now..
You're welcome. :) I'd recommend having a look at Docker and working your way through the DigitalOcean tutorial on it (digitalocean.com/community/tutoria...), which is extremely concise and should contain everything you need to get started. Good luck, and feel free to ask... ;)
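For a first hands-on session (these are just the standard sanity-check commands, not part of that tutorial), something like this is enough to see containers in action:

```sh
docker run hello-world              # pulls and runs a tiny test image to verify the install
docker run -it ubuntu:22.04 bash    # drop into an interactive shell inside an Ubuntu container
docker ps -a                        # list containers, running or stopped
docker images                       # list images stored locally
```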
This is what is usually called "application containers": they run a minimal environment tailored to support a single application (say, Nginx, or your bug tracker that connects to an external database). There is another (somewhat less common) type of container that runs almost an entire operating system and behaves like a VM, but instead of virtualized hardware running a full OS, it shares the kernel with the host machine. This way you can have, say, a Fedora server environment on an Ubuntu machine without the overhead of a VM. At one of my previous jobs, I ran the latest Ubuntu LTS inside a container on a recent non-LTS Ubuntu workstation.
Sometimes the host OS exposes a "fake" kernel interface to the container, allowing it to behave like a different OS. This is how Solaris machines used to host Linux containers (I'm not sure they still do).
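As a rough sketch of that second flavour, this is what launching such an OS container looks like with LXD (or its fork Incus) on an Ubuntu host - the image remote, alias, and container name are assumptions and may differ on your setup:

```sh
lxc launch images:fedora/40 fedora-test        # full Fedora userland, but sharing the host kernel
lxc exec fedora-test -- cat /etc/os-release    # reports Fedora...
lxc exec fedora-test -- uname -r               # ...yet shows the Ubuntu host's kernel version
lxc stop fedora-test && lxc delete fedora-test # tear it down again
```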
Well, yes, you are right. There are a lot of different kinds of containers and different ways of understanding them. Personally, from a rather abstract level, I tend to see containers as a "structural interface" between operations and development that needs to be just as flexible as the environment itself is heterogeneous. Back in the Java EE days we had "all Java EE 1.6": devs ran a GlassFish application server locally, and the same thing ran on the server. In such an environment, Java EE application packages (such as .war or .ear files) work pretty well as containers. That stops working, however, as soon as you have to deal with Node.js or Python in your development and deployment tool chain. If you need that, your interface between development and operations has to be able to manage it. Recent container technologies seem pretty good at exactly that. ;)
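Just to illustrate that "structural interface" point: even the old .war artifact can be wrapped into a container image so that operations handle it exactly like any other stack. A minimal sketch - the base image tag and file name are assumptions:

```dockerfile
# Hypothetical: the Java EE artifact still exists, but the container wraps it
# together with its application server, so ops see the usual container interface.
FROM tomcat:10-jdk17                                    # assumed official Tomcat base image tag
COPY target/myapp.war /usr/local/tomcat/webapps/ROOT.war # deploy the .war into Tomcat
EXPOSE 8080
# From here on, `docker build` / `docker run` / `docker stop` work exactly the same
# as for a Node.js or Python image.
```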