Usually I use the Alpine version of the Node image. It still includes the basic components required for most projects, saves a lot of space, and keeps your image small.
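For reference, a minimal Dockerfile along these lines might look like this (the `node:20-alpine` tag and the `index.js` entry point are illustrative, not prescribed):

```dockerfile
# Alpine-based Node image: much smaller than the full Debian-based one
FROM node:20-alpine

WORKDIR /app

# Install only production dependencies for a leaner image
COPY package*.json ./
RUN npm ci --omit=dev

COPY . .

# Hypothetical entry point; adjust to your project
CMD ["node", "index.js"]
```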
I like to keep my images small because I store them online in my image registry. Whenever a Node package doesn't work and is definitely required, I try another Node image. The worst option is the full Node image; with that one, my Docker images can be up to 1 GB.
Does a smaller image mean faster resource handling in this case? I had a problem using Alpine before (I can't remember exactly what it was, sorry) and had to switch to an Ubuntu-based image. It handles resources as fast as Alpine; the image is just larger, and compatibility-wise the Ubuntu-based image is much better. So I prefer higher compatibility over image size.
The cleanest approach would be a Docker multi-stage build. You can take whatever image you need for the build and build the application in it. Afterwards you create your runtime image from an Alpine version, copy the deployment artifact (hopefully a single JS file) from the build container, and just run it.
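A sketch of such a multi-stage Dockerfile (the image tags, the `npm run build` script, and the `dist/` output directory are assumptions about the project):

```dockerfile
# Build stage: full image, so node-gyp and friends have everything they need
FROM node:20 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build   # assumed build script that emits dist/

# Runtime stage: small Alpine image, only the built artifact and prod deps
FROM node:20-alpine
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY package*.json ./
RUN npm ci --omit=dev
CMD ["node", "dist/index.js"]
```

The build toolchain never reaches the final image, so you get the compatibility of the full image at build time and the size of Alpine at run time.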
"I prefer higher compatibility over image size."
-> Higher compatibility also means a bigger security attack surface. With multi-stage builds you can have the best of both worlds :)
"Usually I use the alpine version of the Node image. It still includes the basic components that are required for most projects."
Yes, that's true, but some packages like bcrypt, which node-gyp builds from source, cause problems on Alpine.
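One common workaround, if you want to stay on Alpine: node-gyp needs Python, make, and a C/C++ compiler to compile native addons, and the Alpine image ships none of them. A hedged sketch of installing them just for the install step (package names per Alpine's `apk`):

```dockerfile
FROM node:20-alpine
WORKDIR /app

# Toolchain node-gyp needs to build native addons like bcrypt from source.
# --virtual groups the packages under one name so they can be removed later.
RUN apk add --no-cache --virtual .build-deps python3 make g++

COPY package*.json ./
# Build the native modules, then drop the toolchain to keep the image small
RUN npm ci && apk del .build-deps

COPY . .
CMD ["node", "index.js"]
```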
Being a Docker newbie myself: would it make sense to start off with an Alpine version and only switch if there is a problem?
Or should I stick with a full Node image initially?
"I like to keep my images small, because I store them online in my image registry. Whenever a node package doesn't work and is definitely required I try out another node image."
I've been wondering why some of my images are gigantic! I see now that the choice of base image makes a huge difference to the resulting image size.
Thank you, Yanik
docs.docker.com/develop/develop-im...