How big are microservices? They are small. But how do you even measure that? Lines of code? Required memory? Amount of data processed? Complexity (of any kind)?
Does running an app in Docker make it a microservice architecture? And how big would you say a cloud function is in comparison?
And I would also like to see some wrong and fun answers.
Let me be the first to say: it depends 😆
(I'll leave the adding of further detail to other commenters 😌)
They should be measured in terms of what they actually do.
The general idea behind a µservice architecture is not all that different from the core concepts of the UNIX philosophy, or the general idea behind a µkernel architecture. Namely: Each piece should do one thing very well.
The idea here is that by keeping each component small and isolated, the impact of a component's failure is more localized and usually easier to recover from; it also makes each component easier to test and to expand upon.
I would group them logically.
If you have the feeling something in your application always goes hand in hand, you can group it in a microservice. Sometimes this is obvious, like with authentication or monitoring, but sometimes your application grows through iterations, and one day you see something worth extracting.
Sure, that doesn't help too much when you want to start your application as microservices.
You could try mapping it before you start; sometimes this gives at least an idea about the pieces you could split out.
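One way to do that mapping is to treat modules that call each other as things that "go hand in hand" and group them by connectivity. A minimal sketch, with entirely hypothetical module names standing in for a real codebase:

```python
from collections import defaultdict

# Hypothetical module-call pairs observed in a monolith; modules that
# always change together end up in the same candidate group.
calls = [
    ("login", "sessions"),
    ("sessions", "tokens"),
    ("invoices", "payments"),
    ("metrics", "alerts"),
]

def candidate_services(call_pairs):
    """Group modules into connected components of the call graph;
    each component is a candidate microservice."""
    graph = defaultdict(set)
    for a, b in call_pairs:
        graph[a].add(b)
        graph[b].add(a)
    seen, groups = set(), []
    for start in graph:
        if start in seen:
            continue
        stack, component = [start], set()
        while stack:
            node = stack.pop()
            if node in component:
                continue
            component.add(node)
            stack.extend(graph[node] - component)
        seen |= component
        groups.append(component)
    return groups

for group in candidate_services(calls):
    print(sorted(group))
```

This is of course a crude proxy; real boundaries are about domain meaning, not just call edges, but it's a cheap first pass before you start.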
Thanks! So do you think it is common to see one or two services grow in complexity over iterations/time?
And is it the mindset of the microservice architecture to extract individual pieces of functionality?
I think most people start with a monolith, so they only have one service. If you add features to a monolithic system, that one service will naturally grow in complexity.
The goal with microservices is two-fold.
1. Get your complexity under control. If you have clearly defined interfaces between two systems, it's easier to manage each one of them with a dedicated team than when they are intermingled in a big monolith that has implicit interfaces.
2. Get rid of undifferentiated work. If you can, for example, extract your authentication and let it be done by a SaaS company, you can save money and time.
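A clearly defined interface is also what makes that second point cheap later on. A minimal sketch of the idea, with invented names (not any real vendor's SDK): the rest of the app depends only on an `AuthProvider` interface, so an in-house implementation can be swapped for a SaaS one without touching callers.

```python
from abc import ABC, abstractmethod

class AuthProvider(ABC):
    """Explicit interface; callers never see the implementation."""
    @abstractmethod
    def verify(self, token: str) -> bool: ...

class InHouseAuth(AuthProvider):
    """Simplistic in-house implementation for illustration only."""
    def __init__(self, valid_tokens: set[str]):
        self._valid = valid_tokens

    def verify(self, token: str) -> bool:
        return token in self._valid

class SaaSAuth(AuthProvider):
    """Stand-in for a vendor integration; the real API call would go here."""
    def verify(self, token: str) -> bool:
        raise NotImplementedError("call the vendor's verification API here")

def handle_request(auth: AuthProvider, token: str) -> str:
    # Business code depends on the interface, not the implementation.
    return "ok" if auth.verify(token) else "denied"
```

Swapping `InHouseAuth` for `SaaSAuth` is then a one-line change at the wiring point, which is exactly the "clearly defined interface" benefit described above.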
They should be pretty big. So big that when your manager asks:
then your only true answer will be:
I've found most examples given on the internet tend to go even more micro than I had been initially thinking when researching the topic. Meaning: there is "micro" in my mind, and then I read an article and realise what they have built is seemingly "nano" to me.
Over time, and through working on breaking down a monolith first hand, I generally found that pulling the general features apart into containers first helped. Then maybe abstracting chunks of common usage into services. Even with the monolith right there, it was hard to pick out the general services.
Sure, payments, authentication etc. sound generic enough, but I actually found there was much lower hanging fruit to go for first. Key indicators were areas which continually needed updating, where full deployments felt heavy. Other areas of complex or high-risk code that had become hard to debug also soon found themselves split out.
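That "continually needed updating" signal is easy to approximate from version-control history. A rough sketch, with a hypothetical list of touched files standing in for real data (which you might parse from something like `git log --name-only`):

```python
from collections import Counter

# Hypothetical files touched per commit; real data would come
# from your VCS history.
commits = [
    ["billing/invoice.py", "billing/tax.py"],
    ["billing/invoice.py", "users/profile.py"],
    ["billing/tax.py"],
    ["search/index.py"],
]

def churn_by_area(commits):
    """Count how many commits touch each top-level area; high-churn
    areas are candidates for being split out first."""
    counter = Counter()
    for files in commits:
        # Count each area at most once per commit.
        for area in {path.split("/")[0] for path in files}:
            counter[area] += 1
    return counter.most_common()

print(churn_by_area(commits))
```

Here `billing` changes in three of four commits, so by this (admittedly crude) metric it would be the first extraction candidate to look at.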
Are these "services"? Unsure. But I've definitely felt the benefits and now I can refactor again with much greater ease.