The surface area of software is often hard to comprehend from the outside, and a direct result is that we're regularly subjected to amateur discourse about how teams or organisations should have built a thing, based purely on outside observation. It's almost always wrong, and I want to spend some time talking about how software is produced, and how it scales.
For people not close to code, these situations can look similar yet come with opposite suggested "outcomes", so let's talk about the impact size has on code. We'll start with the most difficult thing for non-technical folks to get their heads around:
- The observed surface area of a system often has very little in common with its actual size, either in staff count or in lines of code.
The obvious example of this is Google and its "one text box" UI, covering a massive indexing (+more) operation.
- More code is frequently not a better scenario to find yourself in. Code atrophies and rots over time naturally. Every line of code you write increases your total maintenance cost.
Doing more with less code is usually the better outcome.
There are only so many people who can "fit around" a codebase of a given size - once your teams outgrow the code, work slows rather than speeds up, because conflicts, points of contention and hotspots, and coordination overhead all increase.
To fit more people around a codebase, systems are often decomposed into separate libraries, subsystems, or services. This expands the footprint of the teams you can surround your code with (increasing parallelism of work), but at least doubles the amount of coordination needed.
Microservice architectures largely help because they allow organisations to define boundaries around pieces of their software ("bounded contexts") and parallelise work - at the cost of expensive coordination and runtime latency.
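As a toy illustration of that cost (all names hypothetical, and the "wire" simulated in-process with JSON rather than real HTTP/gRPC), here's what the same call looks like before and after a service boundary is introduced:

```python
import json

# Monolith: "pricing" is a direct, in-process function call.
def quote(order: dict) -> float:
    return order["qty"] * order["unit_price"]

# After decomposition: the same logic sits behind a service boundary.
# The boundary forces serialization, a request/response contract, and
# (in a real system) latency, retries, and partial-failure handling.
def pricing_service(request_body: str) -> str:
    order = json.loads(request_body)            # deserialize at the boundary
    return json.dumps({"total": quote(order)})  # serialize the response

def quote_via_service(order: dict) -> float:
    response = pricing_service(json.dumps(order))  # a network hop in reality
    return json.loads(response)["total"]

order = {"qty": 3, "unit_price": 9.5}
assert quote(order) == quote_via_service(order) == 28.5
```

Same answer either way - but the second version now has a wire format to version, two deploys to coordinate, and a new way to fail.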
Equally, software is sometimes decomposed to help it scale in technical terms, rather than to match its human needs (fitting more folks around a codebase) - this is what most people think of first as "scalability", but the former is more common.
Few organisations actually have technical scaling requirements or limitations.
Every subdivision of software tends to increase the total complexity of comprehending a system - it makes it less likely anyone can keep it all in their head, it increases possible failure points (both technical and human), and it increases the total cost of ownership of the code.
Why? Simple - each time you divide up your software, you have to also create new supporting structures (teams, work tracking, CI+CD pipelines, infrastructure) in order to allow the now bigger teams to be self sufficient and have enough autonomy.
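To make that duplication concrete, here's a hypothetical CI pipeline fragment (GitLab-CI-style syntax; all commands invented) - each newly extracted service needs its own copy of something like this, plus the team, work tracking, and infrastructure around it:

```yaml
# Hypothetical per-service pipeline. Every subdivision repeats this
# supporting structure: its own build, tests, and deploy.
stages:
  - build
  - test
  - deploy

build:
  stage: build
  script: ./gradlew build        # hypothetical build command

test:
  stage: test
  script: ./gradlew test

deploy:
  stage: deploy
  script: ./deploy.sh staging    # hypothetical deploy script
  only: [main]
```

Multiply that by every service you extract, and the overhead stops being free.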
Lots of organisations do this too readily (see the knee-jerk reactions in the form of monorepos and back-to-monolith trends). Dividing your software can be good, but it requires thoughtfulness and the intention to accept the cost.
This isn't news - @martinfowler was talking about monolith-first designs about a decade ago. I like to think of it as "fix the things that hurt". A lot of the backlash against microservice architecture is really just folks getting lost in premature complexity.
In both kinds of scale - human and technical - you should build for a reasonable amount of growth. Some folks say to plan for an order of magnitude (roughly 10x traffic or team size), but it should only ever be one big leap.
Don't copy what big tech do before you are big tech.
- The important rule is this: the best modern software is malleable - it can be added to without buckling, with foundations made for growth later, but not built out now.
This is why basically all layperson takes on software are wrong.
Folks look at software and presume they can predict its form from its external surface area (false), its complexity from its ease of use (often actually the inverse), and they fall into the mythical man-month trap (RIP Fred!) of "if one woman can make a baby in 9 months, get 9 women!".
The size of your code is a function of your team size, the subdivisions in your software, and your costs and plans.
It's all a compromise. And it's never "just go faster" or "just spend more" - even if those things can sometimes help, they can just as often hinder and bury a project.
Making software, design and architecture "just the right size" is really very difficult. Many, many systems have fallen into the nanoservices/distributed monoliths trap, even more into premature internal package management hell.
Remember every subdivision of a system has a cost.