In the Clean Agile book, Robert C. Martin wrote:
“It is the single greatest indictment of the software industry, the most obvious evidence of our failure as professionals, that we make things worse with time. The fact that we developers expect our systems to get messier, cruftier, and more brittle and fragile with time is, perhaps, the most irresponsible attitude possible.”
But is it really?
Don’t get me wrong: some developers do believe that it is in the nature of a project’s lifecycle to get worse over time, and that belief is dangerous. If the future is so fatalistically fixed, what’s the point in trying, right? I don’t need to spell out the pathologies such thinking leads to.
Nevertheless, I want to believe that in most cases it is far from obvious that developers are to blame, and that putting all the responsibility on them can be deeply unfair.
Let’s start with Murphy’s Law: “Anything that can go wrong will go wrong.” Or, to paraphrase: shit happens. No matter what you do - cry, scream, or throw a tantrum - you are sometimes powerless against the odds. You can try with all your might, and some things still won’t work out. That’s not a matter of opinion; it’s a rule of thumb distilled from the experience of hundreds of developers.
Do you know the Harvard Law of Animal Behavior? It states: “Under controlled experimental conditions of temperature, time, lighting, feeding, and training, the organism will behave as it damn well pleases.” Perhaps it is not limited to living organisms?
It will not come as a surprise that the more complex a system is, the more room there is for unpredictable behavior. Cyclomatic complexity grows over time along with the number of new, often ill-considered, functionalities. Hence, the number of cases to be covered by automated tests grows as well, and it quickly turns out to exceed the team’s capacity. In extreme cases, the most complex systems effectively run in a state of constant partial failure. This happens so often that new terms were coined for it: you can read about Crash-only Systems in another of my articles. There’s also a nice article about Chaos Engineering.
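To make that growth concrete, here is a minimal Python sketch (the function and its flags are hypothetical, invented for illustration) of why test effort explodes: every independent boolean condition adds a decision point to the cyclomatic complexity, and the number of distinct execution paths to cover grows multiplicatively.

```python
from itertools import product

def ship_order(is_premium: bool, is_international: bool, is_gift: bool) -> str:
    """A toy order-shipping routine with three independent decision points."""
    # Each `if` below is one decision point; by McCabe's measure,
    # cyclomatic complexity = decision points + 1 = 4 for this function.
    steps = []
    if is_premium:
        steps.append("priority-handling")
    if is_international:
        steps.append("customs-forms")
    if is_gift:
        steps.append("gift-wrap")
    steps.append("dispatch")
    return " -> ".join(steps)

# Three independent booleans mean 2**3 = 8 distinct paths to exercise.
# A fourth "thoughtless functionality" flag would make it 16, and so on.
all_cases = list(product([False, True], repeat=3))
print(len(all_cases))  # 8
for case in all_cases:
    ship_order(*case)
```

The point of the sketch: complexity of the code grows linearly with added conditions, but the path space a test suite must cover grows exponentially, which is exactly how a team’s testing capacity gets outrun.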
However, project problems are often not purely technical. “The major problems of our work are not so much technological as sociological in nature.” [Peopleware: Productive Projects and Teams] When talking with various stakeholders, I repeatedly say: from the technical point of view, almost anything is possible. The real problem usually lies elsewhere, and one aspect of it is the quality of management within a team, a project, and a company. A great example is described in “Extreme Ownership: How U.S. Navy SEALs Lead and Win” by Leif Babin and Jocko Willink. Two boat crews - the best and the worst - were racing inflatable boats under the same unfavorable conditions: cold, wet, and extreme physical exhaustion. Swapping the leaders of the two crews was enough to make the worst one the best, and the best one the worst. It’s a true story. The same team, with the same budget, the same goals and project scope, and the same deadline, will deliver completely different results depending on the quality of its management. Having said that, how often is the quality of management taken into account when making technical decisions?
So, why do projects get worse over time? Is it really due to developers’ lack of professionalism? Or is it everything else - their surroundings, which force them into questionable compromises with themselves, their projects, their teammates, or their managers? Are we back to the nature-versus-nurture debate, this time about developers? Or are we, as human beings, simply unable to evaluate complex systems because of our limited cognitive capacity? It’s easy to blame one group of people, but there are far too many factors for such a simplified reduction.