Blaine Osepchuk

Originally published at smallbusinessprogramming.com

Is Uncle Bob serious?

Robert C. Martin (Uncle Bob) has been banging on the "software professionalism" drum for years and I've been nodding my head with every beat. As a profession, I believe software developers need to up their game big time. However, I've become concerned about Uncle Bob's approach. I reached my breaking point the other day when I read his blog post titled Tools are not the Answer.

He took issue with a recent article in The Atlantic: The Coming Software Apocalypse. Let me see if I can summarize the theses of these two articles.

The Atlantic:

We are writing more and more software for safety-critical applications and the software has become so complex that programmers are unable to exhaustively test or comprehend all the possible inputs, states, and interactions that the software can experience. We are attempting to build systems that are beyond our ability to intellectually manage.

We need new ways of helping software developers write software that functions correctly (and is safe) in the face of all this complexity. The current methods of producing safety-critical software are especially dangerous to society because when software contains defects we can't observe them in the same way we can observe that a tire is flat--they're invisible.

Uncle Bob:

The cause:

  1. Too many programmer (sic) take sloppy short-cuts under schedule pressure.
  2. Too many other programmers think it’s fine, and provide cover.

And the obvious solution:

  1. Raise the level of software discipline and professionalism.
  2. Never make excuses for sloppy work.

Does Uncle Bob's argument even pass the sniff test?

Safety-critical software systems, which are the topic of the Atlantic article, are held to shockingly high quality standards. The kind of requirements analysis, planning, design, coding, testing, documentation, verification, and regulatory compliance that goes into these systems is miles beyond what any normal organization would consider for an e-commerce website or mobile app, for example.

Read They Write the Right Stuff and tell me if you think Uncle Bob's on the right track (note: that article was written 21 years ago and the state of the art has advanced significantly). Does it sound like the NASA programmers just need more discipline and professionalism coupled with never making excuses for sloppy work?

What does an expert in safety-critical systems from MIT have to say?

Dr. Nancy Leveson was quoted several times in the Atlantic article but Uncle Bob completely ignored those parts.

So let's review an excerpt from one of her talks:

I've been doing this for thirty-six years. I've read hundreds of accident reports and many of them have software in them. And every someone (sic) that software was related, it was a requirements problem. It was not a coding problem. So that's the first really important thing. Everybody's working on coding and testing and they're not working on the requirements, which is the problem. (emphasis added)

She can't say it much clearer than that. Did I mention that she's an expert? Did I mention that she works on all kinds of important projects, including classified government weapons programs?

How about Dr. John C. Knight?

In his paper Safety Critical Systems: Challenges and Directions, Dr. Knight describes many challenges of building safety-critical systems, but developer discipline and professionalism are not among them. This is as close as he gets:

Development time and effort for safety-critical systems are so extreme with present technology that building the systems that will be demanded in the future will not be possible in many cases. Any new software technology in this field must address both the cost and time issues. The challenge here is daunting because a reduction of a few percent is not going to make much of an impact. Something like an order of magnitude is required.

Developing safety-critical systems is extremely slow, which adds to cost. But their QA practices virtually ensure that the delivered software functions as specified in the requirements. Uncle Bob could possibly argue that some projects are slow because the developers on those projects are undisciplined and unprofessional. But a claim like that requires evidence, and Uncle Bob offers none.

Yes, tools are part of the answer (but not the whole answer)

My goodness, we need more and better tools. When I first started programming, all I had was a text editor with basic syntax highlighting. I used to FTP into the production server to upload my code and run it; I didn't have a development environment.

Better tools have helped me become a better programmer

Later I moved to Eclipse and thought I was stupid for not doing this sooner. Eclipse caught all kinds of errors I missed with the basic text editor. It just highlighted them like a misspelled word in a word processor--brilliant.

A couple of years later I adopted Subversion as my VCS and I thought I was stupid for not doing this sooner. I could see all the history for my project, I could make changes and revert them. It was awesome.

Ditto for:

  • code reviews/pull requests/Jira
  • advanced IDEs with integrated static analysis, automated refactoring tools, automatic code formatting, and unit tests that run at the push of a button
  • Git/Bitbucket/GitHub
  • TDD
  • property-based testing (QuickCheck family; see the sketch just below this list)
  • virtual machines
  • frameworks
  • open source libraries
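
Property-based testing in particular deserves more attention than it gets. Instead of hand-picking example inputs, you state a property that must hold for every input and let the tool hunt for counterexamples. Here's a minimal sketch using Python's Hypothesis library; the run-length encoder is a toy example I made up for illustration:

```python
# A minimal property-based test using the Hypothesis library.
# The run-length encoder is a toy example invented for illustration.
from hypothesis import given, strategies as st

def rle_encode(data):
    """Run-length encode a string into (char, count) pairs."""
    encoded = []
    for ch in data:
        if encoded and encoded[-1][0] == ch:
            encoded[-1] = (ch, encoded[-1][1] + 1)
        else:
            encoded.append((ch, 1))
    return encoded

def rle_decode(pairs):
    return "".join(ch * count for ch, count in pairs)

# The property: decoding any encoding must return the original string.
# Hypothesis generates many strings (including nasty edge cases) and
# shrinks any failing input to a minimal counterexample.
@given(st.text())
def test_rle_round_trip(s):
    assert rle_decode(rle_encode(s)) == s
```

One property like this can stand in for dozens of hand-written example tests.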

It's been nearly twenty years since I started programming and my tools have changed significantly in that time. I can only imagine how the tools that become available in the next twenty years will change how we write and deliver code.

Let's look at some possibilities.

Better static analyzers

My static analyzers still don't understand my code and can only pick up simple mistakes. They flag tons of false positives. They can be slow on large code bases. And I'd love it if I had just one static analyzer that did everything I wanted instead of 4-5. It's also time-consuming to write custom rules. There's plenty of room for improvement there.
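
To make the custom-rules point concrete, here's roughly what the smallest possible rule looks like when written directly against Python's standard ast module (a sketch of my own, not the plugin API of any particular analyzer):

```python
# A toy custom lint rule built on Python's standard ast module: flag bare
# "except:" clauses, which silently swallow every error, typos included.
import ast

SOURCE = """
try:
    risky_operation()
except:
    pass
"""

class BareExceptFinder(ast.NodeVisitor):
    def __init__(self):
        self.findings = []

    def visit_ExceptHandler(self, node):
        # node.type is None only for a bare "except:" clause
        if node.type is None:
            self.findings.append(
                f"line {node.lineno}: bare except hides all errors")
        self.generic_visit(node)

finder = BareExceptFinder()
finder.visit(ast.parse(SOURCE))
print("\n".join(finder.findings) or "no findings")
```

And that's the easy kind of rule: purely syntactic, single file, no data-flow analysis. Anything smarter gets expensive fast, which is exactly where better tools would pay off.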

Correct by construction techniques

Then there are "correct by construction" techniques. I watched this video. He had me at "a provable absence of runtime errors". So I got a book on SPARK (a subset of Ada) and started learning. Wow. You might be able to write highly reliable and correct software in SPARK, but it's going to be a slow process (aka expensive).

Is this the future? I don't know, but maybe if it were easier to program in SPARK, it might have a better chance in safety-critical software circles. It would also be interesting if someone developed formal method capabilities for my favorite programming language that were accurate and easy to use. "No need to write tests for this module, the prover says it's mathematically sound," yes please.
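
If you haven't seen contract-based programming, here's the spirit of it sketched in plain Python (my illustration, not SPARK syntax). The crucial difference is that SPARK's prover discharges contracts like these statically, for all possible inputs, before the program ever runs; Python can only check them at runtime, for the inputs it happens to see:

```python
# Design-by-contract, sketched with runtime asserts. In SPARK, the
# equivalent Pre and Post aspects are proved mathematically, so the
# checks never need to execute at all.
def integer_sqrt(n):
    # Precondition (in SPARK: a Pre aspect the prover must discharge)
    assert n >= 0, "precondition violated: n must be non-negative"

    root = 0
    while (root + 1) * (root + 1) <= n:
        root += 1

    # Postcondition (in SPARK: a Post aspect proved for every valid input)
    assert root * root <= n < (root + 1) ** 2, "postcondition violated"
    return root
```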

October 23, 2020 update:
I recently programmed a sumobot in Ada/SPARK to help me get a feel for the languages. I think you'll either love Ada/SPARK or hate it. If you love the speed and flexibility of Python, you'll probably hate Ada/SPARK. But if you care about low defect rates and high quality, you'll love the features of Ada/SPARK that enable you to achieve those goals.

Software to track each requirement to the code that implements it and the tests that prove that it was implemented correctly

I watched a video in which the presenter described the difficulty her team has tracking thousands of requirements to specific code and test cases (and back) for regulatory compliance in safety-critical systems. The task became much harder as they tried to keep everything in sync while the requirements, tests, and code changed over the life of the project. That team, and every team like them, needs better tools. Eventually, I'd love to see that kind of capability built into the IDE for my favorite programming language, if it were easy to use.
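
Even a lightweight version of that tooling is easy to imagine. The sketch below is entirely hypothetical (the requirement IDs and the requirement decorator are my inventions, and real regulated projects need far more: requirement-to-code links, versioning, change history), but it shows the core idea of making requirement-to-test links machine-readable so a CI job can flag gaps automatically:

```python
# Hypothetical sketch: tag tests with the requirements they verify, then
# report any requirement with no covering test.
from collections import defaultdict

TRACE = defaultdict(list)  # requirement ID -> names of covering tests

def requirement(req_id):
    """Decorator linking a test to a requirement ID."""
    def wrap(test_fn):
        TRACE[req_id].append(test_fn.__name__)
        return test_fn
    return wrap

@requirement("REQ-0042")
def test_braking_engages_below_threshold():
    ...

@requirement("REQ-0042")
@requirement("REQ-0107")
def test_braking_releases_after_full_stop():
    ...

# The coverage report a reviewer (or CI job) would look at:
ALL_REQUIREMENTS = {"REQ-0042", "REQ-0107", "REQ-0113"}
for req in sorted(ALL_REQUIREMENTS):
    tests = TRACE.get(req)
    print(f"{req}: {', '.join(tests) if tests else 'NO COVERING TEST'}")
```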

Formal specification languages/model checkers

Then there are formal specification languages to consider. The Atlantic article mentions TLA+ but there are others. Now imagine that these languages were easy to use. Imagine that you had a tool that could help you construct a formal specification in an iterative way, where it coached you along to make sure you covered every case. And when you were done, you could get it to generate some or all of the code for you. Plus, if you got stuck, you could just find the answer on StackOverflow. Cool? Hell, yes!
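
To demystify what a model checker actually does: at its core, it exhaustively explores every reachable state of a model and checks an invariant in each one. Here's a deliberately tiny sketch in Python (my toy illustration; TLA+'s TLC checker handles enormously larger state spaces, plus temporal properties):

```python
# A toy explicit-state model checker: exhaustively explore every reachable
# state of a two-process mutex model and check a safety invariant.
from collections import deque

def next_states(state):
    """All successors. Each process: idle -> waiting -> critical -> idle."""
    successors = []
    for i in (0, 1):
        s = list(state)
        if s[i] == "idle":
            s[i] = "waiting"
        elif s[i] == "waiting" and state[1 - i] != "critical":
            s[i] = "critical"  # may enter only if the other isn't in
        elif s[i] == "critical":
            s[i] = "idle"
        else:
            continue  # this process is blocked; no move
        successors.append(tuple(s))
    return successors

def invariant(state):
    """Safety: both processes must never be critical at once."""
    return state != ("critical", "critical")

# Breadth-first search over the whole state space.
initial = ("idle", "idle")
seen, frontier = {initial}, deque([initial])
while frontier:
    state = frontier.popleft()
    if not invariant(state):
        print("invariant violated in", state)
        break
    for nxt in next_states(state):
        if nxt not in seen:
            seen.add(nxt)
            frontier.append(nxt)
else:
    print(f"invariant holds in all {len(seen)} reachable states")
```

The point is the exhaustiveness: unlike testing, nothing reachable goes unchecked.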

And more...

I'm sure we can brainstorm dozens of new or improved tools in the comments that would help us write better, more correct code at a lower cost.

Why increased discipline and professionalism are not the answer

The fundamental problem is that even the brightest among us don't have the intellectual capacity to understand and reason about all the things that could happen in the complex, interacting systems we are trying to build. It's not an issue of discipline or professionalism. These systems can exhibit emergent behavior, or behave correctly but in ways their designers never foresaw.

That's why Dr. Leveson's book is so important. Instead of trying to enumerate all those states and behaviors, we "just" have to specify the states and behaviors that are not safe and prevent the software from entering them. It's more complicated than that, of course, but that's part of the idea.
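
Here's a crude, code-level sketch of that mindset (my own illustration; Leveson's actual approach, STPA, operates on control structures and hazards at the system level, not on application code). The unsafe states are declared up front, separately from the normal control logic, and every proposed transition is checked against them:

```python
# Sketch: declare hazardous states up front and veto any command that
# would move the system into one. Invented example, not Leveson's STPA.
class SafetyInterlock:
    # Predicates describing states the system must never enter.
    HAZARDS = [
        lambda s: s["door_open"] and s["motor_running"],
        lambda s: s["temperature_c"] > 120,
    ]

    def apply(self, state, command):
        candidate = {**state, **command}
        if any(hazard(candidate) for hazard in self.HAZARDS):
            return state  # veto: hold the last known-safe state
        return candidate

interlock = SafetyInterlock()
state = {"door_open": False, "motor_running": True, "temperature_c": 80}

# Opening the door while the motor runs would be hazardous, so it's vetoed.
state = interlock.apply(state, {"door_open": True})
print(state)  # door_open is still False
```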

Conclusion

I'm all for increasing software professionalism and discipline, but Uncle Bob's wrong about how to prevent "The Coming Software Apocalypse" in safety-critical software systems. Experts in the field don't rank programmer professionalism and discipline anywhere near the top of their priorities for preventing losses.

More programmer discipline and professionalism can't hurt, but we also need ways of taming complexity, better tools, ways to increase our productivity, ways to reason about emergent behavior, research on what actually works for developing safety-critical software systems, new and better techniques for all aspects of the software development process, especially better ways of getting the requirements right, and so much more.

I know there are tons of programmers churning out low-quality code. But organizations building safety-critical systems have processes in place to prevent the vast majority of that code from making it into their systems. So if the software apocalypse comes to pass you can be pretty sure it won't be because some programmer thought he could take a short-cut and get away with it.

What do you think? Agree or disagree? I'd love to hear your thoughts.

Additional resources

Blog post: Safety-Critical Software: 15 things every developer should know

Here's a video of Uncle Bob's software professionalism talk: https://youtu.be/BSaAMQVq01E

Nancy Leveson's book Engineering a Safer World is so important that she released it in its entirety for free: https://www.dropbox.com/s/dwl3782mc6fcjih/8179.pdf?dl=0

Excellent video on safety-critical systems: https://youtu.be/E0igfLcilSk

Excellent video on "correct by construction" techniques: https://youtu.be/03mUs5NlT6U

Latest comments (59)

William of Ockham

You know... I usually try to stick to reading and avoid writing and commenting. Mostly due to the fact that I'm an old grumpy fa... man, who was raised in the spirit of speaking straight, honest and clear. But nowadays if you don't coat your words in a thick treacly layer of sugar you immediately get branded as rude and toxic. But please, bear with me, I'll do my best to keep my (and your) sanity without giving us both a sugar-induced heart attack. So, as I was saying... I usually try to stick to reading and avoid writing and commenting. But reading this is one of those things I can't just walk past without commenting.

The cause:
Too many programmer (sic) take sloppy short-cuts under schedule pressure.
Too many other programmers think it’s fine, and provide cover.

And the obvious solution:
Raise the level of software discipline and professionalism.
Never make excuses for sloppy work.

...if this were said by a child, or by someone who lives in a world of pink ponies and unicorns, I would just shrug and move on, but... You see, the thing is, Reality has its own ideas on this topic.

Leaving aside the fact that Bob provides zero evidence to back up his words (other people have said enough on that already), we'll start with definitions.

Solution.

A solution to a problem or difficult situation is a way of dealing with it so that the difficulty is removed.
A way to solve a problem.

Excuse.

A reason or explanation given to justify a fault or offence.
A reason that you give to explain why you did something wrong.
An excuse is a reason which you give in order to explain why something has been done or has not been done.

Now, it's time to state a simple, obvious, and unavoidable fact: perfect doesn't exist. There is no Perfect Software Engineer. There is no such thing, and never will be. Only dead people don't make mistakes, and even that can be questioned. The first "obvious" solution implies that the level of software discipline and professionalism is not high enough. "High enough" would be "perfect". But "perfect" is unachievable. Which means that raising the level of software discipline and professionalism will not solve the problem. No matter how high you raise that level, the difficulty will not be removed. Therefore it cannot be called a solution. It is a measure to minimise the chance of a negative outcome at best, but not a solution.

This is simply not solvable. Claiming that you have a solution to a problem that is not possible to solve - is... well... "overconfidence" would be the most neutral term, though not as powerful as I would like to emphasise.

I know that naming things is hard but cooome ooon!.. One would expect this man to know the importance of proper terminology by now, but, what a surprise! (it's not)

Now as for excuses...

Never make excuses for sloppy work.

That is one pretty... weird, to say the least, "solution".

Boeing didn't conduct end-to-end tests on Starliner before its failed flight. Now, is that an excuse? Or explanation?

Doing a single test run from launch to docking takes over 25 hours, after all.

Boeing also didn't test the Starliner's software against its service module. Boeing scheduled the spacecraft's software test and a "hot fire" test of the module's thrusters at the same time. That's why the service module was in a different location, and the company had to use an emulator in its place.

Is this an explanation? Or an excuse? It seems that this is an explanation with a reason, which means this is an excuse. But how do you give an explanation without a reason? Moreover, what is the value of an explanation of a problem that does not include the reason?..

"Well, we have failed" - that's an explanation. What is the value of that explanation? Close to zero, obviously. We can see that you've failed, what else is new?
"We have failed because A and B" - and this is also an explanation. With reason. It is useful, but it automatically becomes an excuse.

Alright, let's assume we decide to follow Bob's suggestion and stop giving excuses (read: explanations of mishaps that include reasons).

Will this behaviour be of any use to anyone? I highly doubt it. Information about an event, without the reasons that led to it, provides little to no value to those who would like to avoid such an event in the future.

Will this stop bad things from happening? Nope, it sure won't. Why? Go back to the part about "perfect". It's not a question of "will it fail?". Because it will. Eventually. The real questions are "when?" and "how do we handle it when it fails?"

So in the end we have:

  • an overconfident claim that contradicts reality and does not solve anything
  • useless, if not harmful, advice that will only make things worse, not better, and does not solve anything either.

What does that make of Bob? Well, it's up to you to decide. But in my eyes, that person's reputation has hit... well, maybe not rock bottom (yet), but pretty darn close.

PS. And getting back to reality.

Too many programmer (sic) take sloppy short-cuts under schedule pressure.

It's not possible to achieve "perfect", but it is possible to get close to it, and the more time and effort you invest, the closer you get. The closest term would be an asymptote: you can try to get close to 1, but you will never be able to reach it. This is where reality kicks in: we are always under time pressure. At the moment of writing, the human lifespan is limited, so there's the time pressure of how long you live. Then there's business time pressure: if you know someone who is ready to pay a fine salary for a lifetime while the only thing you do is polish your code, do let me know! In reality, whatever we do has time constraints. And very rarely, if ever, are we in control of those. Which means we always take short-cuts, because we are under permanent schedule pressure. And this is also not avoidable. This is reality. You can try fighting reality, of course; it will not fight back. It will patiently observe you perish and move on, and that's about all the effect you'll get.

A man who fights reality... Remember what I said about rock bottom? Yeah...

And that's that.

mr-bandit

A quick note on Uncle Bob and women.
I went to his blog and read everything I could find on what he says about women.
Everything he says about women in the programming field is positive. He points out we need more women in the field. When he started, the ratio was close to 1:1. Now it is heavily weighted towards men, and he is very concerned.
Has he made mistakes with jokes at the expense of women? Yes, and he fully accepts responsibility and has apologized. Quite frankly, it would be difficult to find a man who has not. We men, myself included, still have a way to go in how we show respect to women.

I know I have been pushing robotics in this discussion as a vector for kids into STEM. The thing is, if you go to a robotics competition, you will find boys and girls are about 50/50. There is pretty good research showing the need to get girls into STEM by the 4th grade, or the window closes quickly.
If you watch the Hidden Figures movie, it is clear men built the hardware, but women got us to the moon. The leader of the software for the computer in the LEM was a woman who pioneered fundamental principles of CS. That system was fundamental to the mission's success.
Many of us have daughters or nieces. We need to make sure we support them to be strong and bold. Remind them that boys are stupid about girls, and not to be discouraged. We need to teach our sons to respect girls.
My family marries and spawns strong women, and we are stronger for it.
As an industry, we cannot afford to ignore half of the brain power in society. As a society, it is foolish to mistreat and belittle half the population. We need to be better.

mr-bandit

I do mission-critical embedded systems in C. I follow the INCOSE SE process if one is not mandated (e.g., DO-178 or 60601).
We do need good tools, from languages to requirements. The problem is that requirements tools and testing tools are very expensive. The process also takes a long time.
Keep in mind the shuttle code is $1000/LOC.
The only reason I can use C is that I follow the process and my code is very defensive. It is very simple; it reads like a children's primer. I take the text of the requirements and make them the comments.
We need tools that don't allow a class of bugs, like buffer overflow.

I have listened to Uncle Bob, but I have not yet heard his screed on women, etc.
What I did get is that he seemed to be talking to programmers with less than 5 years of experience. He makes the valid point that if the number of programmers doubles every 5 years, then half of all programmers have less than 5 years of experience. If that is his target audience for more rigor, I have to agree with him.
It is incumbent on senior developers to both follow good practices and mentor younger developers. And on management to support the developers with good tools and the resources to do a good job. This includes realistic schedules. Good, fast, cheap: choose any two. If you choose fast and cheap, you cannot expect to fix the code to be good. The process greatly increases the likelihood of good. Fast and good are mutually exclusive. Good and cheap are mutually exclusive.

I am looking at Rust. It is an example of a tool that doesn't allow a certain class of bugs. Any overhead can be handled by a faster MPU. Does that increase cost per unit? Probably. But pay me now or pay me later. A lot depends upon how much risk management is willing to accept.

c5n8

What Uncle Bob fears is that programmers who work on critical systems will make a mistake that costs the lives of a significant number of people. It can happen, and he believes it is bound to happen. Then people will create regulation for everyone who calls themselves a programmer, yes, that includes you too (pointing a finger at HTML programmers). People won't see the difference between a programmer who works on airplane software and a programmer who makes marketing websites for local businesses. All they know is that programmers work with weird text on computers. And they will even create an institution to enforce it.

Blaine Osepchuk

You've certainly captured the thrust of his argument. But Uncle Bob fails to disclose that software that can cause widespread death and destruction is already regulated.

I'm all for more professionalism in our industry, but I find it hard to believe that there's going to be a disaster in a safety-critical system (which was likely built and operated under some kind of existing regulation, by the way) and that the disaster is going to be so terrible that governments around the world are going to regulate how non-safety-critical software is constructed.

After all, the Boeing 737 MAX disasters that killed 346 people have barely caused a ripple in other divisions of Boeing, never mind the aviation industry in general.

c5n8

Let's just consider ourselves lucky so far, and prove him wrong by getting there first, before people force us to. It's not going to be easy and it will take a very long time. Even Uncle Bob himself gave the example of TDD, which he compared to double-entry bookkeeping, which took about 500 years before everyone adopted it.

Billal BEGUERADJ

I agree with some statements you wrote.
Thank you for sharing.

douma

Quality code costs money. We either spend the money on code and avoid lawsuits, or we spend it on the lawsuits. When we are defending a corporation and its customers and clients, there is only one option. Strictly speaking in agile terms, tools are not the answer; humans are. We can conquer complex systems with domain-driven design, TDD, and clean code.

Blaine Osepchuk

I'm pretty sure the people who study this stuff for a living would disagree with you that DDD, TDD, and clean code are the only things required to build complex software that behaves correctly and safely. But thanks for taking the time to participate in this discussion.

Pierre Sassoulas

Blaine, I think your article is interesting, yet it is built on a straw man fallacy. Robert Martin is not saying that NASA is not (or was not) doing its job properly. The example he gives is:

one of the programmers had reused some code from a different platform and had not realized that it had a built-in 30 second truncation.

That is not something that can be fixed with a tool or by better requirements.

The example given in the article linked by Robert Martin is not about NASA. It's about the disastrous unintended acceleration in Toyota cars ("there were more than 10 million ways for key tasks on the on board computer to fail, potentially leading to unintended acceleration").

Once again, this is a problem you should be able to fix without adding "should not accelerate unintendedly" to the requirements.

Blaine Osepchuk

Thanks for reading and taking the time to comment, Pierre.

I think I made a pretty convincing case that software complexity in safety-critical systems can exceed our ability to comprehend it.

I quoted a safety-critical systems expert with a PhD who said the same thing (two experts, actually). And I quoted the section of Mr. Martin's blog post where he lays all the blame for the problems described in the Atlantic article on the programmers.

I don't believe I set up a straw man argument. It's a pretty simple disagreement, but as you can see from the multitude of comments here, there are no simple solutions.

Cheers.

Steve Ziegler

Great post. This is unique content and much more stimulating than the typical "Java is dead!" post. I read it for the headline and was shocked to see a reference to Professor Knight, my CS340 professor at UVa! He was into formal specification methods based on math and set theory, documented in mathematical formulas.

While I appreciated the accuracy of that approach in school, I realized its limitations in the real world. I work for a consulting firm, not with safety-critical systems, but with other important, mission-critical systems. I don't think most of the customers, developers, or testers can easily read or understand formal specifications. Even with the most accurate specification, it comes down to thorough testing and monitoring to know if it's met.

I believe in Humphrey's "Law" that customers don't know what they want until after the system is in production (maybe not even then). Complex problems aren't fully understood in the beginning on paper. Quick iterations and prototype testing in the wild often raise important, unforeseen issues hidden by complexity and false assumptions. I really love Henrik Kniberg's post on how his kids won a Lego robot competition by using an iterative design/build/test approach and going against common trends.

I think the right balance of upfront design, iterative testing, and software professionalism is required to make any system, especially safety-critical systems, work correctly.

Thanks for writing and keep it up!

Blaine Osepchuk

Thanks for the kind words, Steve.

I totally agree with your comments.

I read an old engineering book a few years back and the author talked about how the Brits built the wings for a particular fighter aircraft with which the author was involved.

They needed the wings to be 'just strong enough'. Any extra material wasted precious resources; it also required more fuel and reduced range, speed, and maneuverability. They didn't have a supercomputer to run a simulation. So they built the wings iteratively. They started with a wing they thought wasn't strong enough, turned it over, and loaded it with sandbags until it broke. Then they reinforced the weak point and repeated until the wing was strong enough. Brilliant, right?

Have a good one.

Steve Mushero

Traditional safety-critical work such as avionics is indeed most excellent, but it seems we are sliding down a slope to mediocrity as the amount of software goes up many orders of magnitude, including self-driving cars, IoT of all kinds (including industrial), and much more.

There are not 1000x the skilled practitioners (nor investor patience) that we have historically had - something has to give, and I fear it's safety.

To me there is ample evidence in many of these areas that "programmers are generally undisciplined" and/or there is no time, patience, investment, or willingness to really do it right - we'll see when we get 100+ auto-driving cars out there, drones over our heads, and everything connected to the Internet.

Blaine Osepchuk

I couldn't agree more, Steve.

SpaceX is somehow using Linux for basically everything: in its rockets, the Dragon capsule, launch control, and monitoring. The Linux kernel is not built to safety-critical standards, so I'm not sure how they are getting away with it. NASA made a fuss about SpaceX's software development practices a couple of years ago and that all kind of faded away.

Here's a great talk about the concerns people have about using Linux in safety-critical settings:

I believe software developers--more or less--deliver the software that their employers truly want (what they say they want is often different).

An employer may say they want secure software with low defect rates but they don't provide training, they don't implement the practices or use the tools that we know lead to better software, the requirements keep changing, the staff have questionable skill, they insist on an aggressive schedule, etc.

So, yeah, I think cars, IOT systems, and drones will kill people. Data breaches aren't going away either. There are only two things that I think might bend the curve here:

  • regulations with teeth and strong enforcement
  • changes to software liability laws

Nicolas Bousquet

Processes are able to detect some easy-to-spot defects, like software that fails its unit tests. They may enforce 100% code coverage, but they can't ensure that the unit test suite actually makes sense.

Processes can ensure there is a requirement associated with each line of code, but not that the association actually makes sense or that the requirement is a good idea.

Processes and QA can only detect certain types of problems, and they do not provide any creativity or intelligence by themselves.

The humans behind them, the individuals who do the work, provide that, and different teams will achieve different results. There are failures in critical systems too, and many companies that apply all the recent methodologies and best practices still fail to upgrade legacy software that works fine now and has for dozens of years, software that was most often built with far less tooling available.

The human factor is key in any project because humans actually implement the project. A mediocre team of developers, managers, product owners, and the like will fail more often, be slower, and produce software with more bugs, including bugs in critical systems. There is no way around that.

You can give them the best tools in the world and they will fail to leverage them.

Blaine Osepchuk

I agree. If you don't have quality people, the best tools and processes will not be enough to deliver outstanding software.

Brad Moore

Your article, like so many other replies that I've read, curiously does not cite the Atlantic article. Which makes me wonder: did you read it? It's easy to remove Uncle Bob's argument from its context and then knock it down, but it's cheap and lazy.

ratmice

Another good talk by Perdita Stevens.
youtube.com/watch?v=mx9eqyXrNAk

Blaine Osepchuk

Thanks. I'll have to check it out.

Kevin R Stone

Professionalism is a hard term to define, but the goal is high quality software.

High quality software is built when quality is valued over other things. This can happen at both the individual and institutional levels. A good professional developer can point out missed requirements and unhandled states. A good professional company can also catch these ahead of time with good process and by eliciting feedback from experts.

In Bob's article about tools, he mostly argues that while tools are great and can help us avoid mistakes, they are useless for people who don't care to use them correctly. At least that's the gist I get. If you value quality over other things, new tools can make you an even better developer, and possibly save you from disaster. If you don't, then you will likely either not use the new tools, or use them incorrectly.

david watson

The solutions put forth by you and Nancy and Uncle Bob are not mutually exclusive, but they are missing an important enabler of bad software: management. At the end of the day, good requirements and professionalism take time, aka cost. The potential upside is quality. I say "potential" because the financial success of a thousand startups (despite the failure of 10x that number) has taught us that rewards accrue to those who frequently choose "right now" over "right" when asked about shipping - damn the QA. In that sense, durability requirements of "runs once" vs. "runs a thousand times without failure" are important and frequently difficult to reverse engineer if you got them wrong when you shipped the prototype. But those decisions still need to be made explicitly or they will emerge through bug reports, much to the disdain of those management enablers. So that leaves us with requirements, professionalism, and management. Call it organizational dynamics. The single most important cultural attribute is what we call "safety": all else being equal, everyone on the team must feel safe to speak truth to power, because getting to the truth is really what engineering is all about.

Blaine Osepchuk

You make very good points, David.

If management wants its devs to produce the cheapest, crappiest, most buggy software on the planet, it's probably next to impossible for individual devs, or even a whole team of them, to resist that pressure in a significant way and deliver a quality product.

mr-bandit

The root cause of the Boeing 737 MAX was management. Not engineering.
