Blaine Osepchuk

Originally published at smallbusinessprogramming.com

Is Uncle Bob serious?

Robert C. Martin (Uncle Bob) has been banging on the "software professionalism" drum for years and I've been nodding my head with every beat. As a profession, I believe software developers need to up their game big time. However, I've become concerned about Uncle Bob's approach. I reached my breaking point the other day when I read his blog post titled Tools are not the Answer.

He took issue with a recent article in The Atlantic: The Coming Software Apocalypse. Let me see if I can summarize the theses of the two articles.

The Atlantic:

We are writing more and more software for safety-critical applications and the software has become so complex that programmers are unable to exhaustively test or comprehend all the possible inputs, states, and interactions that the software can experience. We are attempting to build systems that are beyond our ability to intellectually manage.

We need new ways of helping software developers write software that functions correctly (and is safe) in the face of all this complexity. The current methods of producing safety-critical software are especially dangerous to society because when software contains defects we can't observe them in the same way we can observe that a tire is flat--they're invisible.

Uncle Bob:

The cause:

  1. Too many programmer (sic) take sloppy short-cuts under schedule pressure.
  2. Too many other programmers think it’s fine, and provide cover.

And the obvious solution:

  1. Raise the level of software discipline and professionalism.
  2. Never make excuses for sloppy work.

Does Uncle Bob's argument even pass the sniff test?

Safety-critical software systems, which are the topic of the Atlantic article, are held to shockingly high quality standards. The kind of requirements analysis, planning, design, coding, testing, documentation, verification, and regulatory compliance that goes into these systems is miles beyond what any normal organization would consider for an e-commerce website or mobile app, for example.

Read They Write the Right Stuff and tell me if you think Uncle Bob's on the right track (note this article was written 21 years ago and the state-of-the-art has advanced significantly). Does it sound like the NASA programmers just need more discipline and professionalism coupled with never making excuses for sloppy work?

What does an expert in safety-critical systems from MIT have to say?

Dr. Nancy Leveson was quoted several times in the Atlantic article but Uncle Bob completely ignored those parts.

So let's review an excerpt from one of her talks:

I've been doing this for thirty-six years. I've read hundreds of accident reports and many of them have software in them. And every someone (sic) that software was related, it was a requirements problem. It was not a coding problem. So that's the first really important thing. Everybody's working on coding and testing and they're not working on the requirements, which is the problem. (emphasis added)

She can't say it much clearer than that. Did I mention that she's an expert? Did I mention that she works on all kinds of important projects, including classified government weapons programs?

How about Dr. John C. Knight?

In his paper Safety Critical Systems: Challenges and Directions, Dr. Knight describes many challenges of building safety-critical systems but developer discipline and professionalism are not among them. This is as close as he gets:

Development time and effort for safety-critical systems are so extreme with present technology that building the systems that will be demanded in the future will not be possible in many cases. Any new software technology in this field must address both the cost and time issues. The challenge here is daunting because a reduction of a few percent is not going to make much of an impact. Something like an order of magnitude is required.

Developing safety-critical systems is extremely slow, which adds to cost. But QA practices virtually ensure delivered software functions as specified in the requirements. Uncle Bob could possibly argue that some projects are slow because the developers on those projects are undisciplined and unprofessional. But a claim like that requires evidence and Uncle Bob offers none.

Yes, tools are part of the answer (but not the whole answer)

My goodness, we need more and better tools. When I first started programming, all I had was a text editor with basic syntax highlighting. I used to FTP into the production server to upload my code and run it; I didn't even have a development environment.

Better tools have helped me become a better programmer

Later I moved to Eclipse and thought I was stupid for not doing this sooner. Eclipse caught all kinds of errors I missed with the basic text editor. It just highlighted them like a misspelled word in a word processor--brilliant.

A couple of years later I adopted Subversion as my VCS and I thought I was stupid for not doing this sooner. I could see all the history for my project, I could make changes and revert them. It was awesome.

Ditto for:

  • code reviews/pull requests/Jira
  • advanced IDEs with integrated static analysis, automated refactoring tools, automatic code formatting, and unit tests that run at the push of a button
  • Git/Bitbucket/GitHub
  • TDD
  • property-based testing (QuickCheck family) -- see the sketch after this list
  • virtual machines
  • frameworks
  • open source libraries
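
To make the property-based testing item concrete, here's a minimal sketch using Hypothesis, the Python member of the QuickCheck family. The run-length encoder is a made-up example; the point is that the framework generates hundreds of inputs and hunts for one that breaks the stated property:

```python
# Property-based test: instead of a handful of hand-picked cases, Hypothesis
# generates many random strings and searches for one that breaks the property.
from hypothesis import given, strategies as st


def rle_encode(s: str) -> list:
    """Run-length encode a string into (char, count) pairs."""
    encoded = []
    for ch in s:
        if encoded and encoded[-1][0] == ch:
            encoded[-1] = (ch, encoded[-1][1] + 1)
        else:
            encoded.append((ch, 1))
    return encoded


def rle_decode(pairs: list) -> str:
    return "".join(ch * count for ch, count in pairs)


@given(st.text())
def test_roundtrip(s):
    # Property: decoding any encoding must give back the original string.
    assert rle_decode(rle_encode(s)) == s


if __name__ == "__main__":
    test_roundtrip()  # calling a @given-decorated test runs the generator
```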

It's been nearly twenty years since I started programming and my tools have changed significantly in that time. I can only imagine how the tools that become available in the next twenty years will change how we write and deliver code.

Let's look at some possibilities.

Better static analyzers

My static analyzers still don't understand my code and can only pick up simple mistakes. They flag tons of false positives. They can be slow on large code bases. And I'd love to have just one static analyzer that did everything I wanted instead of 4-5. It's also time-consuming to write custom rules. There's plenty of room for improvement there.
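
To illustrate what I mean by "simple mistakes", here's a hypothetical Python snippet (nothing here is from a real project): a type checker like mypy will flag the possible None dereference, but no analyzer I know of will notice that the discount is applied twice, because catching that requires understanding the intent of the code.

```python
# Hypothetical example of what current analyzers do and don't catch.
from typing import Optional


def find_user(user_id: int) -> Optional[dict]:
    # Stand-in for a database lookup that may find nothing.
    users = {1: {"name": "Ada", "balance": 100.0}}
    return users.get(user_id)


def summarize(user_id: int) -> str:
    user = find_user(user_id)
    # A type checker flags this: 'user' may be None, so indexing can crash.
    return f"{user['name']} has a balance of {user['balance']}"


def apply_discount(balance: float, rate: float) -> float:
    discounted = balance * (1 - rate)
    # Logic bug: the discount is applied a second time. No analyzer objects.
    return discounted * (1 - rate)


print(summarize(1))
print(apply_discount(100.0, 0.1))
```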

Correct by construction techniques

Then there are "correct by construction" techniques. I watched this video. He had me at "a provable absence of runtime errors". So I got a book on SPARK (a subset of Ada) and started learning. Wow, you might be able to write highly reliable and correct software in Spark but it's going to be a slow process (aka expensive).

Is this the future? I don't know, but if it were easier to program in SPARK it might have a better chance in safety-critical software circles. It would also be interesting if someone developed formal method capabilities for my favorite programming language that were accurate and easy to use. "No need to write tests for this module, the prover says it's mathematically sound"? Yes, please.
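
For readers who haven't seen contracts before, here's a rough Python sketch of the kind of thing SPARK lets you express. This is not SPARK: in Python the pre- and postconditions are only checked at runtime and can still fail in production, whereas a SPARK-style prover discharges them at compile time, which is what "provable absence of runtime errors" buys you.

```python
# Not SPARK -- just a sketch of the kind of contract SPARK expresses,
# written as ordinary runtime assertions.
def safe_average(readings: list) -> float:
    # Precondition: at least one reading, so the division below cannot fail.
    assert len(readings) > 0, "precondition violated: no readings"
    result = sum(readings) / len(readings)
    # Postcondition: the average lies between the smallest and largest reading.
    assert min(readings) <= result <= max(readings), "postcondition violated"
    return result


print(safe_average([3.0, 4.5, 6.0]))  # 4.5
```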

October 23, 2020 update:
I recently programmed a sumobot in Ada/SPARK to help me get a feel for the languages. I think you'll either love Ada/SPARK or hate it. If you love the speed and flexibility of Python, you'll probably hate Ada/SPARK. But if you care about low defect rates and high quality, you'll love the features of Ada/SPARK that enable you to achieve those goals.

Software to track each requirement to the code that implements it and the tests that prove that it was implemented correctly

I watched a video where the presenter was talking about the difficulty her team has with tracking thousands of requirements to specific code and test cases and back for regulatory compliance purposes in safety-critical systems. The task became much more difficult as they tried to keep everything in sync while the requirements, tests, and code changed over the course of the project. That team, and every team like them, needs better tools. And, eventually, I'd love to see that kind of thing built into the IDE for my favorite programming language, if it were easy to use.
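
I don't know what the ideal tool looks like, but here's a tiny hypothetical sketch of the general idea in Python: tag each test with the requirement IDs it verifies and generate a trace matrix from the tags. The covers() decorator and the REQ-* IDs are invented for illustration; this is not a real compliance tool.

```python
# Hypothetical sketch: requirement IDs live next to the tests that verify
# them, and a trace matrix is generated from those tags.
from collections import defaultdict

TRACE = defaultdict(list)  # requirement ID -> names of tests covering it


def covers(*requirement_ids):
    """Decorator recording which requirements a test claims to verify."""
    def decorator(test_fn):
        for req in requirement_ids:
            TRACE[req].append(test_fn.__name__)
        return test_fn
    return decorator


def arm_launch(mode: str) -> bool:
    # Toy system under test: arming is only permitted in "live" mode.
    return mode == "live"


@covers("REQ-042")
def test_drill_mode_cannot_arm_launch():
    assert arm_launch(mode="drill") is False


if __name__ == "__main__":
    test_drill_mode_cannot_arm_launch()
    for req, tests in sorted(TRACE.items()):
        print(req, "->", ", ".join(tests))
```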

Formal specification languages/model checkers

Then there are formal specification languages to consider. The Atlantic article mentions TLA+ but there are others. Now imagine that these languages were easy to use. Imagine that you had a tool that could help you construct a formal specification in an iterative way where it coached you along to make sure you covered every case. And when you were done, you could get it to generate some or all of the code for you. Plus, if you got stuck you could just find the answer on StackOverflow. Cool? Hell, yes!
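
If you've never seen a model checker, here's a toy Python sketch of the core idea: enumerate every reachable state of a small model and check a safety invariant in each one. Real tools like TLA+/TLC work symbolically and at vastly larger scale, so treat this as illustration only; the traffic-light model is made up and deliberately broken so the checker has something to find.

```python
# Toy model checker: explore all reachable states, check an invariant in each.
CYCLE = {"red": "green", "green": "yellow", "yellow": "red"}


def next_states(state):
    """All states reachable in one step: either light may advance."""
    ns, ew = state
    yield (CYCLE[ns], ew)
    yield (ns, CYCLE[ew])


def invariant(state):
    """Safety property: the two directions are never green at the same time."""
    return state != ("green", "green")


def check(initial=("red", "red")):
    seen, frontier = set(), [initial]
    while frontier:
        state = frontier.pop()
        if state in seen:
            continue
        seen.add(state)
        if not invariant(state):
            return f"counterexample found: {state}"
        frontier.extend(next_states(state))
    return f"invariant holds in all {len(seen)} reachable states"


print(check())  # finds ('green', 'green') because nothing coordinates the lights
```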

And more...

I'm sure we can brainstorm dozens of new or improved tools in the comments that would help us write better, more correct code at a lower cost.

Why increased discipline and professionalism are not the answer

The fundamental problem is that even the brightest among us don't have the intellectual capacity to understand and reason about all the things that could happen in the complex interacting systems we are trying to build. It's not an issue of discipline or professionalism. These systems can exhibit emergent behavior or behave correctly yet in ways their designers never foresaw.

That's why Dr. Leveson's book is so important. Instead of trying to figure out all those states and behaviors we "just" have to specify the states and behaviors that are not safe and prevent the software from getting into those states. Well, it's more complicated than that but that's a part of it.
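
Here's a toy illustration of the flavor of that idea in Python (my reading, not Leveson's actual method): list the hazardous states explicitly and have the controller refuse any transition into one of them, no matter how the request arises. The drill/launch example is hypothetical and echoes the discussion in the comments below.

```python
# Toy sketch: enumerate hazardous states and refuse transitions into them.
from dataclasses import dataclass, asdict


@dataclass(frozen=True)
class SystemState:
    drill_mode: bool
    launch_armed: bool


def is_hazardous(state: SystemState) -> bool:
    # Hazard: a live launch must never be armed while a drill is in progress.
    return state.drill_mode and state.launch_armed


def request_transition(current: SystemState, **changes) -> SystemState:
    proposed = SystemState(**{**asdict(current), **changes})
    if is_hazardous(proposed):
        return current  # refuse the transition and stay in the safe state
    return proposed


state = SystemState(drill_mode=True, launch_armed=False)
state = request_transition(state, launch_armed=True)
assert state.launch_armed is False  # the hazardous transition was blocked
print(state)
```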

Conclusion

I'm all for increasing software professionalism and discipline but Uncle Bob's wrong about how to prevent "The Coming Software Apocalypse" in safety-critical software systems. Experts in the field don't rank programmer professionalism and discipline anywhere near the top of their priorities for preventing losses.

More programmer discipline and professionalism can't hurt but we also need ways of taming complexity, better tools, ways to increase our productivity, ways to reason about emergent behavior, research on what actually works for developing safety-critical software systems, new and better techniques for all aspects of the software development process, especially better ways of getting the requirements right, and so much more.

I know there are tons of programmers churning out low-quality code. But organizations building safety-critical systems have processes in place to prevent the vast majority of that code from making it into their systems. So if the software apocalypse comes to pass you can be pretty sure it won't be because some programmer thought he could take a short-cut and get away with it.

What do you think? Agree or disagree? I'd love to hear your thoughts.

Additional resources

Blog post: Safety-Critical Software: 15 things every developer should know

Here's a video of Uncle Bob's software professionalism talk: https://youtu.be/BSaAMQVq01E

Nancy Leveson's book Engineering a Safer World is so important that she released it in its entirety for free: https://www.dropbox.com/s/dwl3782mc6fcjih/8179.pdf?dl=0

Excellent video on safety-critical systems: https://youtu.be/E0igfLcilSk

Excellent video on "correct by construction" techniques: https://youtu.be/03mUs5NlT6U

Comments

MarquisT

Interesting - and I agree about the tools. But what type of tools can visualize states or requirements that we haven't yet considered? And is it possible that SPARK achieves this by preventing unknown states, but at an unacceptable cost? Where is the middle ground?

Blaine Osepchuk

I'm not an expert on writing safety-critical software or Spark. Let me just start off by saying that.

I don't know if there is any way to use tools to visualize states or requirements that haven't yet been considered. It's easy to imagine two software systems interacting in completely unexpected ways that would be impossible to cover with a tool that only knew about its own system.

Here's a video of two automatic garbage cans triggering each other with hilarious results:

I think that's what makes Dr Leveson's work so interesting. She's basically side-stepping the issue of trying to enumerate all possible states and examining each one. Instead we just have to figure out the states we don't want our system to have and prevent them from occurring regardless of how these states might arise.

For example, we don't care about the details but we just want to be absolutely sure that a ballistic missile launch drill can't accidentally launch a real nuclear weapon.

I don't think Spark can help with this problem directly. The formal modeling languages will be able to help you detect ambiguity or conflicts in your requirements and that might lead you to discover faulty or missing requirements, but, again, I'm not an expert in this area.

Ghost

Well, I'm with Uncle Bob.

Tools are evolving fast and making it easy to spot problems before they become problems.

But, I can say confidently: I know SO MANY developers who aren't even writing tests today (this is something that makes me sad). Show them a static analyzer and it'll blow their minds.

My point here is: they are ignoring all those tools that help to produce better software! And, IMO, this is exactly what Uncle Bob wants to improve when he says:

1: Raise the level of software discipline and professionalism.
2: Never make excuses for sloppy work.

The tools are there, and they are becoming better each day, but we need to start using those tools as an important ally in our work, and IMO, this is all about discipline and professionalism.

Blaine Osepchuk

I agree with your assessment, Diego--if we are talking about non-safety-critical software development.

But Uncle Bob's argument was that the problem with safety-critical software development is that the programmers working in that area lack professionalism and discipline, which I find hard to believe, given everything I discovered while writing this post.

Ghost

People doing safety-critical software development are still people, and can still lack professionalism and discipline, like every other human being.

On one point Uncle Bob may be wrong: generalizing. Possibly most developers in the field are very professional, but I don't think we can say with confidence that 100% of them are using tools to help them improve their code.

Blaine Osepchuk

I appreciate your comments but I'm having a hard time following your logic.

Suppose you work on a team that is doing a safety-critical avionics project for Boeing at the highest assurance level, maybe a new auto-landing system. Let's say that 99% of your devs are excellent in every way Uncle Bob could mean it but 1% of your devs are terrible/undisciplined/unprofessional.

Assuming a manager somewhere doesn't just outright fire the terrible 1%, we have to imagine a scenario where the terrible devs (regardless of which tools they use) somehow manage to get broken code past your team's internal code reviews, the official QA, testing, and verification required by DO-178C, past Boeing's people, and past the extensive testing that will take place to certify the whole aircraft the code will run on.

How likely is it that the terrible devs on your project can get bad code past multiple levels and kinds of QA conducted by multiple organizations, and that the bad code causes the system to experience a fault that results in a loss?

No one's saying it's impossible to have defects in safety-critical software. But Leveson, Knight, and others seem to be saying that the kind of scenario I'm describing isn't really the big problem we're facing. The big problem is that we are attempting to build systems that we can't fully understand (which leads to the Leveson quote about the problem being in requirements).

Ghost

Boeing is a nice example. Legally, they have a lot of requirements for their software, and since they take it seriously, the company invests in processes to achieve a great level of confidence (not only tools).

Take a look at their process:

  • internal code reviews;
  • the official QA;
  • testing;
  • verification required by DO-178C.

This shows a very mature company that has raised its level of discipline and professionalism. They probably invest a lot of money in the development team to help them become better developers and give them an environment that promotes good practices.

But I'll give you another example: I have a friend who works at a bank here in Brazil. Our government has a lot of regulations for the banking system. The people who work on the software that helps the bank meet those regulations found a very professional and mature environment to do their work and deliver great software. But those who work on the bank's other software don't have the same environment. They don't do code reviews, testing, and a lot of other good practices. Yet they work on pieces of software that could break the bank (can we say that this is safety-critical software?).

I do think that when Uncle Bob talks about raising the level of discipline and professionalism, he's talking not only about the professional but about the company too (and perhaps this is more about the company than about the developer).

Companies that produce great software invest in people, promote good practices (which leads to using good tools), and create an environment where people can achieve their full potential. And that, to me, is raising the level of discipline and professionalism.

Blaine Osepchuk

Well said, Diego. I'm always happy to come across another developer who's as passionate about software development professionalism as I am.

I don't think bank software would typically fall under the umbrella of safety-critical software. This link shows the typical kind of industries that I think of when I think of safety-critical software: en.wikipedia.org/wiki/Safety-criti...

I would hope banks would be subject to strict regulations that force them to create quality software but this is not my area so I don't know what that would look like.

I sure hope the people writing software for my bank are engaging in vigorous QA practices such as code reviews, unit testing, and so forth and keeping my money safe.

mr-bandit

Citibank just lost $900 million because of errors in processing an interest payment.
Close enough for mission critical software to me.

Nina Rallies

Safety-critical software has multiple layers of verification/validation. Bank systems are not safety critical; they are security critical.
In safety-critical systems, a failure can harm people (railways, aerospace, medical devices).
A failure in security-critical systems wouldn't result in harm but rather in loss/exposure of information and assets.
The requirements for passing safety-critical software standards and showing compliance with SIL (I assume they are talking SIL 4) are quite extensive.
I don't think more professionalism is what they are lacking here.

Thomas J Owens

A few random thoughts...


Software professionalism is a really broad topic. What Robert Martin presents is such a narrow sliver of professionalism. When you say "professionalism", I think about things like defining the disciplines (of software engineering, computer engineering, computer science, information technology, and other related fields), education and certification, bodies of knowledge and educational curricula, roles and responsibilities in the workplace, licensure, ethics, and so on.

I've been turned off by a lot of what I've read from Martin, especially some of his writing on what he considers to be a viable professional code of ethics, women in computing, diversity in tech, and other matters of professionalism and professional practice. Although he's made contributions to the agile methods and good construction practices, Martin isn't someone I would necessarily turn to for information about professionalism or professional conduct.


I'm also not entirely convinced by the argument that processes are sufficient.

I've worked in both aerospace/defense and pharmaceutical software (both highly regulated industries) doing both software development as well as process improvement. We have quality system requirements and have things like independent quality assurance, traceability from requirement or feature request through code commits and manual and automated test cases, mandatory peer review, and the need to collect objective evidence of our processes.

Something that I don't see from external sources is a push to improvement. Often, these regulations are incredibly slow to change. So when new advancements come out, it takes a very long time (measured in years) to update regulations, if they even get updated to include changes. And often, these companies working in the safety-critical space are larger more bureaucratic organizations. This means that it's harder to change the internal processes.


I find myself agreeing with Dr. Nancy Leveson's comment that the emphasis is on coding and testing. But, as far as technical work goes, that almost makes sense. Everyone writes code (and everyone should be writing some kind of tests). However, not everyone is working to captured requirements or has to worry about traceability and so on.

I do agree that a broader look at the tools used to build software is good. And this means full-lifecycle tools: requirements through code into deployment. Traceability, static and dynamic analysis, visualization, forward and reverse engineering. All of these have their places. I think we're caught in a cycle. The group of people who need these tools to meet requirements is relatively small, so the tools don't improve. But in order to catch on, they need to be easier to use and more streamlined.

The only tools that I'm not convinced of are formal specification languages and model checkers. But then again, I've never worked at the extreme end of safety-critical software. I wouldn't be surprised if there is a subset of software-intensive systems where they make sense, but I just haven't experienced it yet.


I think that there are two things that are really hard to wrestle with that we, as software engineers, still need to deal with.

First is communication. I think it's really hard to communicate, especially across industries and domains. There's no good common terminology for sharing how we work. The same words and phrases are used to mean different things to different people. Likewise, the same concept can be represented by different words and phrases. And that's just if everyone is speaking English. It's hard to talk about the way we work when it takes a lot of effort to define common concepts.

Second is licensure. Especially in the world of safety-critical systems, there's no good licensure for software engineers. Part of this goes to how hard it is to communicate with each other. Part goes to the educational background of software engineers. Part goes to the sheer breadth of the field and all of the different environments and domains. I'm opposed to requiring blanket licensure of software engineers, but I do think that this is something that we should be able to talk about to make sure that individuals don't take shortcuts or provide cover for those who do. But it's a near impossibility until we can actually talk about our discipline and come to an understanding between people who work in regulated and critical environments and those who don't.

Blaine Osepchuk

I think it's almost unfair to say that Uncle Bob is representing the interests of all software developers. When I listen to his talks I get the feeling that his brand of professionalism is aimed at people operating at the craft end of the profession and less so at the developers who are working more like professional engineers. I wouldn't have written this post at all except that the article in the Atlantic was very clearly talking about safety-critical software and I thought Uncle Bob's response wasn't quite fair.

I'm not sure I understand what you mean by communication problems. Examples?

Licensing is interesting. Doctors do it, engineers do it, lawyers do it but so do plumbers, electricians, etc.. I don't see any reason why we couldn't do it. In fact, I think we should do it. Not everyone but if you want to work with credit card data or personal information like medical records you need a certain level of certification. If you want to work on software that controls hundreds of millions of dollars, that's another level. And if you want to work on safety-critical systems, you need to be licensed for that too. There could be all kinds of levels.

The problem with licensing is what do you require the developer to know and/or do to hold a license? The science in our profession is pretty thin. Is agile in or out? Is TDD in or out? Is it okay to write safety-critical software in Python? Is it okay to create a design without using UML? See the problems?

Thomas, as someone who has worked in highly regulated industries, where do you see things going in the future? As we try to build self-driving cars, add AI to everything, add IOT to everything, build all kinds of robots, and generally control more of the world with software, how do we:

  1. make this software safe as it gets more and more complex?
  2. increase the speed / reduce the cost of this kind of software?
  3. train enough software developers in the techniques required to safely build these kinds of systems?
  4. move promising techniques and/or tools into these highly regulated industries to improve the quality and cost profile of these projects?

Thomas J Owens

When I listen to his talks I get the feeling that his brand of professionalism is aimed at people operating at the craft end of the profession and less so at the developers who are working more like professional engineers. I wouldn't have written this post at all except that the article in the Atlantic was very clearly talking about safety-critical software and I thought Uncle Bob's response wasn't quite fair.

I don't think that there should be a huge divide in the industry. Good software engineering practices are useful to everyone. An individual practice may or may not be something good for an engineer, a team, or an organization to adapt, but it comes down to cost versus benefit. As you move closer to the safety-critical end of the spectrum of software-intensive systems, the benefit of doing something outweighs the cost, especially when you start talking about human lives at risk if there is a failure.

I'm not sure I understand what you mean by communication problems. Examples?

Someone says "Scrum", I think exactly what is described in the Scrum Guide, but they are referring to their adapted process that's based on the Scrum Guide. Sometimes, "design" is used to mean planning out the structure and organization of software ("big design up front" or waterfall design before code mindset), but writing code is a better example of design in software. There's no universally (or even widely, perhaps) accepted definition of what a "unit" (in the context of a unit test) is - some people use words like method or class to define a unit, but I prefer Wikipedia's definition that it's the "smallest testable part of an application".

The problem with licensing is what do you require the developer to know and/or do to hold a license? The science in our profession is pretty thin. Is agile in or out? Is TDD in or out? Is it okay to write safety-critical software in Python? Is it okay to create a design without using UML? See the problems?

These are the problems. I think the first sentence of the Manifesto for Agile Software Development is very applicable, even to people who aren't following any of the other principles: "We are uncovering better ways of developing software by doing it and helping others do it." We're all learning here. We can learn from the people who have gone before us and from each other, but we're all in a different situation with regards to customer expectations, legal or regulatory requirements, tools and technology. There's no silver bullet software process or framework, so it's hard to build a licensure test.

NCEES already offers a PE exam in software engineering. The IEEE's Guide to the Software Engineering Body of Knowledge also exists. And there are common things that software engineers should be able to talk about. However, due to the nature of the field, software engineers tend to specialize. I'm familiar with a good breadth, but my depth is in development processes (although I write code, I was also a process improvement engineer at my last job and a Scrum Master at my current job) and project management. There's plenty of important stuff that I would need to do research on to tell you what the current state of affairs is. But the key is that my education gave me exposure to be able to go out, do that research, and understand it (or know that I need to do more background research and then understand it).

But these problems are only compounded by the communication problems I identified above. Licensure is usually handled by laws. Laws define what work requires a license and work that does not require a license. If we can't craft laws that are very clear about what kind of work requires a license and what kind of work doesn't and then provide current and relevant content on the exam, then we won't be in a better place.

Thomas, as someone who has worked in highly regulated industries, where do you see things going in the future?

I think a few things.

We need to get better at ethics and ethical decision making. We've been fortunate with respect to lives, but also look at Uber's Greyball or the Volkswagen emissions scandal. Software is everywhere and companies have access to vast amounts of data about users - location data, payment information, the websites visited, and so on. Even things that aren't safety critical (where failure can lead to injury or death) need to consider the public good and respect the rights and privacy of users.

We need to get better at communication. All of us, no matter where we work, should be talking (to the extent possible, of course - not sharing proprietary information or other sensitive information) about the way we work, why we work that way, and the tools and technology that facilitate our work. I don't think we should expect everyone to work in the same way, but I think more people should understand what is expected of engineers in regulated industries and why the regulations are. I think the regulated industries have a lot to teach everyone else about good practices, even if the set of processes and practices as a whole doesn't make sense for everyone else.

Specifically in regulated industries, I think that the regulations and auditors need to be more open-minded about alternative ways to achieve the requirements and objectives of the regulations. My current company is very agile and has managed to tie an agile mindset to the requirements. My previous company was trying to move in that direction when I left, but it's an uphill battle when everyone is used to or expecting a certain thing and you give them something different. Organizations should be encouraged to try to look at new good practices and find ways to incorporate them into their way of working and to disseminate information about the way they work outwards.

Blaine Osepchuk

Good points, Thomas.

Yes, I see the communication issues now that you've given some examples. This is, indeed, a common problem in our industry.

Regarding training and licensing, I once dated a doctor going through her residency and that's a very interesting model (this is in Canada - other countries are likely different). Doctors must graduate from medical school but then during their residency, they do rotations in all kinds of areas so at the very least they know how the different specialties work. It doesn't matter if you are going to be a family doc or a brain surgeon, you rotate through an array of specialties, even though you spend most of your time learning your specialty. Then at the end of your residency you need to be recommended by your superiors to take 'the board' exams. If you pass you become a licensed doctor. If not, you may be given a chance to repeat the year and retake the exam or they might decide you don't have what it takes and end your career right there.

That would be an interesting licensing system for software developers who wanted to be licensed. Of course, we don't have the institutions to support such a system, but imagine if we did for a moment. Think of the potential quality of the developers that made it through something like that.

Thomas J Owens

Not only do we not have the institutions to support that kind of system right now, the educational systems that produce software engineers are sometimes very different from those that produce other engineers, doctors, and lawyers. Not all software engineers go through traditional college or university education. And even those that do may not go through any kind of formal education in computing. However, these people are just as capable of being great software engineers.

Part of the reason for this is the low barrier to entry. The tools and resources needed to design and build software products are much more accessible than tools and resources needed to build many other types of engineering products. This, plus the easy-to-obtain educational resources make all of this possible.

Any kind of system needs to consider people who don't have formal education in computing.

Blaine Osepchuk

Good points. We could also add non-native English speaking to the list as well.

Does anyone have direct experience with the training offered by the Software Engineering Institute at Carnegie Mellon? They offer courses that appear to be semi-on-point here. I've looked at these courses before but they are government/defense focused and I work in a small business so I didn't get far.

mr-bandit

I do mission-critical embedded systems. I had the traditional CS education as of 1980: state machines, data structures, OS theory, language theory. We had to also show, via projects, we understood the theory.
I was talking to a fairly recent graduate of my college. Turns out the EE courses teach embedded systems, but the CS program doesn't. I shudder at the idea of EEs writing firmware - I have seen the results. Nothing against EEs, but hardware and software are two distinct disciplines. The mindset is different. I am planning on discussing this with current professors, both on the SW and EE side.
It seems to me part of the problem is silos in education. Another is while many software principles are the same, domains have their own unique capabilities and requirements. Rather like medicine.

Blaine Osepchuk

Good points.

My experience in hiring and working in industry is that new CS grads are basically beginner-level programmers unless they taught themselves how to write code well. CS degrees don't teach the right things to people who want to hit the ground running in industry. And silos are definitely part of that.

However, I have seen counter-examples too. For instance several universities work on cubesat projects. The CS people write the software, the EE people do electronics, the aerospace engineers do their part, the marketing people do marketing and promotion, etc. I'm sure it's not perfect but it's a step in the right direction.

Cheers.

mr-bandit

I would like to see a reasonably complex project either at the last part of Jr year or the beginning of the senior year, where it combines FW/EE/MechE. Maybe a robotics competition between several teams. Something that is relatively close to the kind of project they will do professionally. Judging would be about the entire process (requirements, project plan, schematics, code, hardware plans, etc). Some sort of cash prize. If you want marketing (shudder :^) involved, then maybe the goal is a kit that could be sold for other robotics competitions.
Part of real projects are the restrictions. For example, the materials used, size of PCB, small microprocessor (I would suggest an Arduino Pro-Mini or similar), total amount under $100, number of students on a team (1 FW, 1 HW, 1 MechE) etc. Most makerspaces have the resources if the college does not have them. I can see this growing to several colleges being involved.

Comments? Ideas?

Blaine Osepchuk

I'm in favor of anything that helps schools produce more successful graduates. How exactly that is best achieved is not my area of expertise. I have friends who are teachers and when they start talking about how teaching and learning actually work I am reminded that they have specialized knowledge too.

I've heard many different ideas for how to turn out better grads.

One grad had a senior level class where they had to contribute to an open source project and add a significant feature as a small team. Then the next term the following class had to do the same so people got experience maintaining and extending working code, submitting pull requests to "real" programmers, and writing code that has to actually work in production for longer than a three month school term.

I like that idea. Very little industry time is actually devoted to greenfield projects, especially for new grads. And even new projects aren't new for very long. Writing a few thousand lines of code to make an Arduino do something cool is one thing but a more realistic scenario for a new grad is squashing bugs and adding small features in a poorly written 100+ KLOC code base. And those are very different skills.

I don't love robotics as a school project because of the cost and effort required to make anything remotely impressive and because it's very embedded-focused, which isn't where most grads in North America will work. I realize how hypocritical I sound considering I was the one who mentioned cubesats but interacting with the real world is very hard. Simple appliances and consumer electronics might work if you want to promote embedded. I've built a few gizmos for my house over the years and I've always enjoyed working on them.

But, like I said, not my area of expertise. Robots definitely have a wow factor.

Cheers.

mr-bandit

I have seen first hand how robots get kids interested in STEM. Check out roborave.org/ as a good example. Robots are this generation's "drug vector of choice" into STEM. You gotta get them hooked on engineering and science. Roborave starts at 4th grade through college. One of the cool things is it's truly international. It got so big, they had to branch South America off as a completely separate entity.

Different competitions need different resources. Roborave teams might spend $200 on a robot. The big ones can be fairly costly. The tag line is "Today's play, tomorrow's pay"

The big hook is robots are fun. Robots have a unique characteristic: there are a lot of problems that each take 3..5 days to solve. This gives the kids a lot of problems - increases their problem-solving skills, and they keep at it. These two aspects are critical, because it keeps them going.

A friend is a teacher of robotics at the Jr High and Sr High level. He enters his kids in 5..6 competitions per year. The kids are so into it he has to go to the robotics lab at midnight every day and tell the kids to GO HOME.

You mentioned open source projects. Google Summer Of Code (GSoC) is exactly that. An applicant submits a proposal to fix some part of an open source project. The proposals must be realistic - the work is done over a summer. The winners are paired with mentors in industry - Google puts up $6K for a successful project. My son did it one summer and had a blast. It gave him a real taste of the real world. His mentor was very impressed.

The problem with GSoC is the number of slots. It is world-wide, but the USA has 125 slots.

Would I like to see High school kids participate in something like that? Yes. Make it a class in the last period, so they are not forced to stop at the end of the class. The biggest problem I can see is finding teachers that are tech-savvy enough for it. A robot is fairly constrained. Finding holes in open-source code && aiding the students in fixing them is a more focused skill set. Not impossible, just harder. Might need to find retired engineers (who have been programming for decades) to come in as mentors.

Blaine Osepchuk

Yeah, I think you've unintentionally found the problem: robots get people interested in STEM and then all those bright, excited people get to university and don't learn the right things to be successful in industry when they graduate (and probably waste a lot of time in completely useless classes along the way).

mr-bandit

My college CS program (late 70's) gave me the useful tools for learning new things. The most useful are OS theory, data structures, and state machines. It took being in industry to learn the basics of systems engineering (SE). So I basically agree with you. The basic problem is time. The entire CS program would need to be engineered around making SE an integral part of the program. Meaning every class with a project starts with the SE process and carries it through the entire project, and the grades also depend on the SE process. Given what I know of academic departments, this is non trivial.

mr-bandit

I agree with you. I would like to see licensure, but my concern is who writes the test. What tools are assumed. Perhaps the INCOSE model.

Adrian B.G.

"Experts in the field don't rank programmer professionalism and discipline anywhere near the top of their priorities for preventing loses."

It will happen if more accidents turn out to be software related. The number of source code lines is growing exponentially, the number of physical parts is not, so the chance of software failure will increase.

The "JavaScript" generation will have to replace the "Cobol/C" micro controllers programmers sometime, because of the age. What will happen with the professionalism then?

It may be that the current developers who work on software that can kill people are working to a higher standard, but these things will probably change soon. The demand is growing, costs have to be reduced, the education system is flawed, most developers don't apply standard quality checks (globally speaking), etc.

We should take all the info with a grain of salt. Uncle Bob is a human after all; he makes mistakes and he thinks differently than you and me. I admire him, but I take his lessons through my firewall.

I think Uncle Bob's "awareness campaign" is good but way too small compared to the industry; it still hasn't reached the mainstream. Even if he hurts some programmers' feelings, I think he should continue and make these statements at a larger scale; maybe it will prevent future bad things from happening.

Blaine Osepchuk

Good points, especially:

  • the software part of the system is growing faster than the hardware parts, making future failures more likely in software
  • the older generation isn't training a newer generation to fill its shoes

Cory Collier

Blaine,

First of all, I appreciate this piece very much. I really like the references you have here. I don't see many people quoting that Fast Company article, but it's been a mainstay of professionalism arguments I've had for a long time.

I didn't take from Uncle Bob's article that professionalism was lacking from developers. I do agree that better tools might help with improving code quality.

My experience has been, and continues to be, market forces tend to drive code quality down. Most developers that I know, given the time, would make code that is very high quality. The challenge there, is that time costs money, and those who are driving financial decisions want to spend as little money as possible for their product.

On something like an automated car, errors could kill people. Death of people costs a lot of money. For that reason, more time is allotted to ensure the car doesn't hurt people.

For something like a marketing website, errors could simply cause the website to not work. This isn't likely to be a very expensive problem, so less time is put into craftsmanship.

I agree with you - the problem isn't professionalism. The problem is investment.

Blaine Osepchuk

Thanks, Cory. I couldn't agree more.

Management often thinks the cheapest software has few QA controls but that's a fallacy (construction is cheaper but they pay a (often huge) premium for testing, debugging, and maintenance).

Software with no QA controls is at high risk of never being released. And it will almost certainly be full of bugs if it is released. At the other end of the spectrum is safety-critical software where they spend extreme amounts of money trying to ensure the defect rate is extremely low. And the maximum ROI for an average project uses a moderate number of QA controls. They catch lots of the defects at a fairly low cost per defect discovered.

Most of the projects I've seen and software I've used almost certainly could have been developed more cheaply if they moved a little to the right (or more than a little) on the spectrum and improved their quality.

mr-bandit

Boeing failed with the 737 MAX because management wanted to save some money. It cost them way more than they saved. They ignored the engineers.

I was just on a DAL A DO-178C project. An enormous effort went into the process. Everybody on the team were highly skilled and experienced. The company put in sufficient resources.

Thing is, if someone is not up to the required level on a project, it becomes extremely obvious very quickly. If someone is an intern, or a new hire, that is fine because the team recognizes the situation and handles it accordingly. We all started somewhere. But I have been on a couple of teams where it was painfully obvious that someone who was supposed to be at a particular level was not. It then becomes a management problem.

mr-bandit

Addendum to below.
See spectrum.ieee.org/aerospace/aviati...

[quote] I have been a pilot for 30 years, a software developer for more than 40. I have written extensively about both aviation and software engineering. Now it’s time for me to write about both together.

The flight management computer is a computer. What that means is that it’s not full of aluminum bits, cables, fuel lines, or all the other accoutrements of aviation. It’s full of lines of code. And that’s where things get dangerous.

Those lines of code were no doubt created by people at the direction of managers. Neither such coders nor their managers are as in touch with the particular culture and mores of the aviation world as much as the people who are down on the factory floor, riveting wings on, designing control yokes, and fitting landing gears. Those people have decades of institutional memory about what has worked in the past and what has not worked. Software people do not.

In the 737 Max, only one of the flight management computers is active at a time -- either the pilot’s computer or the copilot’s computer. And the active computer takes inputs only from the sensors on its own side of the aircraft. [/quote]

This is a violation of safety-critical systems: you never have a single point of failure. This was a management failure. They dictated there could not be any hardware changes to the aircraft, and there would be no additional training for the pilots.

[quote] It is astounding that no one who wrote the MCAS software for the 737 Max seems even to have raised the possibility of using multiple inputs, including the opposite angle-of-attack sensor, in the computer’s determination of an impending stall. As a lifetime member of the software development fraternity, I don’t know what toxic combination of inexperience, hubris, or lack of cultural understanding led to this mistake.

But I do know that it’s indicative of a much deeper problem. The people who wrote the code for the original MCAS system were obviously terribly far out of their league and did not know it. How can they implement a software fix, much less give us any comfort that the rest of the flight management software is reliable? [/quote]

I'm not sure this is completely true. What is known is management put severe restrictions on the redesign such that failure was inevitable.

nameless

I think we're not comparing apples with apples.
Go to any non-regulated industry (and even some regulated ones) and you do see a lot of sloppy code practices and devs who don't give a shit, and don't get me started on the el cheapo consultancies bringing in freshly minted 'experts' (who did a 6 week crash course on some new tech) who are happy to just forward emails around because they don't want to take responsibility for anything.
But if you're talking about industries in which there is significant loss of human life - then you'd find the majority of those devs are highly professional, not Silicon Valley fratboys or cheap imports from a third world country. Most of those devs have solid engineering backgrounds and are very professional.

So you're right - professionalism is not really an issue for certain industries, as those industries can pay well enough and are regulated enough to enforce professionalism. But Uncle Bob is also right, as the majority of non-regulated industries are populated by cowboys and developers who don't give a f#

Harold Combs

I rather think it's both. We need better tools, and we need to become more professional.

The central tension in most of these articles seems to be thus:

  • Programmers have a God-complex. We love demonstrating how smart we are. We can't stand to be wrong.
  • Our work products demonstrate "we" are wrong quite often.

Those two positions are irreconcilable. So, we subdivide "we" and say "It was a requirements problem" or "It was a process problem." This is a cop-out, IMO.

Better Tools

If you look at software development over the past 80 years, it's been a constant move towards better tools:

  • Raw plug-boards
  • Stored program / Von Neumann architecture
  • Assembly supplanting binary OpCodes
  • Compiled languages like FORTRAN and COBOL
  • structured programming (limiting usage of GOTO)
  • encapsulation via O-O
  • Message passing/Immutability supplants mutable shared state.

What's sort of interesting in the above is it's the history of limiting ourselves. We no longer re-wire the hardware, we write programs on general purpose CPUs. We no longer use GOTO; we structure control flow.

Why do that? Isn't the lower-level stuff more powerful?

Sure it is, but it's also running with scissors.

I feel like right now, with the re-emergence of FP and truly immutable, message-passing designs like in Erlang, things might just be getting more stable. Complexity is going up, but if we build out of atomic, encapsulated units that are themselves well-tested, reliability and fault-tolerance should increase.

Yet...More Professional

I'm on the record as being no particular fan of Uncle Bob. His approach is off-putting, strident, and self-referential. You failed because you weren't "good enough" or "strict enough." This stinks of "XP fails because you weren't XP enough." Let's get past that, okay? Waving "Clean Code" around like scripture gets you nowhere, particularly with management or customers.

That said, he makes good points. If we're honest with ourselves, as a profession we absolutely slip everything to make schedule. We do what's needed to be done, often because we under-estimate.

Abel Gaxiola

I see it all the time: no unit testing, no integration testing, no code reviews, no documentation, no... I enjoy listening to Uncle Bob and also enjoy his books and articles. And yes, I do believe that Uncle Bob is serious.

For me, the answer is to do the best I can, with the tools I have. That is why I decided a long time ago to never stop learning and always strive to improve my skills. And yes, it takes discipline and love for the profession.

david watson

The solutions put forth by you and Nancy and Uncle Bob are not mutually exclusive, but they are missing an important enabler of bad software: management. At the end of the day, good requirements and professionalism take time, aka cost. The potential upside is quality. I say "potential" because the financial success of a thousand startups (despite the failure of 10x that number) has taught us that rewards accrue to those who frequently chose "right now" over "right" when asked about shipping - damn the QA. In that sense, durability requirements of "runs once" vs. "runs a thousand times without failure" are important and frequently difficult to reverse engineer if you got them wrong when you shipped the prototype. But those decisions still need to be made explicitly or they will emerge through bug reports much to the disdain of those management enablers. So that leaves us with requirements, professionalism, and management. Call it organizational dynamics - the single most important cultural attribute is what we call "safety": all else being equal, everyone on the team must feel safe to speak truth to power, because getting to the truth is really what engineering is all about.

Blaine Osepchuk

You make very good points, David.

If management wants its devs to produce the cheapest, crappiest, most buggy software on the planet, it's probably next to impossible for individual devs or even a whole team of them to resist that pressure in a significant way and deliver quality product.

mr-bandit

The Boeing 737 MAX root cause was management. Not engineering.

Kevin R Stone

Professionalism is a hard term to define, but the goal is high quality software.

High quality software is built when quality is valued over other things. This can happen both at the individual or institutional levels. A good professional developer can point out missed requirements, unhandled states. A good professional company can also catch these ahead of time with good process and by eliciting feedback from experts.

In Bob's article about tools, he mostly argues that while tools are great and can help us avoid mistakes, they are useless for people who don't care to use them correctly. At least that's the gist I get. If you value quality over other things, new tools can make you an even better developer, and possibly save you from disaster. If you don't, then you will likely either not use the new tools, or use them incorrectly.

ratmice

Another good talk by Perdita Stevens.
youtube.com/watch?v=mx9eqyXrNAk

Blaine Osepchuk

Thanks. I'll have to check it out.

Brad Moore

Your article, like so many other replies that I've read, curiously does not cite the Atlantic article. Which makes me wonder, "Did you read it?" It's easy to remove Uncle Bob's argument from its context and then knock it down, but it's cheap and lazy.

Nicolas Bousquet

Processes are able to detect some easy-to-spot defects, like software that fails its unit tests. They may enforce 100% code coverage, but they can't ensure that the unit test suite actually makes sense.

Processes can ensure there is a requirement associated with each line of code, but not that the association actually makes sense or that the requirement is a good idea.

Processes and QA can only detect certain types of problems, and they do not provide any creativity or intelligence by themselves.

The humans behind them, the individuals who do the work, provide that, and different teams will achieve different results. There are failures in critical systems too, and many companies applying all the recent methodologies and best practices fail to upgrade legacy software that is working fine now, has been for dozens of years, and was most often built with far less tooling available.

The human factor is key in any project because humans are the ones implementing the project. A mediocre team of developers, managers, product owners, and the like will fail more often, be slower, and produce software with more bugs, including bugs in critical systems. There's no way around that.

You can give them the best tools in the world and they will fail to leverage them.

Blaine Osepchuk

I agree. If you don't have quality people, the best tools and processes will not be enough to deliver outstanding software.

Steve Mushero

Traditional safety-critical stuff such as avionics are indeed most excellent, but it seems we are sliding down a slope to mediocrity as the amount of software goes up many orders of magnitude, including self-driving cars, IoT of all kinds (including industrial), and much more.

There are not 1000x the skilled practitioners (nor investor patience) that we have historically had - something has to give, and I fear it's safety.

To me there is ample evidence in many of these areas that "programmers are generally undisciplined" and/or there is not time, patience, investment, nor willingness to really do it right - we'll see when we get 100+ auto-driving cars out there, drones over our heads, and everything connected to the Internet.

Blaine Osepchuk

I couldn't agree more, Steve.

SpaceX is somehow using Linux for basically everything: in its rockets, the Dragon capsule, launch control and monitoring. The Linux kernel is not built to safety-critical standards so I'm not sure how they are getting away with it. NASA made a fuss about SpaceX's software development practices a couple of years ago and that all kind of faded away.

Here's a great talk about the concerns people have about using linux in safety-critical settings:

I believe software developers--more or less--deliver the software that their employers truly want (what they say they want is often different).

An employer may say they want secure software with low defect rates but they don't provide training, they don't implement the practices or use the tools that we know lead to better software, the requirements keep changing, the staff have questionable skill, they insist on an aggressive schedule, etc.

So, yeah, I think cars, IOT systems, and drones will kill people. Data breaches aren't going away either. There are only two things that I think might bend the curve here:

  • regulations with teeth and strong enforcement
  • software liability laws need to change

Steve Ziegler

Great post. This is unique content and much more stimulating than the typical "Java is dead!" post. I read it for the headline and was shocked to see a reference to Professor Knight, my CS340 professor at UVa! He was into formal specification methods based on math and set theory, and documented in mathematical formulas.

While I appreciated the accuracy of that approach in school, I realized its limitations in the real world. I work for a consulting firm, not with safety-critical systems, but with other important, mission-critical systems. I don't think most of the customers, developers, or testers can easily read or understand formal specifications. Even with the most accurate specification, it comes down to thorough testing and monitoring to know if it's met.

I believe in Humphrey's "Law" that customers don't know what they want until after the system is in production (maybe not even then). Complex problems aren't fully understood in the beginning on paper. Quick iterations and prototype testing in the wild often raise important, unforeseen issues hidden by complexity and false assumptions. I really love Henrik Kniberg's post on how his kids won a Lego robot competition by using an iterative design/build/test approach and going against common trends.

I think the right balance of upfront design, iterative testing, and software professionalism is required to make any system, especially safety-critical systems, work correctly.

Thanks for writing and keep it up!

Blaine Osepchuk

Thanks for the kind words, Steve.

I totally agree with your comments.

I read an old engineering book a few years back and the author talked about how the Brits built the wings for a particular fighter aircraft with which the author was involved.

They needed the wings to be 'just strong enough'. Any extra material wasted precious resources; it also required more fuel and reduced range, speed, and maneuverability. They didn't have a supercomputer to run a simulation. So they built the wings iteratively. They started with a wing they thought wasn't strong enough, turned it over, and loaded it with sandbags until it broke. Then they reinforced the weak point and repeated until the wing was strong enough. Brilliant, right?

Have a good one.
