Blaine Osepchuk

Originally published at smallbusinessprogramming.com

The ONE chart every developer MUST understand

Our industry is famous for delivering projects late and over budget. Many projects are cancelled outright and many others never deliver anything near the value we promised our customers. And yet, there is a subset of software development organizations that consistently deliver excellent results. And they've known how to do it since the 1970s. In this post I'll tell you their secret.

It all starts with understanding this one chart from Steve McConnell. It describes the relationship between defect rates and development time.

Chart showing the relationship between defect rate and development time.

This chart says that most teams could deliver their software projects sooner if they focused more effort on defect prevention, early defect removal, and other quality issues.

But is this chart true?

Steve McConnell published this chart in a blog post titled Software Quality at Top Speed in 1996. This chart (and the blog post) summarizes some of the data in his excellent book Rapid Development. And that book is based, in part, on the research of Capers Jones up to the early 1990s. I'm throwing in all these dates because I want you to know just how long we've known that:

In software, higher quality (in the form of lower defect rates) and reduced development time go hand in hand.

Anyway, Capers Jones kept doing his research and released another book in 2011 with co-author Olivier Bonsignour, titled The Economics of Software Quality. They analyzed over 13,000 software projects from over 660 organizations between 1973 and 2010 and collected even more evidence that:

... high quality levels are invariably associated with shorter-than-average development schedules and lower than average development costs.

In other words, Steve McConnell's chart is true.

So what's the problem then?

There are three problems.

Problem 1: we're ignoring the research

The majority of projects are run as if this chart isn't true. Hardly a day goes by when I don't hear of some project or someone exercising poor judgement and then predictably getting smacked down by the universe for it. Literally billions of dollars are lost every year to this foolishness. It's been going on since we started programming computers. Every developer has experienced it. And there's no end in sight.

For example, pressuring yourself (or succumbing to external pressure) to go faster by cutting corners is almost guaranteed to increase your defect rate and slow down your project. Yet it happens all the time!

But the problem runs deeper than that. Managers are responsible for the worst project disasters. We have people running these projects who, while well-intentioned, have little idea what they are doing. Many of their projects are headed for disaster from the outset (see the "classic" software mistakes below). And by the time they realize that their project is in trouble--usually months after the developers reached the same conclusion--it's often too late to do much about it.

Problem 2: small project development practices don't scale well

The development practices that work relatively well for small projects don't scale to large, real-world projects. Small projects are the only kind of projects most students work on, so they graduate with the false impression that they know how to develop software. But they've been building the equivalent of garden sheds when we are trying to hire them to build the equivalent of skyscrapers. A skyscraper isn't just a really big garden shed; they are completely different things.

Garden sheds aren't the same as really big skyscrapers

And because so few organizations do software development well, many teams employ garden shed-appropriate methods to tackle skyscraper-sized problems. So these poor developers think chaos, confusion, bugs, conflicting requirements, endless testing cycles, missed deadlines, stress, piles of rework, and death marches are all normal parts of software development.

Problem 3: many teams don't have the required skills

You need more than raw technical skills to achieve low defect rates in real-world projects. You need a whole suite of organizational, managerial, and technical strategies and tactics to pull this off. You'll almost certainly need additional training for nearly everyone in your organization, and you'll need to embrace different development practices.

What does it take to achieve that 95% pre-release defect removal rate?

For most organizations, it will take quite an adjustment to achieve that 95% pre-release defect removal rate. But the good news is that even modest improvements in pre-release defect rates will positively impact the economics of your project.
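
If you want to know where your project stands relative to that 95% target, the underlying metric, defect removal efficiency (DRE), is easy to compute once you're counting defects. Here's a minimal sketch in Python; the 90-day post-release counting window follows the convention Capers Jones uses, and the example counts are made up purely for illustration.

```python
def defect_removal_efficiency(found_before_release: int, found_after_release: int) -> float:
    """Fraction of known defects removed before release.

    By convention (Capers Jones), found_after_release counts defects reported
    in roughly the first 90 days of production use; adjust the window to
    whatever your team actually tracks.
    """
    total = found_before_release + found_after_release
    return found_before_release / total if total else 1.0


# Illustrative numbers only; substitute your own defect counts.
pre_release = 460   # found by reviews, static analysis, and testing
post_release = 40   # reported by users in the first 90 days

dre = defect_removal_efficiency(pre_release, post_release)
print(f"Defect removal efficiency: {dre:.1%}")  # -> 92.0%, short of the 95% target
```

Even tracking this one number over a few releases will tell you whether your quality program is moving in the right direction.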

With that in mind, I suggest the following steps:

  1. Accept the truth that high-quality software is faster and cheaper to build than low quality software
  2. Be aware of Steve McConnell's "classic" software mistakes
  3. Use memory-safe languages whenever possible
  4. Start improving your development practices

Let's dive in.

Accept the truth that high-quality software is faster and cheaper to build than low quality software

If you need more evidence than you already have, read The Economics of Software Quality to truly convince yourself and your teammates that this chart is telling the truth. The authors of this book leave very little doubt that:

The best available quality results in 2011 are very good, but they are not well understood nor widely deployed because of, for one reason, the incorrect belief that high quality is expensive. High-quality software is not expensive. High-quality software is faster and cheaper to build and maintain than low quality software, from initial development all the way through total cost of ownership.

Furthermore:

If state-of-the-art combinations of defect prevention, pretest defect removal, and formal testing were utilized on every major software project, delivered defects would go down by perhaps 60% compared to 2011 averages.

Why is that?

Low quality projects spend much more time on testing, debugging, fixing, and rework than high quality projects. In fact, low quality projects contain so many defects that testing often takes longer than construction. And low quality projects frequently stop testing long before they run out of bugs to find.

On waterfall projects, they either release the software as is or cancel the project, because otherwise the testing and fixing would go on forever. On agile projects, increments of work are completed quickly at first, but progress slows to a glacial pace as more and more problems are discovered in existing code. Eventually, low quality agile projects face the same options as waterfall projects: release as is or cancel the project.

High quality projects invest in defect prevention and pretest defect removal activities so that when they do get to testing, there are many fewer defects to find and fix. High quality projects are released sooner and cost less than low quality projects because they have much shorter testing phases and much less rework. And high quality projects also have fewer post-release issues to fix. So when they do need to make changes, the code in high quality projects is easier and cheaper to modify.
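
To see why the schedule math works out this way, here's a toy model. Every number in it is invented for illustration (neither book reports these exact figures); the only assumption baked in is the well-supported one that a defect caught before testing is several times cheaper to remove than one that has to be found and fixed during testing.

```python
def total_effort(construction_days: float, defects_injected: int,
                 pretest_removal_rate: float,
                 pretest_days_per_defect: float,
                 test_days_per_defect: float) -> float:
    """Construction effort plus the effort to remove all injected defects,
    split between cheap pretest removal and expensive test-phase removal."""
    removed_early = defects_injected * pretest_removal_rate
    escaped_to_test = defects_injected - removed_early
    return (construction_days
            + removed_early * pretest_days_per_defect
            + escaped_to_test * test_days_per_defect)


# Assumed: finding/fixing a defect in review or static analysis takes 0.1
# person-days; finding/fixing the same defect during testing takes 0.5.
low_quality = total_effort(100, 500, pretest_removal_rate=0.10,
                           pretest_days_per_defect=0.1, test_days_per_defect=0.5)
high_quality = total_effort(100, 500, pretest_removal_rate=0.80,
                            pretest_days_per_defect=0.1, test_days_per_defect=0.5)

print(f"low quality:  {low_quality:.0f} person-days")   # 330
print(f"high quality: {high_quality:.0f} person-days")  # 190
```

The high quality project spends more effort before testing but finishes well ahead overall, which is exactly the shape of McConnell's curve.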

Let's look at some key points from The Economics of Software Quality:

  • Overall quality levels have not changed much between 1973 and 2010. IDEs, new languages, interpreted languages, automated testing tools, static analysis tools, better libraries, frameworks, continuous integration, thousands of books, Agile, Scrum, XP, OOP, TDD, and the whole fricking web haven't moved the needle! That's just depressing. (I know someone is going to argue that this point can't be true. Feel free to look it up on pages 538 and 539 of The Economics of Software Quality).
  • In low quality software projects, nearly 50% of the effort is devoted to finding and repairing defects and to rework.
  • Defect rates rise faster than project size (see the sketch after this list). That means the things you need to do to ensure a 95% pre-release defect removal rate in a 5 KLOC project are completely different than the things you need to do in a 500 KLOC project. Bigger projects not only need more QA activities but they also need different QA activities. Remember, a skyscraper isn't just a really big garden shed.
  • Testing has been the primary form of defect removal since the software industry began and for many projects, it's the only form used. That's a shame because testing is not that effective. Even if you combine 6 or 8 forms of testing you can't expect to remove more than 80% of the defects in large systems.
  • Defect prevention methods include reuse, formal inspections, prototyping, PSP/TSP, static analysis, root cause analysis, TDD, and many others. The factors that hurt defect prevention the most are excessive requirements changes, excessive schedule pressure, and no defect or quality measures.
  • The most effective forms of pretest defect removal are formal inspections and static analysis. But the authors also discuss 23 other methods of pretest defect removal, their range of expected results, and when you might want to use them.
  • The ROI on the better forms of pretest defect removal is more than $10 for every $1 spent.
  • This book discusses 40 kinds of testing, their range of effectiveness, and when you might want to use them.
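
Here's the sketch promised in the list above, illustrating how defect rates outpace project size. It leans on a Capers Jones-style rule of thumb that total defect potential grows roughly as function points raised to a power of about 1.25; treat the exponent as illustrative rather than as a measurement of your own project.

```python
# Illustrative only: defect potential grows faster than linearly with size.
# The ~1.25 exponent is a commonly quoted Capers Jones-style rule of thumb;
# substitute your own measured data if you have it.
EXPONENT = 1.25

def defect_potential(function_points: float, exponent: float = EXPONENT) -> float:
    """Rough total defects injected from all sources (requirements, design,
    code, documents, bad fixes) for a project of the given size."""
    return function_points ** exponent

for fp in (100, 1_000, 10_000):
    potential = defect_potential(fp)
    print(f"{fp:>6} FP: ~{potential:>7.0f} defects (~{potential / fp:.1f} per function point)")
```

In this sketch the defect density per function point roughly triples between the small project and the large one, which is why garden-shed QA practices stop working on skyscraper-sized systems.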

I think this book's true value comes from the evidence it provides to help you argue against ideas and practices that have an especially negative effect on the cost and schedule of your project. For example, after reading this book it's hard to argue that you don't have time for code reviews.

Be aware of Steve McConnell's classic software mistakes

In chapter 3 of Rapid Development, Steve McConnell lists 36 "classic" software mistakes.

Falling victim to even one of these mistakes can condemn your project to slow, expensive development. You need to avoid all of them if you want to be efficient.

They are:

  1. Undermined motivation
  2. Weak personnel
  3. Uncontrolled problem employees
  4. Heroics
  5. Adding people to a late project
  6. Noisy, crowded offices
  7. Friction between developers and customers
  8. Unrealistic expectations
  9. Lack of effective project sponsorship
  10. Lack of stakeholder buy-in
  11. Lack of user input
  12. Politics placed over substance
  13. Wishful thinking
  14. Overly optimistic schedules
  15. Insufficient risk management
  16. Contractor failure
  17. Insufficient planning
  18. Abandonment of planning under pressure
  19. Wasted time during the fuzzy front end
  20. Shortchanged upstream activities
  21. Inadequate design
  22. Shortchanged quality assurance
  23. Insufficient management controls
  24. Premature or overly frequent convergence
  25. Omitting necessary tasks from estimates
  26. Planning to catch up later
  27. Code-like-hell programming
  28. Requirements gold-plating
  29. Feature creep
  30. Developer gold-plating
  31. Push-me, pull-me negotiation
  32. Research-oriented development
  33. Silver-bullet syndrome
  34. Overestimated savings from new tools or methods
  35. Switching tools in the middle of a project
  36. Lack of automated source-code control

Rapid Development was released over 25 years ago. Yet 35 of those classic mistakes are still extremely common (Subversion and Git have largely solved #36, "lack of automated source-code control").



Update: 2021-08-17

Check out Steve McConnell's updated and improved list of "classic software mistakes."


Use memory-safe languages whenever possible

This suggestion didn't make it into either book but I think it's an important point in 2019. Around 70 percent of all the vulnerabilities in Microsoft products addressed through a security update each year are memory safety issues.

It's pretty clear at this point that humans just aren't capable of programming large systems in memory-unsafe languages like C and C++ without making an astounding number of mistakes or spending embarrassing amounts of money to find and remove them.

If Microsoft can't keep those errors out of their software, it's unreasonable to think you can do any better. So, if you're starting a new project choose a memory-safe language. There are memory-safe languages for even the most demanding domains so don't let your concerns about the performance implications or compatibility issues stop you from checking them out.

Start improving your development practices

Okay. So, you now believe the chart is correct and you further believe you are to the left of the optimum. What now? Try to get your organization moving down the curve towards the optimal point on the chart, of course.

Your first step is to get buy-in to the fact that low quality is a problem on your project. Since quality is a problem on most projects, it shouldn't be hard to come up with evidence to support your case. Chapter 2 of The Economics of Software Quality will help you set up a defect tracking system to track the right things and avoid common measurement pitfalls. Having hard data about the cost of low quality will help you make your case.
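
You don't need anything elaborate to start measuring; a few extra fields in your existing bug tracker (or even a spreadsheet) are enough. The sketch below shows the kind of per-defect record and summary that makes the cost of low quality visible; the field names and hour figures are mine, not the book's.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Defect:
    injected_in: str    # e.g. "requirements", "design", "coding"
    found_in: str       # e.g. "review", "testing", "production"
    hours_to_fix: float

# Illustrative records; in practice these come from your bug tracker.
defects = [
    Defect("requirements", "production", 40.0),
    Defect("design", "testing", 12.0),
    Defect("coding", "testing", 6.0),
    Defect("coding", "review", 1.5),
]

# Average repair cost grouped by where the defect was *found*. Showing how
# much more a late-caught defect costs is usually what gets management's
# attention.
totals, counts = defaultdict(float), defaultdict(int)
for d in defects:
    totals[d.found_in] += d.hours_to_fix
    counts[d.found_in] += 1

for phase in sorted(totals, key=totals.get, reverse=True):
    print(f"{phase:<11} avg {totals[phase] / counts[phase]:5.1f} hours per defect")
```

A few months of this kind of data usually goes a long way toward making your case.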

Your next step is to convince yourself and your team to stop taking shortcuts because they almost always backfire. Print out the chart and hang it on your wall if you have to.

Next, I suggest you implement a small change across your team, try it for a while, evaluate your results, keep or discard your change, and then choose something else to improve and repeat the cycle.

Not all ideas are equally helpful so I'd start with the suggestions in Software Quality at Top Speed. The three ideas in that blog post are bang on. Start by eliminating shortcuts, replacing error-prone modules, and adopting some kind of code review process. You're welcome to use my code review checklist as a starting point.

Then I'd move on to Rapid Development. This book is all about getting your project under control and delivering working software faster. The 36 classic mistakes to avoid are super important. Go there next, then tackle the topics in the remainder of the book.

The Economics of Software Quality isn't really a how-to book but it will come in handy as a reference. It will help you determine the optimal quality target for your particular project. And it offers you a menu of options to help you choose the right combination of practices to achieve it.

What if nobody's interested in improving quality?

Sadly, this is going to happen to many of you. It's hard to shake the myth that quality is expensive. And it's even harder to convince people to change how they work. Unless you are the top dog on your team you may not have the influence required to spearhead this kind of change.

So you have three options:

  1. Forget about improving quality at the project level and conform to the norms of your team.
  2. Continue advocating for quality improvements and hope that people will eventually agree with you.
  3. Find a new place to work that already cares about quality.

I can't decide for you but let me just say that there are many more developer positions available than qualified developers to fill them. And many employers do care about quality and are actively recruiting people at all levels of experience who can help them improve it.

Wrapping Up

Many of you are just as alarmed and distressed by the quality of the average software project as I am. We know how to do better but we just need to get the ball rolling on more software development teams. And I believe the first step is to show people this chart and the evidence behind it.

I know this problem isn't going to go away overnight. But if you feel the same way I do please do what you can to spread the word, share this post, improve the results in your own projects, and help improve the collective results of our industry.

Agree or disagree? Have a story to share? Let me have it in the comments.

Enjoy this post? Please "like" it below.

Top comments (48)

Sebastian Vargr

Most of the low quality projects I have worked on have had thoroughly demotivated devs.

Usually combined with lackluster management and sometimes a serious case of toxic dev culture.

I’ve become a nudger to try and counter, refactoring things, casually suggesting code improvements, and poking the culture with talks and genuine respect for the people going through change. There is always the risk of pissing someone off if you don’t weigh your words, so respect is key.

It’s hard to be consistent tho, especially if you repeatedly fail to get ideas across to a tough crowd...

Blaine Osepchuk

Thanks for your comments, Sebastian.

Has your "nudger" approach been successful?

I'm also interested in hearing about any defect measuring systems on any of your projects. Did you measure things? Does measuring defects change anything?

Do projects that do measurements have better outcomes in your experience than projects that don't or projects that do a poor job with measurements?

Sebastian Vargr

It works in my immediate surroundings. I managed to get several old, complex systems refactored by doing it, the business value here being increased application performance and feature output.

I remember distinctly one place where we took bugs quite seriously. We had a hard limit of ~15 bugs of varying severities; any more than that and bugs would get higher priority relative to new features.

It was quite effective; after implementing it, we definitely had fewer bugs in production.

I'm not sure if it was the measuring or just the proper focus they received that did the thing, but it definitely did something.

Blaine Osepchuk

That's great. I'm glad it worked for you.

I've tried to be a "nudger" too and it definitely helped, but my projects also suffered from some of the 36 classic mistakes so we never went as far or as fast as I would have liked.

richardhowes

Brilliant article!

We have been building a single application in PHP for 18 years so you can imagine the legacy code and issues we are dealing with. I was looking for insight into more effective development and found your brilliant article. Thank you!

We started long before frameworks or some of the more modern languages and are saddled with some challenges. Seems like just the other day we were the new cutting-edge solution learning from the faults of the old DOS-based solutions we replaced.

We know we have to disrupt ourselves now and I'm expecting to learn a lot more here. I have already found some gems of wisdom on this site. Thanks again!

Blaine Osepchuk

Your story is eerily similar to my own.

My team and I found Modernizing Legacy Applications in PHP by Paul M. Jones to be an invaluable resource.

There's also a section in The Economics of Software Quality that deals with quantifying the cost of technical debt. You can use the model to put a dollar estimate on the cost of not fixing the technical debt in your project and use that to get the resources you need from management to improve your code base. It's the best approach to technical debt I've ever seen and it might be worth the price of the book by itself if you are having trouble getting the staff you need to pay down your debt.

Thanks for reading and good luck with your project.

richardhowes

Thanks Blaine, interesting to find someone with the same history.

We are currently very profitable (in percentage terms, although we are quite small) and have been for about 10 years. Our problem is that we know our development of new features is too slow and we have to solve that. The competition is close on our heels and running faster than we are.

Thanks again for the article and the additional info. Much appreciated.

Blaine Osepchuk

Yes, a new competitor without all our technical debt was a worry that kept me up at night. It never happened but I brought up the possibility frequently to keep us focused on improving.

Rapid Development is definitely the book for you. The classic mistakes kill productivity. Eliminate them if you can so you can move faster.

richardhowes

Will definitely order the book.

Even if we don't get overtaken by technical debt (love that term by the way, yours?) we develop new features at a frustratingly slow pace and low quality.

Thanks again!

Blaine Osepchuk

Just keep at it. It's a marathon, not a sprint.

"Technical debt"? Not my term. It's been around for years. I don't know who coined it.

Klaas Jan Wierenga

The term "Technical Debt" was coined by Ward Cunningham.

Blaine Osepchuk

Thanks for that. He coined it in 1992. Wow.

Dennis Pérez

This is such a great article! Seriously, awesome!! Five stars!!

I have a question: do you think that always using tools (frameworks, for example) that have only been around for a few years contributes to developing low quality software?

A friend of mine, an engineer from another industry (chemical), told me something that I consider really interesting:

"You, the developers, do something weird, from the engineer perspective: always using tools that just came out few years ago. In my area, we dont change anything(machinery, pieces, nothing) for something that haven't been tested for several years. Maybe because error in my area are more expensive than yours, maybe mistakes in software development are cheaper to solve and with minor consequences"

Do you think that industry should wait a little bit more before using all of these new frameworks?

Blaine Osepchuk

The people in our industry are prone to adopting the new, trendy thing for all the wrong reasons. And I'm not talking just about frameworks. It also includes languages, methodologies, libraries, tools, management fads, and more. If it's new and it's got buzz, we're trying it in production systems regardless of what our stakeholders think. Read You Should Build your Next App on a Boring Stack for more details.

If the stakeholders in most projects considering a new technology were consulted in a meaningful way most of them wouldn't want the risk of new and shiny. Most projects need risk reduction and schedule predictability more than they need innovation in their stack.

As a manager, I've shot down many proposals like this. The devs don't like it but I tell them that we are here to make money and meet management objectives and that they should experiment with any technology they want on their personal time.

I'm not saying that we never do anything new but my default answer of "no" to the shiny and new thing keeps us from changing without a good reason.

I think you've brought up another important dimension here. The engineers building a chemical processing plant have an expected lifetime in mind that's likely several decades. They are being paid to make sure the plant is maintainable for its entire anticipated operating life. I've rarely met a software developer who even considers that.

On one project I worked on we had an explicit expected lifetime of 15-20 years. So every time we looked at adding a new library or tool we considered the probability that it would still be around in 20 years and what we would have to do if it wasn't. That consideration radically changed our decision making process--we became much more conservative.

Johannes Millan

I think engineering and software development are very different areas, so it's hard to draw comparisons. Development cycles tend to be longer in engineering (even though there are exceptions to this rule, like rapid prototyping in engineering and maybe SAP for software). I think this also has to do with the scale and complexity of the projects and companies involved. Let's take a new car from a big manufacturer and a new app from a small startup, for example. The bigger the complexity of a project, the bigger the transaction costs of innovation. While every part of the car needs to work reliably in order to avoid huge costs down the line, a small software company often needs to be fast; the costs of an error are smaller, so there is more room and also a bigger need for innovation.

Patrick M Neve

Have been fighting these battles for 35 years... Win some, lose most. While I currently specialize in test automation I've long championed various approaches to build quality into the project. Have seen all of the 36 mistakes and tried jawboning corrective action for each.

Am still hanging in there!

Superb article! Thanks!

Blaine Osepchuk

Thanks, Patrick. Could I ask you to draw on your experience and share one or two tips with us to help devs in similar situations improve quality?

What's worked the best in your experience? Are there certain arguments or changes that have above average success rates in your experience?

Patrick M Neve

Paying attention to user/customer needs and making sure they are understood helps a lot.
Not trying to do too much at one time (what I think agile really tries to address) also makes a difference.

Keeping iterations short and keeping the customers in the loop seem to be critical and cover a significant chunk of the infamous '35'.

And doing some 'design': thinking (and conversing) about what is to be delivered and about ways to do it before jumping into code. Can take ten minutes or a couple days, but is important.

If I keep going I'll rehash all 35. . .

Marko Bjelac

Excellent article!

I come to the same conclusion but from a slightly different angle (even including the developer-in-crappy-company problem): dev.to/mbjelac/stop-the-madness-14f6

An article I reference is Martin Fowler's Is High Quality Software Worth the Cost? which discusses the very same topic as your article.

As a TDD enthusiast, I found particularly interesting your point that testing (after development) is largely ineffective, in contrast to defect-prevention techniques (TDD included), which are not.

Thanks for this great article!

Blaine Osepchuk

I read your links and I largely agree. I've seen lots of articles like Fowler's. They make their case with logic and appeals to common sense, which is okay but I wanted my article to be backed by data. It took a while to find these books and it took even longer to read them but I'm happy with the results.

Depending on exactly what you meant with your TDD comment, I don't think the authors of either book would readily agree. If you have a low quality development process on two teams and one team does TDD and the other team writes their tests after they write their code, you probably won't see much difference in the quality of the final product. It will be low in both cases. After all TDD won't catch requirements errors, which account for about 60% of the delivered defects in most projects.

But TDD could be a beneficial component of an overall quality program that stresses defect prevention and early defect detection and removal. That kind of development strategy will likely produce higher quality software than a team that focuses on a more informal development process and then hopes to find and fix defects to drive quality into their project at the end.

Cheers.

Marko Bjelac

I agree with your view that TDD alone isn't enough. I suppose that by TDD you mean unit (micro) level TDD.

I use the term more broadly to include the BDD process - a form of requirements gathering which ensures (at least) that:

  • requirements are properly analysed and backed by examples
  • every feature has an automated "done" flag which devs can use to know when the feature is implemented
Blaine Osepchuk

I'd assume micro-level TDD as well but the authors didn't specify. BDD existed in 2011 (when the book was published) but just barely and it wasn't very popular yet.

Johannes Millan

Even though I was put off by the clickbaity title, I must admit that this was a very interesting read. Throughout the years I've experienced a lot of the problems mentioned again and again. The bigger the project, the more important it gets to have good conventions that everybody is hopefully on board with, automated tools to assist you along the way, and processes to share knowledge and to allow for reflection and improvement. And while it might work out on a smaller scale, rushing things on big projects is the worst, as it tends to backfire many times over.

Blaine Osepchuk

Yes, Johannes, the authors of The Economics of Software Quality repeatedly make the point that larger projects are disproportionately more difficult to manage successfully than smaller projects.

Why do you think people resisted adopting enough process to ensure success on larger projects you've worked on?

Matt Eland

The chart uses black graphics and a transparent background making it difficult to read on dark themed Dev.

Ashlee (she/her)

Was also about to comment this. Can a white background be added to the image?

Blaine Osepchuk

Ah, I didn't think of that. I'll fix it.

Matt Eland

Dark theme = best theme. Thanks much. I've added it to my list to read later so I'll check back later.

Blaine Osepchuk

Fixed.

Ashlee (she/her)

Thanks!!

Lewis Cowles

One of the biggest problems with this is that it centres around a single keystone in-depth study. It was an interesting choice but I'm not sure I could give it to our product & eng teams to improve their work.

Blaine Osepchuk

I'd agree with you if that were true but it's not.

I chose to focus this post narrowly because it's such a big topic and it seemed like a good idea to focus on the strongest evidence. But there are many, many studies backing up this conclusion from many different angles.

Rapid Development contains extensive footnotes so you can look up all those studies.

But you don't have to take anybody's word for anything. Every team should probably be measuring their ROI (results divided by effort) in whatever way is meaningful to them and doing experiments to improve it given how costly software development is.

I'd be shocked if you found your team is more efficient/economically productive if you focused less on defect prevention, pretest defect removal, and the 36 classic mistakes and more on testing, debugging, and rework. But I suppose it's possible.

Cheers.

Lewis Cowles

I'd be shocked if you found your team is more efficient/economically productive if you focused less on defect prevention, pretest defect removal

Whoa there... I never said that, I said this write-up focused on a single keystone.

I did not state if it was wrong or right, just that the article itself doesn't provide dev teams the tools to reduce.

FWIW it was merely a call-out to think about action items, not to put down your work or the concept of the article.

I think other works such as boringtechnology.club/ offer more clearly actionable alternatives which focus.

Blaine Osepchuk

Apologies, Lewis. I didn't mean to put words in your mouth. It's easy to misinterpret blog comments.

I read your link and I agree with the author's conclusions on the benefits and costs of adding technologies to your stack. But I'm not sure how that relates to the topic of my post. Can you help me understand the connection?

mememe

A long read. I will be going back to this post from time to time. It is sad that:

Forget about improving quality at the project level and conform to the norms of your team

is an option for some of us.

Blaine Osepchuk

I know. It pained me to write that. But it's true.

You wouldn't believe how often I've interviewed people who have told me that their project managers wouldn't let them write tests, do code reviews, or engage in other reasonable QA activities--because they're a "waste of time."

danielrogowski

Linux is a good example that a memory-unsafe language doesn't automatically lead to disaster. I think the issues with Microsoft Windows exist because of corporate policies and many bad practices enforced by management. If you read blogs written by ex-Microsoft devs, you know what I mean.

I saw many open source projects implemented in C or C++ which sport quite a high quality level.

That said, this article is the single most useful one I have ever read on the subject of software development! (I've been in the industry for 20 years now and currently work as an IT trainer.)

Blaine Osepchuk

I didn't say memory-unsafe languages automatically lead to disaster. But all things being equal, the exact same system developed in a memory-safe language will likely have fewer bugs than the same system developed in a memory-unsafe language.

Thanks for reading.