Want to write defect-free software? Learn the Personal Software Process

Blaine Osepchuk on September 03, 2018

I'm on a journey to become a better software developer by reducing the number of defects in my code. The Personal Software Process (PSP) is one of ...

Ben Halpern

Wow! I don't think I'm going to read the book but I'm incredibly thankful I read this post.

Really great practical summary and review.

Blaine Osepchuk

You're welcome, Ben.

Does any of that stuff ring true for you?

Do you have any idea what percentage of your time you spend on rework caused by defects or missing requirements?

Ben Halpern

My team and I definitely spend a good deal of time on rework, but we also don't want to be bogged down by perfection.

So I don't think I have a good answer for you, but the wrong types of defects definitely keep lingering on and on and on.

Going open source was an effort to mitigate many of the worst-case scenarios this method is trying to avoid. All our crappiest, moldiest code gets some fresh air to wipe out defects.

Blaine Osepchuk

"My team and I definitely spend a good deal of time on rework, but we also don't want to be bogged down by perfection."

I'm genuinely interested in this idea of "not wanting to get bogged down by perfection" (I'm not trying to pick a fight).

I hear that a lot in our field, but it seems to go against everything Steve McConnell talks about in Quality at Top Speed. And he's drawing his conclusions from some pretty good research. Humphrey makes an even stronger case with the research he cites in the quality chapter of the PSP book.

At my work, we've accepted that higher quality goes hand-in-hand with shorter schedules, but, just like you, we don't have good data to back that up.

So it feels like we're going slow, but once code gets into production we rarely need to change it. And you guys feel like you're going fast, but you "spend a good deal of time on rework." And neither of our teams knows where its optimum is.

Do you have reason to believe that McConnell and Humphrey misinterpreted the research or that the research is wrong? Or do you accept the research but believe that getting functionality in front of users as soon as possible is more important for your team and you're willing to be less efficient to do that (sort of an MVP approach)?

Ben Halpern

I feel like I'm thinking more in terms of mitigating perfectionism when it is not needed. Might be more of a reflection of my team, but everyone needs more encouragement to ship than fear that they will be sloppy.

We still have code review and have good standards, but I've found I need to encourage shipping above some other qualities. That being said, I've talked to folks at conferences who seem to work in teams that have such a disregard for quality of work and quality of software that I don't think we're anywhere close to that end of the spectrum.

Ben Halpern

And yeah, getting things in front of users is often a necessary element. We have been burned before by trying to get usability just right before getting things out there. You really learn so much more once it's out there.

What you can't do is move away to a new project after you've shipped the MVP. Shipping the thing that is supposed to be an MVP and then moving on to other things is a big mistake. It means you end up shipping a bunch of half-done features and everything is low quality. MVP is step one, not ever the final step.

We have several of these perma-MVPs live on dev.to at the moment that we need to find the time to improve. Just today I shipped a fix to a bug which has been nagging me since about day one on the project. It was just unimportant enough to completely ignore for a long time even though it forced workarounds which were more complicated than the fix itself ended up being.

Blaine Osepchuk

Thanks, Ben. I really like your answers.

I see now that you meant actual perfectionism and not "perfectionism" (which is sometimes code for not wanting people to do anything beyond getting a clean compile).

It just goes to show that teams and projects are different. I'm always trying to raise the bar, keep the code clean, write more tests, refactor, etc.

I've been burned too many times by releasing MVP-quality code and then something comes up and I'm unable to go back. So I've had to change my strategy. Instead of building the whole feature as an MVP, we build the essential part of the feature as production-quality code but don't build the back-end GUI or reports at all. If the feature fails we rip out the code and if it succeeds we build the back-end.

The business people never want us to go back and harden an MVP but they always think reports are a high priority.

edA‑qa mort‑ora‑y

I cast doubt on anything that uses the vague term "defect" to measure progress. It's one of those metrics that is easy to manipulate once you start tracking it. Add in the nonsensical KLOC and you have numbers that can be made to dance to whatever tune you want.

Additionally, the number of defects one finds is strongly related to the quality of testers one has. I don't care how awesome a programmer you are: give it to the right testers and the issue reports will flow in.

Phrases like "PSP is too hard to follow for 99.9% of software developers" are condescending. If 99.9% of programmers can't follow a process, then there's something seriously wrong with the process and there's no point looking into it at all.

Blaine Osepchuk

I have concerns about tracking "defects" and "LOC" too. I expressed some in my post and I held others back because the post was getting too long.

Humphrey was concerned about them as well. He knew these metrics are very open to manipulation and cautioned against using them in any way for rewards, punishments, promotion, or to compare programmers. They are solely intended for the individual programmer to track his or her progress, which should significantly reduce the incentive to manipulate them.

He also talks about choosing alternative measures if LOC doesn't suit your situation. But for most projects, LOC is probably the best measure, even if it's a crappy one.

The number of defects you find is strongly related to both the number of defects in the product and the quality and effort of your testers. That's true. Your PSP stats are only applicable to a single language on a single project. So for most projects over a reasonable amount of time, the tester efficacy should be relatively constant.

I really have trouble with these proxy measures and you've pointed out a bunch of potential problems. So the OCD side of me says we shouldn't use them if they are flawed.

On the other hand, I have very little insight into where I make my errors, what kinds of errors I make over and over again, how productive I am in any given week, my ratio of writing code to writing tests, and how that affects the defect rates I discover in my code, and so on.

So where does that leave us? I don't think the measures need to be perfect to help us find ways to improve, which is the whole point of PSP.
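
To make that concrete, here's a minimal sketch of the kind of personal defect log Humphrey has in mind. The fields and categories here are my own simplification for illustration, not the official PSP forms:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Defect:
    """One entry in a personal defect log (simplified, not the official PSP form)."""
    defect_type: str      # e.g. "logic", "interface", "syntax"
    phase_injected: str   # e.g. "design", "code"
    phase_removed: str    # e.g. "code review", "unit test"
    fix_minutes: int

def summarize(defects, new_and_changed_loc):
    """Roll the log up into the kind of numbers PSP asks you to review."""
    per_kloc = len(defects) / (new_and_changed_loc / 1000)
    by_type = Counter(d.defect_type for d in defects)
    escaped_review = sum(
        1 for d in defects if d.phase_removed not in ("design review", "code review")
    )
    return {
        "defects/KLOC": round(per_kloc, 1),
        "most common types": by_type.most_common(3),
        "found after review": escaped_review,
        "total fix minutes": sum(d.fix_minutes for d in defects),
    }

log = [
    Defect("logic", "design", "unit test", 45),
    Defect("interface", "code", "code review", 10),
    Defect("syntax", "code", "compile", 2),
]
print(summarize(log, new_and_changed_loc=250))
```

Even a log this crude starts to answer the questions above: which defect types recur, which phases they escape from, and how much time they cost to fix.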

"PSP is too hard to follow for 99.9% of software developers."

I stand by that statement. If I just give the book to 1,000 developers and say "do this from now on", maybe a couple will be doing it a year from now. The barriers to an individual adopting PSP are formidable. But everything lives in a context. PSP is meant to be practiced in an organization that's doing TSP (the Team Software Process). That means programmers practicing PSP get a coach, training, support, the time to actually do it, incentives aligned with producing quality software, and so on. In that context, most of them will probably be doing PSP at the end of the year.

Nested Software

I have to say that this type of very complicated process strikes me as inherently problematic. In my experience, reducing defects is actually a fairly simple process consisting of two main steps:

  1. Formal Analysis: If an algorithm is mathematical in nature, then it is amenable to a formal mathematical analysis. Cryptography is a nice clear case in point. If I were to come up with a new algorithm for encryption, I'd write a paper describing the algorithm and I'd provide proofs of its characteristics. Then I'd publish the paper and offer it up for peer review and public scrutiny. Once the paper passed that step, then presumably the algorithm would be eligible to be implemented in actual code. Of course someone clever might find a fundamental loophole in the algorithm (as opposed to any specific code) later on, but that's true of everything. For example, crypto algorithms that rely on factoring large numbers into primes are based on the assumption that factoring is computationally hard, which in turn requires that P != NP; most experts believe that to be true, but no one has been able to prove it one way or the other. If someone shows otherwise, such algorithms will become instantly obsolete. One can think of other situations where this applies. For example, let's say we have a system where multiple threads communicate in real time. Formally determining that the system will perform as expected given various starting assumptions is probably a good idea. I don't think one wants to hack something like that together piecemeal. ;)

  2. Testing: If an algorithm or design is based on somewhat arbitrary business rules, then I would say up-front analysis is probably not very effective. In this kind of case, I think its effectiveness tends to drop off exponentially with time. If you have spent several days as a team trying to think of issues with a particular design, spending an additional month is unlikely to be the most efficient use of time. Instead, in such cases I recommend very thorough testing. We can test individual modules in isolation; we can develop end-to-end tests; we can do property-based testing; we can run performance tests; and so on... I find that thorough and careful testing will uncover problems much more quickly than just staring at a document and trying to think really hard about what may be wrong with it. So for business logic, I recommend trying to get something up and running as quickly as possible and then using all the applicable testing techniques to find issues with it. Then feed any problems that are found back into the design process. This may mean doing some significant refactoring of the design to fix an issue, or it may simply mean adding edge cases ad-hoc to the existing logic. The team has to decide what is appropriate. Even if this takes time, it's still more efficient than spending that time on analyzing a design that exists on paper.

I really don't believe anything more can be done to reduce defects in a general sense. Sometimes one can simplify things. For example, one can use a domain-specific language that isn't Turing-complete to make some potential problems go away. Another option is to use an off-the-shelf framework and plug one’s application-specific logic into it.

However, if one is doing normal programming in a general-purpose language, I believe one must simply rely on the above two methods as appropriate (and hire the best people one can find, needless to say).

Blaine Osepchuk

Formal methods, yes. I 100% agree. If you can use formal methods to prove the properties of some algorithm, go for it.

However, testing is a super-expensive way to find defects. That's sort of the whole point of PSP: if you can reduce the number of defects you inject, and find and remove the defects you do inject as soon as possible, you can drastically reduce the effort it takes to deliver high-quality software. Finding a defect through testing, debugging it, fixing it, and then retesting is expensive.

The actual techniques Humphrey recommends aren't radical: collect requirements, review requirements, make a design, review your design, implement in code, personally review your code, and peer review your code (or do an inspection). If you follow all those steps, you should have a lot fewer defects that need to be found and fixed during testing.

The part that's hard for people to swallow is all the tracking and analysis Humphrey wants you to do to prove that you're getting better over time. But you don't have to do that. You could just take it on faith that code reviews are a good idea, do them, and hope for the best.

We do that with all kinds of stuff in programming. For example, have you proven that pair programming is effective in your organization? If you have, you would be one of the few people who has gone to the trouble. The research is equivocal.

Let me quote straight from Humphrey (page 142 of the PSP book):

Data for several thousand programs show that even experienced developers inject 100 or more defects per KLOC. Typically, about half of these defects are found during compiling and the rest must be found in testing. Based on these data, a 50,000-LOC product would start with about 5,000 defects, and it would enter testing with about 50 or more defects per KLOC, or about 2,500 defects. Again, about half of these defects would be found in unit testing at a rate of about two or three defects per hour, and the rest must be found in system-level testing at a cost of 10 to 20 or more hours each. Total testing times would then be about 13,000 to 25,000 hours. With roughly 2,000 working hours in a year, this would require 12 to 18 or more months of testing by a team of five developers.

With the PSP, you will use design reviews and code reviews to find defects before testing. Then, on a TSP team, you will also use inspections. When they do proper reviews and inspections, TSP teams typically find close to 99% of the defects before even starting system-level testing. Test times are then cut by ten times or more. Instead of spending months in testing, TSP teams need only a few weeks (Davis and Mullaney 2003).
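
To make the arithmetic in that quote concrete, here's a quick back-of-the-envelope version of Humphrey's example, using only the figures he gives (the code itself is just my illustration):

```python
# Back-of-the-envelope version of Humphrey's example above.
loc = 50_000
injected = (loc // 1000) * 100             # ~100 defects per KLOC -> 5,000 defects
entering_test = injected // 2              # half are found while compiling -> 2,500

found_in_unit_test = entering_test // 2    # ~1,250 found in unit testing
left_for_system_test = entering_test - found_in_unit_test

unit_hours = (found_in_unit_test / 3, found_in_unit_test / 2)           # 2-3 defects/hour
system_hours = (left_for_system_test * 10, left_for_system_test * 20)  # 10-20 hours each

total_hours = (unit_hours[0] + system_hours[0], unit_hours[1] + system_hours[1])
print(total_hours)  # ~(12,917, 25,625) hours, i.e. "about 13,000 to 25,000"

# At 2,000 working hours per year for a team of five developers:
print(total_hours[0] / (5 * 2000), total_hours[1] / (5 * 2000))  # ~1.3 to ~2.6 years
```

Even at the optimistic end, that's over a year of nothing but defect removal for a five-person team.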

Testing just isn't a very efficient way to ensure the quality of your software. There's lots of research to back that up. Unless your software is safety-critical, it is very unlikely that you have anything near 100% statement coverage, never mind the modified condition/decision coverage (MC/DC) you'd need to be able to make any claims about quality.
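
To illustrate the difference with a toy example of my own (not from Humphrey or the research): a test suite can execute every statement and still say very little about a compound condition.

```python
# Toy example: 100% statement coverage is not the same as MC/DC.
def should_alert(pressure_high: bool, override: bool) -> bool:
    if pressure_high and not override:
        return True
    return False

# These two tests execute every statement (100% statement coverage)...
assert should_alert(True, False) is True
assert should_alert(False, False) is False

# ...but they never demonstrate that `override` independently affects the outcome.
# MC/DC would additionally require a case like this:
assert should_alert(True, True) is False
```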

So, I'm not sure where you're coming from on that, Nested Software. Or have I misinterpreted your remarks (as happens from time to time)?

Nested Software

@bosepchuk, the part I disagree on, at least if there was no misunderstanding, is that basically every line of code written by everyone on the team should be subjected to analysis and code review. That's the impression I got about PSP, but maybe I got the wrong idea. I think that would be significantly less efficient than a more conventional approach combined with testing (unit testing and other methods), as well as adding some amount of analysis and code review where it matters most, and possibly adding things like pair programming into the mix as well.

As far as research goes, while I totally support research into software process, I think we must be skeptical of its actual applicability, given the current state of the art. To me, the fundamental problem is that if you do research in a very "clean" academic setting, there is a natural objection of "that's nice, but does it apply to the real world?" Then if you try to do research with real projects and organizations, there's the equally natural objection of "there are thousands of uncontrolled parameters here that could be polluting the results!" At the end of the day, I wouldn't be inclined to deeply trust any research into software process - even if I 100% agreed with its conclusions based on my own experience and intuition.

That said, I think there are probably many things we can agree on, even though our viewpoints may be somewhat divergent here:

  • When we write code as developers, we should be thinking hard about ways our code could go wrong, and we should try to generate automated tests for these possibilities as much as possible/reasonable. The idea here of course is to make sure the code works as expected today, but will also continue to work in regression.

  • As we write code, we should be thinking about the immediate functionality we're developing, but we should also consider the bigger picture and make sure the things we're doing fit into the broader context of the application logic being developed.

  • Practices like code and design review, as well as "real time" versions thereof like pair programming, can help to improve the quality of code, clean up weaknesses or bad habits, and foster communication among team members.

One thing I wonder about is whether the PSP really takes into account the expectations that people have of software today. More than ever software is a living, breathing thing. Many applications are online, and users want both bug fixes and features to be added more or less continuously. It's a far cry from the world of 1980s programming that PSP seems to have come out of in that sense...

Blaine Osepchuk

Good points, Nested Software.

I think I need to clarify one point. Humphrey is actually encouraging us to follow a process, track its effectiveness, and do experiments to improve it over time. He makes it clear that the process he describes in the book is what works for him and is just a convenient starting place to have a discussion. But it's up to each programmer to find and do what works for them.

If you wanted to follow Humphrey's process, then he would agree with your idea that "every line of code written by everyone on the team should be subjected to analysis and code review."

We do that on my team and we find it catches a crazy number of problems before we put code in production. We also work in a legacy environment where much of the existing code cannot be reasonably covered with automated tests. So your mileage may vary.

Yes, research is a huge problem in our field for all the reasons you mentioned and more. Humphrey would encourage you to run your own experiments and figure out what works for you.

I absolutely agree with your three points.

I don't see any reason PSP would have trouble coping with the expectations of today's software. Humphrey was very much in favor of small changes.

Just out of curiosity, what kind of programming do you do?

Cheers.

Nested Software

I find that testing gets more expensive the further away it is from the developer. If a separate QA group finds issues, that's going to take more time and money to solve. If an end-user finds issues, that's going to be worse again. However, I believe strongly in developers taking ownership of thoroughly testing their work before it ever goes on to these other stages. That includes automated tests of whatever variety are relevant, as well as just manually working with the software. As developers we should be going through workflows manually on our own and trying to break our own software. That's actually a very fast way to find many issues that are likely to occur (when I find a problem like that, I make every effort to automate a test for it first so that it stays fixed in regression).
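
For instance, a regression test for that kind of manually-discovered problem might look something like this (the function and the bug are hypothetical, just to show the shape of it):

```python
# Hypothetical example: a crash found by manually poking at the app,
# pinned down with a test alongside the fix so it stays fixed in regression.
def parse_quantity(text: str) -> int:
    """Parse a quantity field; blank input used to raise ValueError."""
    text = text.strip()
    return int(text) if text else 0

def test_blank_quantity_defaults_to_zero():
    assert parse_quantity("   ") == 0   # the case that used to crash

def test_normal_quantity_still_parses():
    assert parse_quantity(" 7 ") == 7
```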

I personally find that monkeying around with an application like this is very efficient because it decouples one from thinking in terms of the code, and just focuses attention on the actual behaviour of the software. So I think this kind of developer testing is an efficient way to improve quality. It doesn't mean a QA team isn't needed, but hopefully it means the bugs they find will really be of a different order - things that are hard to reproduce; complex interactions or integrations of the technologies involved; or running the software in a bunch of different environments. In my experience, it's really not hard to miss problems when just looking at code, or looking at a design on paper.

I've mainly worked in enterprise software as well as full stack web development, along with some dabbling in embedded systems.

Blaine Osepchuk

I 100% agree with your comments.

Lots of people have trouble "switching hats" from developer to tester to find problems with their own code. Going full black box is quite effective in that regard.

kurisutofu

I will try to read the book but, as pointed out in the article, I don't think it's easy to implement.

I'm developing tools on my own and while it is tempting to implement PSP, the time required for analysis will be hard to find.

I hope this picks up steam and tools get built to do the analysis automatically (a Node.js framework, etc.?).

Thanks for taking the time to write the article!

Blaine Osepchuk

You're welcome.

I'm curious. What kind of tools are you building? What do they do?

kurisutofu

Hi,

Nothing too exciting, basically CMS-type tools for the office, hopefully in Node.js :)

robencom

I am happy to stumble over this post JUST a day after I started thinking of ways to improve my whole "Programming Process".

My whole idea was revolving around the ability to dissect an application (that you need to write) into smaller parts (components) and give each part your 100%. Then at the end, you can glue all the parts together.

For each component, you need to run experiments on what you need to do/know to make it bullet-proof! What I mean is, you need to make a component as good as it can get, by using the BEST practices. Each component should be based on a whole lot of science that you need to acquire.

For example, you can create a database in a very simple way, with no or few constraints. You can also build the same database with a lot of constraints, assuring integrity of data.

This is a tough one. We all need to work on it by ourselves, because, as you said, no one can afford to have a coach. But the result of doing this would be awesome!

Blaine Osepchuk

You're welcome, robencom.

It sounds a little like you're talking about the concept of 'bounded contexts' in domain-driven design (DDD). Have you looked into that at all?

robencom

I will now. Thanks!

Jason Steinhauser

I actually had a grad-level class that used PSP. It was quite tedious, but the big takeaway for me was taking the time to do up-front design, and to refer to that design throughout. Like you, I can't really recommend full use of PSP, but it helped ingrain design discipline into every project I've worked on since then.

Erik Pischel

I read about PSP a couple of years ago. I would like to have an updated version for agile processes.

The idea of "building quality in" is nothing new (not even in 2005); for instance, it is part of lean and eXtreme Programming. Pair programming, TDD, and code reviews are examples of such practices.

About half of the time, I measure how long I need for a task (including interruptions) so I can compare it with my estimate. I then log this in an xls sheet. Now, for a new task that I estimate will take X hours, I can figure out that it might really take Y hours (or better: with a confidence of, let's say, 75%, it will take Y hours or less).
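
Roughly, the calculation looks like this (a sketch with made-up numbers, not my actual spreadsheet):

```python
import math

# Made-up history: (estimated hours, actual hours including interruptions) per task.
history = [(2, 3), (4, 4.5), (1, 2.5), (8, 9), (3, 6), (5, 5.5)]

ratios = sorted(actual / estimate for estimate, actual in history)

def percentile(sorted_values, p):
    """Nearest-rank percentile; good enough for a handful of data points."""
    k = max(1, math.ceil(p / 100 * len(sorted_values)))
    return sorted_values[k - 1]

factor = percentile(ratios, 75)   # in ~75% of past tasks, actual <= estimate * factor
estimate = 6                      # hours, for the new task
print(f"With ~75% confidence: {estimate * factor:.1f} hours or less")
```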

Blaine Osepchuk

You are correct, 'building quality in' is not a new idea.

I referenced Humphrey's most recent book on PSP in this post but his first book on PSP is titled "A Discipline for Software Engineering" and it came out in 1995. And as you mentioned, quality thinking goes back a long way. I even studied Deming's work in university.

Have you learned anything interesting from your time tracking?

Eljay-Adobe

Serendipity. My coworker and I were discussing how stressful it was to be a software developer.

I had pointed out that being a doctor, or police officer, or politician, or pretty much any other profession was far more stressful than being a software developer. (In general, I believe stress reports by profession substantiate that software development is among the lowest-stress professions. Ripley's Believe It or Not.)

The thing that we had noted was that software developers take on stress. We are invested. That's not a bad thing... unless it becomes harmful.

So please be aware of when our own predilections become harmful. Ease up on the throttle.

PSP, in moderation.

Blaine Osepchuk

I really don't know how stressful our profession is compared to others. The CDC puts computer programmers at #8 for suicides. So we are between police officers (#6) and doctors (#12).

Source: ctvnews.ca/health/suicide-rates-hi...

The statistics aren't broken out by gender, so that could be skewing things, because we know men kill themselves three times more often than women and our profession is overwhelmingly skewed towards males.

My guess is that lots of the stress in our profession is caused by unrealistic expectations (both internal and external).

So, yes, I agree, don't stress yourself out over PSP or anything else if you can help it.

tux0r

"Want to write defect-free software?"

You never will. There is no defect-free software. Ever.

Blaine Osepchuk

Maybe, but there are teams producing software with 100x fewer defects than the average. There's plenty of room for improvement.