
Joost Visser

Originally published at hackernoon.com

Are we getting better at software development?

Our benchmark of development best practices shows piecemeal improvement

Following best practices or typing away blindly?

Deployment automation is quickly gaining ground. Backlog grooming is being perfected further. Code quality control and automated testing are improving, but still not fully adopted by most teams. These are just 3 of the lessons we learned from the yearly re-calibration of our development best practices benchmark.

Read on for some interesting stats and what they tell us about where software development is heading. Bonus at the end: a quick self-assessment to benchmark your own team.

What are development best practices?

For over 15 years, my colleagues and I at the Software Improvement Group (SIG) have been in the business of evaluating the quality of code, architecture, and development practices for our customers.

We started out by simply giving an expert judgement, based on what we observed and what our own software-development gut told us.

As we collected more observations and gained more experience, we decided to whip out our scientific skills (🎓) and build a structured evaluation model. The model consists of a series of checks on which development practices a given team applies, and it sets thresholds for when a team is applying those practices fully, partially, or not at all. We have now used this model for a number of years to provide teams with objective feedback and comparisons against peers.
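As an illustration, threshold-based scoring of this kind could be sketched as follows. This is a minimal sketch in Python; the check names and threshold values are hypothetical and are not the actual SIG model.

```python
# Hypothetical sketch of threshold-based practice scoring; not SIG's actual
# model. Each practice has a set of boolean checks, and thresholds map the
# fraction of passed checks to an adoption level.

PRACTICE_CHECKS = {
    "deployment_automation": [
        "deploys_with_single_command",
        "deployment_steps_documented",
        "rollback_procedure_exists",
    ],
    "code_quality_control": [
        "coding_standard_defined",
        "code_reviews_performed",
        "automated_quality_checks_in_ci",
    ],
}

def adoption_level(passed_checks: set, practice: str) -> str:
    """Map the fraction of passed checks to fully/partially/not applied."""
    checks = PRACTICE_CHECKS[practice]
    score = sum(check in passed_checks for check in checks) / len(checks)
    if score >= 0.8:   # hypothetical threshold for "fully applied"
        return "fully applied"
    if score >= 0.4:   # hypothetical threshold for "partially applied"
        return "partially applied"
    return "not applied"

# A team that deploys with one command and documents its steps, but has no
# rollback procedure, passes 2 of 3 checks and is scored "partially applied".
observed = {"deploys_with_single_command", "deployment_steps_documented"}
print(adoption_level(observed, "deployment_automation"))
```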

Ten best practices for effective software development. We described our structured evaluation model for software development best practices at length in the @OReillyMedia book “Building Software Teams”.

Annual updates of the evaluation model

About once per year, we update our benchmark repository with the observations and evaluations we have collected throughout the past year and use this additional data to adjust the model.

This annual update is also an excellent moment to study the data and look for trends. Are we getting better at software development? We just finished the latest calibration, and here is what we learned about trends in development best practices.

Lesson #1: More teams apply deployment automation

We measure the practice of deployment automation by looking at whether teams have a deployment process in place that is quick, repeatable, and preferably fully automated.

For example, a team that deploys each new release with a single push of a button would receive a perfect score on this practice, codified as fully applied. But a team that needs to go through a limited number of well-documented manual steps would be scored as partially applied.
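To make the “single push of a button” concrete: a fully automated deployment can be as small as one script that chains every release step. The sketch below assumes hypothetical build, test, and deploy commands and is an illustration, not a prescription.

```python
#!/usr/bin/env python3
# Hypothetical single-command deployment driver: the whole release is one
# invocation of this script, which is what "fully applied" looks like here.
# The step commands are assumptions for illustration.
import subprocess
import sys

STEPS = [
    ["./run_tests.sh"],           # assumed test entry point
    ["./build_artifact.sh"],      # assumed build step
    ["./push_to_production.sh"],  # assumed deployment step
]

def main() -> int:
    for step in STEPS:
        print("running:", " ".join(step))
        result = subprocess.run(step)
        if result.returncode != 0:
            print("deployment aborted: step failed", file=sys.stderr)
            return result.returncode
    print("deployment finished")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```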

More teams fully apply deployment automation (43%, was 30%) and fewer teams do not apply deployment automation at all (11%, was 26%).

As the chart shows, more teams are applying deployment automation practices fully (43%, was 30%) and fewer teams do not apply deployment automation at all (11%, was 26%).

This is a significant positive trend that is mirrored by the trends in continuous integration (automatic compilation and testing after each change) and continuous delivery (automatic deployment after each change), as shown in the following charts.

Full or partial adoption of continuous integration (currently 68%) has significantly improved, but still lags behind deployment automation (currently 89%). For continuous delivery, adoption has also improved significantly, but still has a long way to go (currently 29%).

Though the trends for these two practices are equally positive, their adoption still lags behind. And especially for continuous delivery, the great majority of teams (and the organisations of which they are part) still have a long way to go.

Lesson #2: Almost all teams groom their backlogs

Nearly all teams (95%, was 92%) maintain product and sprint backlogs and a significantly larger portion of teams applies this best practice fully (80%, was 71%).

The best practice of backlog grooming already enjoyed high adoption, with 71% of the teams diligently maintaining both Product and Sprint backlogs, and 22% doing so at least partially. As teams perfected their backlog grooming, full adoption increased to 80%. Only a small percentage of teams (5%, down from 8%) does not do any backlog grooming at all.

In fact, most Agile-Scrum best practices that we assess showed improvement, or stable high adoption. With one small exception:

More teams do not stick to the discipline of holding all meetings prescribed by Scrum (15%, up from 11%).

As the chart shows, fewer teams seem to stick to the discipline of holding meetings prescribed by Scrum (planning, daily standup, review, retrospective). This may not be a bad thing per se, as more experienced teams are encouraged to adapt their meeting rhythms to their own needs.

Lesson #3: Code quality control and testing are improving

Fewer teams are failing to enforce consistent code quality standards (20%, down from 25%). Fewer teams fail to run automated tests at each commit (41%, down from 48%).

Quality control is an essential part of professional software development. Nevertheless, the best practices of code quality control and automated testing are still only fully applied by a minority of the teams.

To assess code quality control, we observe whether a team works with clear coding standards, systematically reviews code (against these standards and against principles of good design), and whether automated code quality checks are performed. Full adoption of code quality control has somewhat increased (31%, up from 23%), but still 20% of teams are producing code without adequate quality control in place.
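As one possible illustration of the “automated code quality checks” part, here is a minimal CI quality-gate sketch. It assumes a Python codebase and the flake8 linter; the tool choice and thresholds are assumptions made for illustration, not part of our evaluation model.

```python
# Sketch of a CI quality gate for a Python codebase: run a linter with the
# agreed standard and fail the build on violations. The flake8 flags and
# thresholds below are illustrative assumptions, not a prescription.
import subprocess
import sys

def quality_gate(source_dir: str = "src") -> int:
    # --max-complexity uses flake8's bundled McCabe check; --max-line-length
    # encodes a style rule from the (assumed) team coding standard.
    result = subprocess.run(
        ["flake8", source_dir, "--max-complexity=10", "--max-line-length=100"]
    )
    if result.returncode != 0:
        print("quality gate failed: fix the reported violations", file=sys.stderr)
    return result.returncode

if __name__ == "__main__":
    sys.exit(quality_gate())
```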

To assess testing practices, we observe whether teams have an automated regression test suite that is being executed consistently after each change. Full adoption of this practice is increasing (33%, up from 29%). The percentage of teams that change their code without proper regression testing is declining rapidly but is still a staggering 31% (down from 48%).
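For illustration, running the regression suite after each change can be enforced with something as small as the following script, wired into a git pre-push hook or a CI step. It assumes a Python project with a pytest suite under tests/ and is only a sketch.

```python
#!/usr/bin/env python3
# Minimal sketch: run the regression suite on every change and block the
# push (or merge) when it fails. Assumes a pytest suite under tests/.
import subprocess
import sys

def run_regression_suite() -> int:
    result = subprocess.run(["pytest", "tests", "--quiet"])
    if result.returncode != 0:
        print("regression suite failed: change rejected", file=sys.stderr)
    return result.returncode

if __name__ == "__main__":
    # Installed as .git/hooks/pre-push, or called from a CI pipeline step,
    # this makes "tests run after each change" the enforced default.
    sys.exit(run_regression_suite())
```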

Getting better?

So our question, “Are we getting better at software development?”, can be answered: yes, but at a modest pace.

For some practices, the needle doesn’t seem to have moved too much over the past year (e.g. documenting just enough, managing third-party components, using SMART requirements and goal-oriented metrics). I won’t bore you with the flat-lining charts.

We do see significant progress on a number of practices, especially deployment automation, continuous integration, code quality control, and automated testing. This is incredibly good news!

But we’re not there yet. Personally, I’m a bit shocked that less than 1 in 3 software development teams follow quality and testing best practices, since adopting these best practices can bring immediate benefits with limited effort.

Less than 1 in 3 software development teams follow quality and testing best practices

What can you do?

  • Assess your own team: If you’d like to learn more or would like to have your team assessed by us, drop me a line. To do a quick self-assessment, you can take this survey.
  • Stay up to date on software quality trends: If you are interested in learning more about longer-term trends across more software quality aspects, you are in luck. A 10-year retrospective of software quality by SIG is forthcoming. Follow me here on Medium or on Twitter if you want to be notified when it comes out.

Joost Visser is CTO at the Software Improvement Group (SIG), Professor of Large-scale Software Systems at Radboud University, author of the O’Reilly books Building Maintainable Software and Building Software Teams, and leader of the Better Code Hub team at SIG.

Thanks to Lodewijk Bergmans for crunching and charting data!


Top comments (1)

Blaine Osepchuk

I devoured both your O'Reilly books and the papers you co-authored and mentioned in the appendix of Building Maintainable Software. This is really fantastic work, Joost. My team and I have been struggling with the questions at the heart of your work.

Your suggestions about how to achieve better maintainability make intuitive sense. But I'm a little fuzzy on your evidence.

In Building Maintainable Software Chapter 1, page 8 of the Java version you wrote:

The star ratings serve as a predictor for actual system maintainability. SIG has collected empirical evidence that issue resolution and enhancements are twice as fast in systems with 4 stars than in systems with 2 stars.

What's the empirical evidence? Are you talking about your paper "Faster issue resolution with higher technical quality of software" or something else? Do you have additional evidence you could share? One data point isn't much to go on. I don't want to blindly follow anybody's advice and the paper is--no disrespect--a little shaky because of the quality of the source data.

I think you'd agree that refactoring an existing 2 star 50 KLOC project into a 4 star project would be a significant undertaking. So how do you convince clients that following your recommendations is the right thing to do?

Let's look at the short units of code recommendation. In Code Complete Steve McConnell cites several research studies and comes to the conclusion that:

The evidence in favor of short routines, however, is very thin, and the evidence in favor of longer routines is compelling.

How do you know I'm not better off limiting my units of code to 100, 150, or 200 lines like he suggests? Why is 15 the right limit?

I could ask the same things for the other measures. Why not have a McCabe complexity cutoff at 6 or 7 or 10?

I guess my questions boil down to: how do you know your suggestions are anywhere near the economic optimum for a non-trivial software project? No doubt, following all your suggestions will improve the maintainability of my project but getting there will be expensive. I'm not saying you're wrong, I just don't know how you know your suggestions are economically sound from reading your books and papers. What have I missed?

Cheers.