DEV Community



anyone figured out a way to measure developer productivity?

I personally like to measure everything. I find that when I measure something it tends to improve.

Programming, however, is not an easily measurable activity.

A salesperson can easily be measured by the amount of merchandise they sell. Similarly, a potter can be measured by the number of pots they produce, and so on.

But in programming, often less is more. Writing a function in 25 lines of code is typically better than a function that does the equivalent in 250 LOC. Reusing existing functionality to fulfill a new feature is far better than writing it from scratch. And so on.

So my question is, did anyone figure out a good way to measure a developer's productivity?

Oldest comments (7)

rhymes

Probably the best way you can measure productivity is the output, not the lines of code (which depend heavily on the language used and the amount of abstraction).

So you can measure output (the presence and quality of functionality), you can measure speed (how much time it took), although speed without quality is probably not that useful :D and you can probably measure the quality of the code itself (if the output is of quality but the code base is really bad, it might take too long to implement the second feature after the first one).

What do you think?

sudiukil
Quentin Sonrel • Edited

The first real question is: what is productivity? How productive you are and how you measure it depends on what your goals are. If I take your potter example: yes, the number of pots can be a way to measure productivity... but what if the goal of the potter is not to make a lot of average pots but a smaller quantity of high-end deluxe pots? In that case, the number of pots becomes quite irrelevant for measuring productivity.

Back to developers and code now. What are the things you can measure about produced code?

  • How quickly it was produced? Yeah sure, but quick code is rarely, if ever, good code.
  • How good it is? That sounds like a better indicator, although... how do you value the quality of code? First of all, while anyone can count pots, only a developer can say whether or not code is good... and guess what, code is not just a product, it's also a logic, and every developer has their own... so, while one developer might find some code "good", another might consider it utter garbage.
  • How well does it match the expected result? It's another thing you can measure, but awful code can match expectations perfectly... until you realize the code is awful and can't be maintained, or reused, etc... which may result in more work... would you consider that productive?

So, just like our potter, you will have to measure productivity depending on what the goals are. If you need something to work asap, you may measure the speed. If you want something that "just works", you may want to measure output (the end result). And if you want "good" code, you may measure its quality, but it will remain subjective.

And depending on what you favor (speed, output or quality), there will be drawbacks and consequences.

So maybe the only way to measure productivity is to take all of these things into account, but then it's back to square one: it's not easy to measure. But hey, coding is a complex thing, so maybe measuring a developer's productivity should be too ;)

dmfay
Dian Fay

Some things are just qualitative. Metrics can tell you how much is being done in quantitative terms but it's never the full picture. Lines of code per hour is worthless; commits per day isn't much better; sprint velocity is heavily dependent on externalities (appropriate ticket scope, consistency, developer availability, difficulty of working with the codebase) so it really only works as a relative measure.

The only way to really assess productivity is to see if you can cash the checks you write: does this thing do what we want it to do, does it do it with reasonable efficiency and few bugs, did we get it done within the time we said we were going to get it done? If so, yay, we're adequately productive.

fnh
Fabian Holzer

Some older colleagues have told me a few stories about what happened when some managers thought it was a good idea to couple performance review goals with quantitative metrics: they straight away created a perverse incentive. If I remember correctly, it was something along the lines of "number of fixed bugs". No, nobody deliberately introduced new defects because of it, but it turned out that fixing typos, doing cosmetic adjustments and the like suddenly got much more attention than was necessary, especially in comparison to issues that would consume the better part of a work week and actually provide much more value.

But morality tales aside: you're not a manager, and you like to measure things and gain some insight into your productivity. So for starters, how about this suggestion: try to measure both the capabilities and the quality of your system at discrete points in time. If between two measurements both got better, or one increased and the other stayed the same: congratulations, you were productive - either by adding features or by maintaining your system's health.

Measuring the capabilities is hard. Maybe you have something like stories with story points assigned to it, or maybe you default to lines of code.

Measuring quality is - in my opinion - more promising and interesting.

Quite a few books have been written on it. One that I found particularly interesting, and think you might find inspiring for your purposes as well, is "Your Code as a Crime Scene" by Adam Tornhill. The author shows a lot of cool techniques for deriving quality metrics (e.g. on coupling, churn, stability) for software systems from their version control history. He has also developed a tool called code-maat that does much of the heavy lifting of gathering the data, which you might find useful independently of the book.
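The churn idea in particular is simple to sketch yourself: count how many commits touched each file. Here's a minimal, hypothetical illustration of that tally in Python - the commit data is made up; in practice you would feed it the file lists that `git log --format= --name-only` prints for a real repository:

```python
from collections import Counter

# Hypothetical commit history: each entry is the list of files
# touched by one commit (what `git log --format= --name-only`
# would emit for a real repository, grouped per commit).
commits = [
    ["core.py", "util.py"],
    ["core.py"],
    ["core.py", "docs/readme.md"],
    ["util.py"],
]

# Churn: number of commits that touched each file. Files that
# change constantly are good candidates for a closer quality look.
churn = Counter(f for files in commits for f in files)

for path, count in churn.most_common():
    print(f"{count:3d}  {path}")
```

Files at the top of this list are where effort (and often risk) concentrates, which is the starting point for the book's hotspot analysis.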

sgarza
Sergio de la Garza

Don't measure lines of code. Period.

The first thing you should self-measure is "bugs per {feature|bug fix}".

Did fixing a bug introduce another bug, or worse, several bugs? The same goes for features.

This metric can't be gathered in real time, because you have to wait for QA or real users to report the bugs. But keep a list of your completed tasks in a spreadsheet, with the task name in one column and two other columns: one for "End-user reported bug count" and another for "QA or self-test reported bugs".

With this you will know the negative impact of your work, and if the numbers are high you should do something about it, like writing specifications first or making sure you're solving the right problem.
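The spreadsheet boils down to one number per task. As a rough sketch (the task names and counts below are invented for illustration), computing the "bugs per task" metric looks like this:

```python
# Hypothetical completed-task log, one row per task:
# (task name, QA/self-test reported bugs, end-user reported bugs)
tasks = [
    ("login feature",  2, 1),
    ("export bug fix", 0, 0),
    ("search feature", 3, 2),
]

qa_bugs = sum(qa for _, qa, _ in tasks)
user_bugs = sum(user for _, _, user in tasks)

# Bugs introduced per completed task: the "negative impact" metric.
bugs_per_task = (qa_bugs + user_bugs) / len(tasks)
print(f"{bugs_per_task:.2f} bugs per task")  # prints "2.67 bugs per task"
```

If the number trends down over time, whatever you changed (specs first, more self-testing) is working.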

wbharding
Bill Harding • Edited

Looks like I'm about 8 months late to this party, but, having recently sunk 50 hours into writing up what I hope will be a canonical guide to this very topic, I figured I'd chime in. It lives here

Since it's a comprehensive guide (i.e., long), here's the tl;dr:

  1. "Developer productivity" can't be measured perfectly, because it includes subjective aspects like e.g., whether the developer is working on the task they were assigned. A true assessment of developer productivity requires a manager's eye.

  2. "Developer output" can be measured, and there are three credible products that have launched in the past few years to assist in this vein. The logistics of how this is possible are more complex than will fit in a tl;dr, but basically if you strip away the 95% of commit activity that doesn't correspond to cognitive load, then you end up with a usable metric.

As you may deduce from the fact that I spent 50 hours writing articles about it, this is a topic I'm deeply passionate about, so I would be happy to chat with folks who have detailed questions about methodology (bill -at-

alexcircei
Alex Circei

We wrote here about how we measure productivity at Waydev.