My recent entry on measuring a software engineer’s performance led to an interesting comment: but if you rely only on judgement, isn’t that a recipe for bias?
It’s a really good question. We want to be as objective as possible in our evaluations. We want to make sure that we work against our prejudices, assumptions and biases. But is finding a metric or statistic a safeguard against those pitfalls? I don’t think so.
To use a metric in a decision or evaluation, we still need to apply our own judgement to make sure that the measure is accurate and objective, and does not introduce its own bias. And then we need judgement again to make sure we are computing and applying the metric correctly.
It would be wonderful if we could determine everything by fully objective data-driven measures, but the reality is that even interpreting numbers still requires judgement.
We can see examples of this everywhere. In the social sciences, studies are often re-interpreted from a new perspective, with the same data leading to different conclusions. See, for example, the new analysis of the Stanford Marshmallow Experiment, or Gina Perry’s re-examination of the Milgram Experiment, both of which cast doubt on the original findings. Perhaps the most startling example of the influence of judgement on interpreting numbers is in politics: the same statistic is often used to support opposing stances, with the conclusions highly dependent on the worldview of the person analyzing the data.
I would argue that good judgement is even more important if you’re trying to use a seemingly-objective metric. When you're using numbers, it is easy to lull yourself and your audience into a false sense of security (“look, the data proves this!”) and come to the wrong conclusion.
So yeah, as scary as it might sound for those who insist that they can be perfectly objective and data-driven, there is no substitute for good judgement. If we want to avoid bias, there is no replacement for introspection, asking for and listening to feedback, and keeping an open mind. There is no magic metric that will save us from the hard work.
Top comments
The only way to see the value that individuals bring to a team is to be a part of that team. I think disconnected parties (for example, executives) often try to substitute metrics here, instead of trusting the leaders of that team to report the information. But without context and judgement, the metrics are not meaningful. And in fact, the whole idea of routine performance evaluations is suspect here. Why is performance being evaluated? Do we mistrust our management staff? Was there a problem? Was there a huge success? Or are we doing it because we've always done it, or because business magazines say we should? What's wrong with defaulting to paying people market value for the work they do, then dealing with exceptionally good or bad performances as they occur? You know, treating employees like humans, like I want to be treated. :)
Couldn't agree more!
You got me thinking about this question of "do we even need performance evaluations?", and I ended up writing up a new post about it:
Why Do We Have “Performance Evaluations”, Anyway?
So true, there's a lot of data manipulation going on right now.
I recently read an opinion that one of the issues in the foundations of how we use AI now is the search for the perfect answer, trying to replace human judgement with the machine's. Instead, the scientist proposed, we should embrace fallibility and let the machine, where feasible, output a few possible answers.
The article was this one: We Need an FDA For Algorithms.