stereobooster

The problem with thought leaders

In recent years (and due to the toxicity of Twitter) the term "thought leader" has become a negative thing to me.

Non-scientific matters

Some non-scientific subjects are presented as if they are proven to work: for example, clean code, TDD, agile, type systems. Is there empirical evidence that those things work? I haven't seen any that proves a definite positive effect without caveats.

I'm not saying that those things are bad; I'm just saying that we don't know for sure whether they are good or how good they are.

Out of all the "best" practices out there, we have some kind of evidence of a positive impact for:

  • code reviews
  • good sleep

¯\_(ツ)_/¯

Please don't trust me - read the papers yourself.

Biased

Another problem is when a technology is presented as if it were a silver bullet.

If a person doesn't present the downsides of a solution, it means:

  • either they don't have enough experience and haven't seen the edge cases yet
  • or they know the downsides but present you with distorted information, because they are trying to "sell" it to you

For example, I like that Dan Abramov, co-creator of Redux, himself wrote an article called "You Might Not Need Redux".

Solution?

Is there a solution to this problem?

  • Maybe a CoC will help? I guess not.
  • Maybe we need a "Hippocratic Oath" for scientists? Something like "I will act in the interest of science...", and every thought leader would have a badge showing whether or not they have taken the oath.
  • Maybe we need to improve education, so people would be able to "call bull" themselves?

PS

I hope my articles don't have this overconfident tone. What are examples of good thought leaders? Who do you follow and feel like you could learn from?

Photo by Juan Rumimpunu on Unsplash

Top comments (5)

Kasey Speakman

I see this soooooo much. I think it is part of human nature to play with something and then shout out how great it was to play with. Even assuming no bias (e.g. financial), there are two problems at work here. One, the writer with surface-level experience has probably stayed on the happy path. They haven't gotten familiar with the edge cases... and in fact, may be blinded from seeing them in the "honeymoon" phase. Two, readers will often naively assume otherwise. I know I have.

My take: articles that do not substantially discuss trade-offs are more for first impressions or entertainment. They don't contribute to my decision-making process. They can't. In order to properly decide, I need trade-offs to weigh. And life has taught me that everything has them.

Eric Ahnell

People get so inclined to toot their own horn that they completely forget that not everyone else thinks like they do. While trying to lead by example is great, if nobody follows you, are you really leading?

Phil Ashby

I'm usually pretty cynical about the new Kool-Aid that shows up :) Lately I've been most impressed with those using empirical evidence collected from a decent number of sources, in particular the State of DevOps Report series from Nicole Forsgren et al, leading to the Accelerate book.

There are good counter-arguments, believable statistical analysis, and clear advice: measure first and cut second.

At work we've been through some new toys (the Kubernetes ecosystem, for one) that didn't deliver on their early promise. What matters is that we /knew/ they weren't doing what we expected and were able to /change tack/ and find a way that worked for us. NB: not throwing shade on K8S, just that it didn't work for our scenario, and the thought leaders (Kelsey Hightower for K8S) were pretty open about what can and does go wrong, which is reflected in aggregator reports like the Thoughtworks TechRadar.

My BS detector comes on when such people are unable to answer reasonable questions at conferences on the downsides of their snake oil :)

Thorsten Hirsch

Martin Fowler writes very balanced articles when analysing new technologies and architecture trends. He's a thought leader in the area of application integration.

Scott Simontis

I totally get it. I was considering a startup, and while doing market research, I actually started laughing out loud at how unscientific their "evidence" was. The unbiased paper was full of things like "this unit was offline 100% of the time and never produced data, so we removed it from the calculations as an outlier." Seriously? That sounds like something that needs deep investigation, considering that the product cost over $100,000 to install and apparently cannot even turn on.

They had one methodology that was only in use at a single location but covered 110 blocks of the city, while most of the other methods covered 3-6 blocks. They claimed they couldn't draw any conclusions about the 110-block system because it had not been deployed widely enough. I failed probstat twice, but aren't 110 data points statistically more relevant than conclusions drawn by comparing a few instances of 3-6 data points?

At the end of the article they revealed there was no raw data analyzed. It was all a phone interview with customers of the system where, pardon my French, the customers were basically pulling numbers out of their ass on how much the technology had improved their operations. I was feeling discouraged and about to give up until I read that study. When I saw the lack of integrity and scientific rigor, I knew I might just have a chance.