DEV Community


The problem with thought leaders

stereobooster on August 26, 2019

In recent years (and due to the toxicity of Twitter) the term "thought leader" has become a negative thing to me. Non-scientific matter So...
 
Kasey Speakman • Edited

I see this soooooo much. I think it is part of human nature to play with something and then shout out how great it was to play with. Even assuming no bias (e.g. financial), there are two problems at work here. One, the writer with surface-level experience has probably stayed on the happy path. They haven't gotten familiar with the edge cases... and in fact, may be blinded from seeing them in the "honeymoon" phase. Two, readers will often naively assume otherwise. I know I have.

My take: articles that do not substantially discuss trade-offs are more for first impressions or entertainment. They don't contribute to my decision-making process. They can't. In order to properly decide, I need trade-offs to weigh. And life has taught me that everything has them.

 
Eric Ahnell

People get so inclined to toot their own horn that they completely forget that not everyone else thinks like they do. While trying to lead by example is great, if nobody follows you, are you really leading?

 
Phil Ashby

I'm usually pretty cynical about the new KoolAid that shows up :) Lately I've been most impressed with those using empirical evidence collected from a decent number of sources, in particular the State of DevOps Report series from Nicole Forsgren et al, leading to the Accelerate book.

There are good counter-arguments, believable statistical analysis, and clear advice: measure first and cut second.

At work we've been through some new toys (the Kubernetes ecosystem for one) that didn't deliver on their early promise. What matters is that we /knew/ they weren't doing what we expected, and were able to /change tack/ and find a way that worked for us. NB: not throwing shade on K8s, just that it didn't work for our scenario, and the thought leaders (Kelsey Hightower for K8s) were pretty open about what can and does go wrong, which is reflected in aggregator reports like the Thoughtworks Tech Radar.

My BS detector comes on when such people are unable to answer reasonable questions at conferences on the downsides of their snake oil :)

 
Thorsten Hirsch

Martin Fowler writes very balanced articles when analysing new technologies and architecture trends. He's a thought leader in the area of application integration.

 
Scott Simontis

I totally get it. I was considering a startup and, while doing market research, I actually started laughing out loud at how unscientific their "evidence" was. The supposedly unbiased paper was full of things like "this unit was offline 100% of the time and never produced data, so we removed it from calculations as an outlier." Seriously? That sounds like something that needs some deep investigation, considering that product cost over $100,000 to install and apparently cannot even turn on.

They had one methodology that was in use at only a single location, but it covered 110 blocks of the city, while most of the other methods covered 3-6 blocks. They claimed they couldn't draw any conclusions about the 110-block system because it had not been deployed widely enough. I failed probstat twice, but aren't 110 data points statistically more meaningful than conclusions drawn by comparing a few instances of 3-6 data points?

At the end of the article they revealed that no raw data had been analyzed at all. It was all phone interviews with customers of the system who, pardon my French, were basically pulling numbers out of their ass about how much the technology had improved their operations. I was feeling discouraged and about to give up until I read that study. When I saw the lack of integrity and scientific rigor, I knew I might just have a chance.