“Notwithstanding Anything Contained Herein”

A blog about financial agreements and language.

The Real Dunning-Kruger Fools

Dec. 17, 2021, 10:53 a.m.

The Dunning-Kruger effect isn’t real. We should all be more skeptical of social science, especially pop psychology favorites. The real fools of Dunning-Kruger are those who have uncritically trusted social science, or lacked the curiosity to investigate and think critically about the actual research. #academia

This post really should just be the following links, which I will leave to do the heavy lifting here, before I go off on one about Dunning-Kruger, social science, and LinkedIn.

  1. It’s probably a data artifact.

  2. It really looks like a data artifact.

  3. You guys, it’s a data artifact.
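The data-artifact point in those links is easy to see for yourself. Here is a minimal simulation (my own construction, not taken from any of the linked analyses): give each person an actual percentile and a self-estimated percentile that are completely independent random numbers, then bin by actual-score quartile the way the original figures do.

```python
import random

random.seed(0)
n = 10_000

# Purely random data: each person's actual percentile and
# self-estimated percentile are independent -- no psychology at all.
people = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(n)]

# Sort by actual performance and split into quartiles,
# as the paper's famous figures do.
people.sort(key=lambda p: p[0])
quartiles = [people[i * n // 4:(i + 1) * n // 4] for i in range(4)]

for i, q in enumerate(quartiles, start=1):
    actual = sum(p[0] for p in q) / len(q)
    perceived = sum(p[1] for p in q) / len(q)
    print(f"Quartile {i}: actual ~{actual:5.1f}, perceived ~{perceived:5.1f}")
```

The bottom quartile appears to “overestimate” itself by roughly 37 percentile points and the top quartile to “underestimate,” with no psychology in the data at all: if you sort on a noisy variable and compare group means, the perceived scores necessarily regress toward the middle.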

Frustration with the popularity of Dunning-Kruger has been rattling around in my brain for some time, because I see references to it every week on LinkedIn. Particularly guilty are “influencers” with uncountable numbers of connections issuing banalities about competence and leadership. So I’m going to take out my frustration on Dunning-Kruger and its acolytes here.

What’s so surprising about the persistence of Dunning-Kruger is that it has so many of the hallmarks of the great psychology replication crisis’s body count: it was underpowered, it lacked a plausible theoretical underpinning, and it overturned conventional wisdom. I’ll go over these complaints in order.

Dunning-Kruger is underpowered, even if it is better than most psychology papers: some of its experiments involved up to 140 participants. But each participant contributed only one paired observation of interest (actual performance vs. estimated performance), so such an experiment yields, fundamentally, only 140 observations. The central limit theorem is supposed to kick in after about 30 observations, but as a practical matter, the existence of the replication crisis, and the fact that underpowered studies are one of its major drivers, shows that the theoretical safety of the CLT is a different proposition from what is practically required.
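To put a rough number on that (again, a back-of-the-envelope sketch of my own, not an analysis of the paper’s actual data): a 140-person study leaves only about 35 people per quartile, so the per-quartile gap estimates bounce around meaningfully from study to study even under pure noise.

```python
import math
import random

random.seed(1)

def quartile_gap(n):
    """One simulated n-person study with independent random actual and
    perceived percentiles; return the bottom quartile's mean
    perceived-minus-actual gap."""
    people = sorted(
        ((random.uniform(0, 100), random.uniform(0, 100)) for _ in range(n)),
        key=lambda p: p[0],
    )
    bottom = people[: n // 4]  # only ~35 people when n = 140
    return sum(p[1] - p[0] for p in bottom) / len(bottom)

# Repeat a 140-person "study" many times and look at the spread.
gaps = [quartile_gap(140) for _ in range(1000)]
mean = sum(gaps) / len(gaps)
sd = math.sqrt(sum((g - mean) ** 2 for g in gaps) / len(gaps))
print(f"bottom-quartile gap: mean ~{mean:.1f}, run-to-run SD ~{sd:.1f}")
```

The apparent “overestimation” averages around 37 points with a run-to-run spread of several points, all from a sample that contains no effect whatsoever, which is roughly what “underpowered plus artifact-prone” looks like in practice.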

Dunning-Kruger has an implausible mechanism. The authors imagined that people are bad at estimating the complexity of tasks they know nothing about. That explanation is fool’s gold: an intuitive explanation for an unintuitive outcome. But it is extraordinarily flimsy. Nobody would suppose themselves extraordinarily competent at an extraordinarily complex task like cancer research when they have no experience, or suppose themselves extraordinarily incompetent at a simple task like putting an egg in a bucket when they have done it thousands of times. So there are obvious limitations to this proposal before we even get into the details, and since it is posited as a broad fact (even couched as a “cognitive bias”), it cannot be broadly true.

Finally, you make your hay in social science by overturning conventional wisdom. That’s how you get attention, but the problem is that conventional wisdom “is Lindy,” to borrow a Talebian predicate. So the career-minded researcher has a conundrum: you need to overturn conventional wisdom to get attention (and thereby grants), but overturning conventional wisdom means undermining established work and established theory. I’m not going to get into allegations of conscious or malicious p-hacking (I personally never witnessed any during my time, but it certainly happens), but it is very easy to do, consciously or subconsciously.

When I first encountered Dunning-Kruger, my immediate thought was that it looked suspiciously calculated to make a splash, like a manufactured pop song that’s been tailored and focus-tested to be a hit. This isn’t an allegation that it was manufactured, only that it was prima facie obvious to me (as I assumed, wrongly, it would be to everyone else) that the finding struck the perfect chord to find its way into management consulting via lazy pop psychology.

Now, it could have been true (although it isn’t) and still made waves in pop psychology. But these things are always far less nuanced in application than in theory. I have seen banal LinkedIn post after banal LinkedIn post referencing Dunning-Kruger as if it applied to everything from advanced jet propulsion design to finding your car keys. Even the original paper’s conclusions are more measured; the authors write:

We do not mean to imply that people are always unaware of their incompetence. We doubt whether many of our readers would dare take on Michael Jordan in a game of one-on-one, challenge Eric Clapton with a session of dueling guitars, or enter into a friendly wager on the golf course with Tiger Woods. Nor do we mean to imply that the metacognitive failings of the incompetent are the only reason people overestimate their abilities relative to their peers.

But this nuance never makes it through the pop psychology filtering apparatus.

So, in conclusion, the real fools of Dunning-Kruger are those who have uncritically trusted social science, or lacked the curiosity to investigate and think critically about the actual research. I’m not asking everyone to redo the statistical analysis for every paper they read; I’m just asking people to be more skeptical of social science overall. Or maybe I just wrote this thing exclusively to complain about Dunning-Kruger. You be the judge.