Book review: “Distrust”

For the Washington Post, I reviewed Gary Smith’s take on disinformation and the replication crisis in scientific research. Here’s what I wrote:

People are often tempted to trust statistics and algorithms as neutral arbiters. But algorithms are incapable of independently understanding the worth of what they’re generating. They’re also very good at producing the appearance of meaning, which makes it that much easier to trawl through data sets in search of the conclusions you want to see in them. When a scientist uses an algorithm to look for a statistically significant relationship in a huge trove of data, they are going to find something. It might well be random nonsense. As Smith notes, “The explosion in the number of things that are measured and recorded has magnified beyond belief the number of bogus statistical relationships waiting to deceive us.”
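
To make that point concrete, here's a minimal simulation sketch (my own illustration, not something from the book; the sample size, the number of variables, and the p < 0.05 cutoff are arbitrary assumptions): if you test enough pure-noise variables against a pure-noise outcome, a handful will look "statistically significant" by chance alone.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_samples = 100      # observations per variable
n_variables = 1000   # candidate predictors, all pure noise

# An outcome that is pure noise, unrelated to anything
outcome = rng.normal(size=n_samples)

# Dredge the data: test every noise variable against the outcome
false_positives = 0
for _ in range(n_variables):
    predictor = rng.normal(size=n_samples)
    r, p_value = stats.pearsonr(predictor, outcome)
    if p_value < 0.05:
        false_positives += 1

print(f"'Significant' correlations found: {false_positives} of {n_variables}")
# Roughly 5% of the tests (about 50 here) clear p < 0.05 purely by chance
```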

And yet statistical significance can be enough to drive publication in a reputable journal. If the correlation is catchy enough, it will get plenty of media coverage. The initial conclusions of the French hydroxychloroquine study, for instance, didn't hold up as more researchers investigated the drug's effect on covid-19, but the afterlife of all that initial attention was long and consequential: anti-science grifters and conspiracy theorists continued to push the drug as a secret cure that scientists were covering up.

Read more at the Washington Post
