On my office bulletin board, I keep a yellowing clipping of my favorite newspaper correction of all time. It reads:

The cover article on Aug. 30, about a trip around Lake Michigan, misstated the time period during which the lake was formed and the type of activity that caused it. And a correction in this space on Sept. 6 gave another incorrect figure for the time period and repeated the erroneous cause. The lake began to form about 15,000 years ago—not 20 billion years as the article noted, or 1.2 billion as the correction noted. And the formation stemmed from glacial activity, not from tectonic activity.

I consider myself something of a corrections connoisseur, and this was one for the ages: not just a correction, but a correction of a correction, a kind of double rainbow in the annals of journalistic malpractice. I enjoy these little nuggets of reportorial atonement not because I take pleasure in the stumbles of my fellow journalists—we all make mistakes—but because they are windows into the culture of a particular media outlet. What does it take for granted? What are its blind spots?

What this correction, appended to an article in the New York Times travel section in 2015, revealed was a depressingly widespread reality: Journalists, as a whole, stink at science.

I’m not blaming the author of the piece, exactly—or not him alone, anyway. But I am blaming the institution for a profound lack of curiosity concerning basic scientific facts. Contrary to popular imagination, newspapers don’t generally have fact-checkers, per se, but reporters and editors are expected to confirm any facts that appear suspect. This manuscript must have passed through the hands of at least a half dozen staffers, each of whom surely attended one of our nation’s finest colleges. You’d think one or two of those editors might have stopped to think, “Hmm, Lake Michigan is 20 billion years old? That sounds like a really long time ago. Maybe I should Google that.” Apparently, no one did. (And, once complaints started coming in, the lowly staffer told to correct the mistake wound up substituting an only slightly less absurd error.)

Twenty billion years isn’t just a really long time ago. It’s a time older than Earth itself. It’s older than the whole lousy universe! In fact, if you accept Stephen Hawking’s theory that time effectively began with the Big Bang, it’s older than time itself. I don’t think someone writing travel articles necessarily needs to know that our planet is 4.5 billion years old, or that the Big Bang happened nearly 14 billion years ago. (Although I’d like to think any educated person asked those questions would be able to get in the right ballpark, much the way I’d hope he could guess in which century the Civil War occurred.) But what’s revealing here is the total lack of curiosity about a core scientific fact. If the author had written that the Chicago Cubs are “frequent World Series winners,” fact-check klaxons would have sounded at every editor’s desk. But make a claim about some scientific fact—even if the number cited is off by a factor of, oh, a million or so—and it will sail quietly through the editing process immune to curiosity or skepticism.

I don’t mean to pick on the New York Times (OK, maybe a little). But if one of the country’s best-funded and most respected news outlets is making egregious errors like this, imagine what’s going on in the rest of the media. And, as the former editor of a mainstream science magazine, I should note that great science journalism does exist. Mary Roach, Ainissa Ramirez, David Quammen, and others write lucidly about science for the general reader. In fact, with veteran reporters including Dennis Overbye and Carl Zimmer, the New York Times itself often produces sterling science coverage.

But in the journalistic landscape at large, accurate coverage of scientific topics is the exception rather than the rule. This is especially true when, as in the case of the Times’ Lake Michigan story, scientific claims surface in articles that don’t focus primarily on science, or when science stories are written by reporters who don’t generally cover those topics. Most reporters have only a sketchy idea of how science actually works. As a result, they’re often not aware when they are repeating misinformation or falling for hype. In fact, journalists and scientists look at the world in fundamentally different ways. Journalists are always looking for compelling narratives, often ones in which sympathetic people fight against some injustice. Science, on the other hand, attempts to establish facts that can be tested in the real world, and it seeks to minimize the distorting effects of passions and personalities. This disconnect leads journalists into several common errors.

For one, reporters have a fatal weakness for emotional anecdotes. Think of every article you’ve ever read about a supposed “cancer cluster” caused by some mysterious toxin. Such pieces typically begin with a portrait of a cancer victim, or perhaps a widow, who is convinced that every case of cancer in her neighborhood must have been caused by a pollutant in the air or water. Anecdotes about victims naturally engage our sympathies, but they tell us nothing about whether these cases were caused by some particular pesticide or toxic waste. Cancer is, sadly, extremely common. And virtually all cases of these “cancer clusters” ultimately turn out to be statistical illusions.

When journalists present individual anecdotes as evidence for broad scientific claims, they are violating a core axiom of statistics—the principle that correlation does not imply causation. Homegrown tomatoes and sunburns are both more common in the summertime. That doesn’t mean eating garden tomatoes causes sunburns. A key aim of the scientific method is to sift meaningful correlations from spurious ones. But a determined reporter can always find examples that seem to “prove” some sort of causation: Look at this avid tomato gardener who got a terrible sunburn. Case closed! This kind of anecdote-driven reporting is a contributing factor in many kinds of scientific disinformation, including vaccine denial, nutritional flimflams, and specious health claims.

Scientists also go to great lengths to avoid confirmation bias. Researchers who cherry-pick data to prop up their assumptions—even inadvertently—risk being discredited. But among journalists, cherry-picking examples to support a predetermined conclusion is often the norm. Of course, this selective approach to facts is compounded by political and cultural biases. These biases influence not just how stories are written, but also which stories are covered.

Look at the science headlines that are most likely to break through to the general public. More often than not, they support narratives that most journalists already agree with: that climate change is behind all extreme weather, for example, or that genetically modified foods might be dangerous, or that most major institutions are deeply racist and sexist. In their eagerness to cover news that conforms to their biases, journalists are often quick to publicize sketchy, preliminary, or clearly dubious scientific studies. For example, a 2012 study concluding that genetically modified corn could have “severe toxic effects” made headlines around the world. Some countries suspended imports of GMO crops. When the deeply flawed study was withdrawn a year later, the retraction got relatively little notice. Similarly, studies claiming to prove widespread “implicit” racial bias initially received glowing coverage. Those studies, and tests based on them, have since been largely discredited. Nonetheless, training programs based on the implicit-bias theory continue to flourish in businesses, government agencies, and other institutions.


TODAY, we need accurate science coverage more than ever. The COVID-19 pandemic has brought out both the best and worst in science journalism. At its best, articles by writers including Ed Yong and Zeynep Tufekci—both writing in the Atlantic—have helped keep the public informed (often ahead of U.S. health agencies, in fact). But mainstream coverage of COVID issues is too often distorted by politics. The media largely ignored questions about coronavirus risks involved in the huge Black Lives Matter protests, for example. But an absurdly flimsy paper claiming that the Sturgis Motorcycle Rally caused more than 260,000 new cases received national attention.

In a September 28 story, the New York Times breathlessly reported that the White House had pressured the CDC to “play down the risk of sending children back to school.” The piece described White House officials searching for “alternate data” showing that the pandemic “posed little danger to children.” No one wants to see unqualified officials overriding the judgment of epidemiologists. But in this case, the fact that COVID-19 poses relatively little risk to children isn’t some Trumpian myth; it’s a growing consensus among health professionals. To bolster its case, the Times cited a study from South Korea claiming that older children “can spread the virus at least as much as adults do.” In reality, those scary claims had been debunked by other experts weeks before. Most research shows that children play only a very small role in spreading the coronavirus.

On Twitter, Zeynep Tufekci challenged the Times to correct the story, saying it featured “really worrying claims unsupported by science.” So how did such a sloppy piece get published in the first place? Because the narrative was too perfect: Trump’s science-denying henchmen try to muscle CDC scientists into supporting their reckless agenda. Never mind that the reality was something closer to the opposite: Out of a distorted sense of caution (what I called the “precautionary paradox” in last month’s column), the agency was dragging its feet on providing a balanced review of the risks facing children. CDC officials even resisted a White House request to break out childhood COVID risks by age groups and instead insisted on including all cases of people under the age of 25 in a single group. The Times described bundling data in this fashion as “normal.” Of course, it is hardly normal not to provide data on how a disease affects different age groups, especially when that was the precise question under debate.

A newspaper less devoted to delivering a certain narrative—and more focused on facts—would have framed the whole article differently: The White House and the CDC disagree on childhood COVID risks. Who’s right? The main thrust of the piece would have been presenting the evidence on both sides and giving readers a responsible overview of the science. Instead, the Times produced a story that focused on a political critique first and then offered a few mangled bits of scientific information to support it. Informing the public about whether or not it is safe for children to return to school was not on the agenda.

And that’s a shame, because complex issues such as how to reopen schools are not purely scientific. They are also political and social questions. How do we balance the dangers of COVID in schools against the very real risks of children being socially isolated and falling behind in their learning? There is nothing wrong with politicians weighing in on such a question, and the public needs to be part of the conversation as well. But those discussions need to be based on solid science, which in turn requires a media that honestly and accurately conveys the latest research.

COVID has taught us, if we didn’t know already, that science matters. If only our journalistic elite agreed.
