What journalists get wrong about social science, according to 20 scientists

Brian Resnick is Vox’s science and health editor, and is the co-creator of Unexplainable, Vox's podcast about unanswered questions in science. Previously, Brian was a reporter at Vox and at National Journal.

There's a constant conflict between social scientists and the reporters who cover them. It's derived from "a fundamental tension between the media's desire for novelty and the scientific method," as Sanjay Srivastava, who researches personality at the University of Oregon, tells me.

He's right. Journalists have a need for digestible headlines that convey simple, accessible, and preferably novel lessons. The scientific method stresses a slow accumulation of knowledge, nuance, and doubt. "By the time some piece of scientific knowledge is solid enough to be worth spending money and changing lives over, the idea should not be 'new' anymore," Srivastava says.

As a result, journalists can often write eye-catching stories that appeal to a popular audience but miss the bigger picture. This tension may never disappear. But in an effort to bridge the gap, I recently asked a few dozen psychologists and social scientists a simple question: "What do journalists most often get wrong when writing about research?"

I received 20 responses, and here's a summary of what I learned:

(Read the complete responses here.)

1) Journalists often want clear answers to life and social problems. Individual studies rarely deliver that.

It's common in psychology journalism to read stories like:

  • Here's how science says we can be happier.
  • Here are science-based tips for working more effectively.
  • Here's how to increase civil discourse between political foes.
  • Here's how to raise well-adjusted kids.

But it's unlikely that scientists will "solve" any of these problems with a single study or piece of research. These are the sorts of big questions that take decades of work to sort out.

"I want to know the answer too!" Jason Reifler, a political scientist at the University of Exeter, writes. "But, I don't have more than a tentative answer yet. And it will probably be something I continue to work on the rest of my career."

2) Journalists need to realize that most research is extremely narrow in focus

Simple cause-and-effect relationships are easy to convey to readers. Does more money increase happiness? Does sitting all day cause depression and weight gain?

Few studies are designed to answer such broad questions.

"It's extremely rare in the study of humans to find simple, single cause/single effect relationships," says Kevin Smith, who researchers political psychology at the University of Nebraska Lincoln.

And studies aren't usually designed to show simple relationships.

"Most scientists I know are focused instead on the conditions under which something happens (e.g. money does increase happiness if you're poor)," Jay Van Bavel, a social psychologist at New York University, writes. "Adding one or two simple qualifiers like this provides much more explanatory power and allows us to get beyond silly debates."

3) Journalists are obsessed with what's new. But it's better to focus on what's old.

If a study comes out that finds something surprising, journalists tend to flock to it. But scientists tend to be more skeptical — especially if it's the first study to come to this conclusion. One study with a statistically significant result could be a fluke. That doubt shrinks every time the finding is replicated.

"The cumulative body of research on a phenomenon is what is needed more than publishing the most recent studies," writes Joe Magee, who researches organizational behavior at NYU.

That's why meta-analyses are extremely useful. These are papers that examine and synthesize an entire body of research conducted to date on a particular question. Yet they don't get as widely circulated in the media.

"A meta-analysis will get the same — or less — attention than a flashy new study," Katie Corker, a social psychologist at Kenyon College, writes.

And be wary of outliers. If one study reaches a conclusion that contradicts the established body of research, it doesn't debunk that research.

"If one study out of 20 finds a different result, [journalists might consider the field] 'controversial,'" writes Jean Twenge, who studies generational differences at San Diego State University. "That's not controversial. It's just variations in science, and 95% of the evidence still points one way."

4) A lot of research is conducted on college students. College students are not normal.

There are a lot of weird things about college students. They're usually from wealthier families. They are largely unemployed. They don't stick to regular schedules.

Psychologists use them in studies because they're convenient. But while they might yield findings that are worthy of consideration (and replication in more diverse samples), they don't produce universally generalizable results.

(The same goes for studies on mice. Those are often a helpful starting point in understanding how biology is linked with behavior. "But you can't generalize an animal study to humans," Michael Grandner, who studies sleep at the University of Arizona, writes.)

5) Journalists should be more careful with percentages

Let's say a paper comes to this conclusion: Among people who watch more than seven hours of television a day, there's a 50 percent greater chance they'll develop diabetes.

That doesn't mean that anyone who watches seven-plus hours a day has a 50-50 chance of developing diabetes.

"If something is 50 percent more likely than an alternative, but that alternative only happens 1 percent of the time, you now have something that happens 1.5 percent of the time — it's still rare," Grandner notes.

This mistake comes up a lot in science reporting, so it's something to watch out for.
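To make the arithmetic concrete, here's a minimal sketch in Python that works through Grandner's hypothetical numbers. The 1 percent baseline and the 50 percent relative increase are his illustrative figures, not results from any real study of television and diabetes.

```python
# Relative vs. absolute risk, using the hypothetical numbers from the
# quote above: a 1% baseline rate and a "50 percent more likely" finding.
# These figures are illustrative, not real health statistics.

baseline_risk = 0.01        # 1% of people develop the condition anyway
relative_increase = 0.50    # "50 percent more likely"

absolute_risk = baseline_risk * (1 + relative_increase)

print(f"Baseline risk:               {baseline_risk:.1%}")                  # 1.0%
print(f"Risk for heavy TV watchers:  {absolute_risk:.1%}")                  # 1.5% -- still rare
print(f"Absolute increase:           {absolute_risk - baseline_risk:.2%}")  # 0.50 percentage points
```

The headline-friendly "50 percent" describes the relative change; the absolute change, half a percentage point, is what actually matters to an individual reader.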

6) There's a difference between real-world significance and statistical significance

Another cautionary note: When a researcher says results are statistically significant, it means the results are unlikely to be the product of random chance. It doesn't mean the effect is large or meaningful.

"A group of people with an average height of 70 inches and another that is 71 inches may be statistically significantly different, but the difference isn't really that meaningful," Grandner points out.

7) Scientists love using technical terms. Journalists are often too loose in translating them.

Psychologists use highly specific terms in their research. As a service to their readers, reporters try to translate that jargon into everyday language. This is essential for easy reading, but a lot of important nuance can get lost in the process.

Betsy Levy Paluck, a social psychologist at Princeton, writes that her research on "highly connected kids" in social networks is often described as work on "cool kids." But that's misleading. "Some of these highly-connected kids are really not cool," she writes.

8) Journalists should also be skeptical of studies based on self-reporting

What people say and what they do are often two completely different things.

"Many journalists take for granted that self-reported behaviors are the same as actual behaviors," Levy Paluck writes. "The studies that capture actual behaviors (spending, test scores, observed interactions, and the like) are much rarer than those that simply ask people to report their behaviors."

9) Unpublished research can be useful. But it should be treated more skeptically.

Scientists will put working drafts of papers on personal websites. University press departments will promote academic conference presentations as they would a newly published paper.

If it "hasn’t yet been peer-reviewed (or judged acceptable for publication by other academics working in the field) that means there may be additional problems with the work, or maybe it hasn’t been replicated yet," Ingrid Haas, who researches political psych at the University of Nebraska Lincoln, writes. "In a nutshell, the work might be less reliable."

10) Always direct readers back to the original research

W. Keith Campbell, who researches narcissism at the University of Georgia, urges all reporters to link to the full text of research reports.

"I had a recent paper that was used as material for a clickbait story (something like people see Star Wars to meet their narcissistic needs)," Campbell writes. "Many readers were annoyed. ... And then people started reading the actual article. This [led] to what I thought was a very interesting online discussion."

11) And finally: Correlation is not causation

This came up time and time again. Write this one on your mirror and make it a daily affirmation.

"If people in group A are more likely to eat carrots, eating carrots may or may not cause you to become a member of group A," Grandner writes. "It just happens more often. It's the next study that will have to study cause and effect."
