The swirl of reports documenting an uptick in fake news in the weeks preceding the midterm elections doesn’t surprise Gita Johar. As the Meyer Feldberg Professor of Business at Columbia Business School, she can tell you it was entirely predictable.

“The problem of amplification of this information is not going away,” she says of the social media tools widely deployed to spread misinformation. “This is something that has multiplied and taken off and that’s not gone away this election because it’s still the same thing—whether it was bots or Russian disinformation campaigns in 2016 or whoever was behind it—now what’s happened is it’s almost part of the fabric because of the political divide.”

A trio of research papers spearheaded by Johar sheds light not only on why some people are more apt to share factually inaccurate news, but also on who is most likely to circulate false reports. These insights may prove especially relevant at a time of increased political scrutiny of social media platforms. They also provide perspective as new dynamics emerge with Tesla CEO Elon Musk’s completed acquisition of Twitter, following his earlier comments describing an intent to reduce limits on what users can post.

Why People Accept Fake News

The first study Johar focused on, “Perceived social presence reduces fact-checking,” published in 2017, examined a central question: When are people spurred to fact-check? According to the findings, people who believe they are in the presence of others, such as members of an online community like Facebook, tend to be less vigilant and fact-check less, even when faced with questionable information.

“You really need to raise people’s guard and somehow make them more skeptical so that they start fact-checking the news,” Johar says.

To investigate what prompts individuals to fact-check, Johar and her collaborators presented research subjects with information designed to be ambiguous, then asked a series of questions: Is this information true or false? Do you want to fact-check it? Broadly, the results showed that people don’t want to make the effort, especially when they believe they are in the presence of others.

This led Johar to suggest one straightforward tool that could be effective at curbing misinformation: building awareness. Her study showed that heightening vigilance online makes people more likely to fact-check. “You need to get them to be more vigilant,” Johar says. She sees potential benefit in online messages reminding readers to remain alert to misinformation.

In analyzing the reluctance to fact-check, she concluded, “It’s hard to fact-check every piece of ambiguous information you come across, because there are very few fact-checking organizations.” She notes the ongoing efforts of Snopes and PolitiFact, along with the nonprofit media institute Poynter, but says they are insufficient. “This is a huge problem because fake news is not just a US problem—and around the globe there aren’t enough fact-checking organizations.”

How We Can Curb Fake News

To Johar, this inherent friction suggests several possible interventions. In current work, she is exploring the use of crowd-sourcing to amplify fact-checking efforts. Together with Columbia Business School PhD student Yu Ding (now an assistant professor at Stanford Graduate School of Business), she identified a way to pose questions that makes people more inclined to fact-check, regardless of their political ideology or prior beliefs. In the research, individuals were asked about two hot-button topics: climate change and Covid-19 vaccination. When they were asked how similar one news article was to another, the individuals were able to form judgments in an unbiased way. But when the study subjects were asked whether an article was true or false, they tended to rely on their prior beliefs, Johar says.

This observation suggests a straightforward path to action: making crowd-sourced fact-checking tools easily available could have a broad impact, as long as they include similarity-based questions rather than just veracity questions that are biased by prior beliefs. She cites the Wikipedia model as an example. “We also find that if you involve citizens—regular people—in fact-checking, they trust the fact-checking system more,” she explains. “You need everyone to trust the fact-check, and we find that inviting people to the fact-checking party makes them feel trust in the system.”
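To make the similarity-based approach concrete, here is a minimal sketch of how a crowd-sourced tool might turn similarity judgments into a veracity estimate without ever asking raters the belief-laden question “Is this true?” The reference set, the weighting scheme, and every name in the code are illustrative assumptions, not the protocol from Johar and Ding’s research.

```python
from statistics import mean

# Hypothetical reference articles whose veracity was already established
# by professional fact-checkers (1.0 = verified true, 0.0 = verified false).
REFERENCE_LABELS = {
    "ref_article_a": 1.0,
    "ref_article_b": 0.0,
    "ref_article_c": 1.0,
}

def veracity_from_similarity(ratings: dict[str, list[float]]) -> float:
    """Estimate the veracity of a target article from crowd judgments.

    `ratings` maps each reference article to the crowd's similarity scores
    (0 = not at all similar, 1 = nearly identical) against the target.
    """
    weighted, total = 0.0, 0.0
    for ref_id, sims in ratings.items():
        avg_sim = mean(sims)  # aggregate the crowd's similarity judgments
        weighted += avg_sim * REFERENCE_LABELS[ref_id]
        total += avg_sim
    # Similarity to verified-true references pulls the score toward 1;
    # similarity to verified-false references pulls it toward 0.
    return weighted / total if total else 0.5

# Example: the crowd finds the target far more similar to the debunked
# reference than to the verified ones, yielding a low veracity estimate.
crowd_ratings = {
    "ref_article_a": [0.2, 0.1, 0.3],
    "ref_article_b": [0.9, 0.8, 0.85],
    "ref_article_c": [0.15, 0.2, 0.1],
}
print(f"Estimated veracity: {veracity_from_similarity(crowd_ratings):.2f}")
```

The design choice mirrors the research finding: raters only compare articles, a task they appear to perform without partisan bias, while the veracity labels come from a small, professionally checked reference set.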

Identifying Who Shares Fake News

Johar sees other potential remedies based on understanding who is most likely to share fake news. Her work on Covid-19 news revealed that people who were more willing to share news (true or false) reported feeling marginalized: they either placed themselves at the bottom of the socioeconomic ladder or said they had experienced discrimination.

“People who are socially marginalized are particularly likely to be looking for a kind of meaning—and just the act of sharing helps them do that,” she says.

In a working paper, Johar and her co-authors text-mined the Twitter feeds of users they identified as having shared fake news. About 10 percent of Twitter users are the dominant fake-news sharers, she says.

“We find that people who share fake news tend to be, on average, more anxious and angry than a regular Twitter user,” she says. By studying these characteristics, the research team has built a predictive model that anticipates the likelihood that a person will share fake news.
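The sketch below shows the general shape of such a model: a simple classifier trained on language-derived emotion signals. The feature names, the synthetic data, and the choice of logistic regression are assumptions made for illustration, not the team’s actual model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Hypothetical per-user features mined from tweet text, e.g. the share of
# words expressing anxiety or anger, as an emotion lexicon might score them.
anxiety = rng.uniform(0, 1, n)
anger = rng.uniform(0, 1, n)
X = np.column_stack([anxiety, anger])

# Synthetic labels that mirror the article's finding (more anxious and
# angry users are more likely to share fake news), purely for the demo.
p_share = 1 / (1 + np.exp(-(3 * anxiety + 2 * anger - 2.5)))
y = rng.random(n) < p_share

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Score a new user: the estimated probability that they will share fake news.
print("P(sharer):", model.predict_proba([[0.8, 0.7]])[0, 1])
print("Held-out accuracy:", model.score(X_test, y_test))
```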

“We know that anxiety, for example, is something that motivates fake news sharing,” she says. “What you can do is in times like elections when people are sharing a lot of fake news, maybe bringing down anxiety levels might reduce the sharing of fake news.”

But while reducing anxiety nationally and internationally is a worthy if ambitious goal, Johar notes that social media platforms already have a range of options at their disposal.

“They have access to so much data—much more than we did in building our model—and they need to prioritize people and/or their posts for fact-checking,” she says. “They don’t have to use our model, but they need not to just throw up their hands.”

While platforms cannot fact-check every piece of news that is posted online, they could fact-check posts from those flagged as having a high likelihood of sharing fake news. Checks could be conducted in real time against fact-checking sites, using people or AI systems. If a post is flagged as false, it can be sent back to the user for editing along with the fact-check. If found to be true, it can be posted right away, without much of a lag, she says.
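A rough sketch of that triage logic appears below: publish low-risk posts immediately, and route only high-risk posts through a real-time check. Everything here (the threshold, the toy rules, and the helper functions) is a hypothetical stand-in for a platform’s own systems.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    is_false: bool
    source: str  # link to the supporting fact-check

FLAGGED_USERS = {"user_123"}  # users a predictive model has flagged
RISK_THRESHOLD = 0.8          # assumed cutoff for "high likelihood" sharers

def risk_score(user_id: str) -> float:
    """Stand-in for a predictive model like the one described above."""
    return 0.9 if user_id in FLAGGED_USERS else 0.1

def fact_check(text: str) -> Verdict:
    """Stand-in for a real-time check against fact-checking sites,
    performed by human reviewers or an AI system."""
    is_false = "miracle cure" in text.lower()  # toy rule for the demo
    return Verdict(is_false, "https://example.org/fact-check")

def handle_post(user_id: str, text: str) -> str:
    if risk_score(user_id) < RISK_THRESHOLD:
        return "published"            # low-risk users see no added lag
    verdict = fact_check(text)        # only high-risk posts are checked
    if verdict.is_false:
        # Send back to the author for editing, with the fact-check attached.
        return f"returned for editing (see {verdict.source})"
    return "published"                # posts that check out go up right away

print(handle_post("user_123", "This miracle cure works!"))  # returned for editing
print(handle_post("user_456", "Lovely weather today."))     # published
```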

Another idea is admittedly more controversial: Johar would ideally like to see policy changes under which social media platforms would answer to the Federal Communications Commission (FCC) and be required to have systems in place to guard against misinformation.

“Social media and the internet evolved organically without any pause to put any of those policies in place,” she says. “I know that they run up against issues of the First Amendment and other things, but what you need is a proper regulatory framework.”