For a brief manic moment in October 2016, a significant corner of the internet fell victim to a fantastic hoax: that Pope Francis had endorsed Donald Trump for U.S. president. That headline ricocheted around the world in the form of tweets, Facebook updates, and astounded emails.
The news, of course, wasn’t true.
If 2016 was the year in which so-called “fake news” helped propel Trump into the White House, 2017 was the year in which social scientists — along with FBI and congressional investigators probing Russia’s election-year campaign to boost the real estate mogul’s electoral chances — began to reckon with its impacts.
Philip N. Howard of Oxford University and Alice Marwick of the University of North Carolina at Chapel Hill are two such academics working at the cutting edge of a movement dedicated to understanding how information — and especially misinformation — makes its way around the internet and into the minds of people around the world.
Howard heads the Computational Propaganda Project at Oxford, an interdisciplinary research group that combines the methods of computer science, political science, and sociology to examine how the internet and social media can be used to manipulate public opinion. His project has brought scientific rigor to bear on the phenomenon of fake news by systematically cataloging and analyzing how bots and automation are influencing public opinion.
Trained as a sociologist, Howard got his first taste of the fake news phenomenon while living in Budapest in 2012, when allies of Hungarian Prime Minister Viktor Orban used the power of innuendo and rumor to cast aspersions on the country’s Roma population. When Russian-backed separatists shot down Malaysia Airlines Flight 17 over eastern Ukraine in 2014, Howard’s Hungarian friends began passing along the many conspiracy theories promoted by the Kremlin to explain the disaster — among them, for example, that the U.S. military had downed the packed airliner.
With this development, Howard’s long-standing research interests suddenly intersected with the world of online propaganda. “I’m interested in studying how political elites manipulate people — and how they do that over the internet,” Howard says.
Howard’s work has provided some of the first hard data on how fake news and propaganda target internet users. By building a huge database of election-related tweets, Howard and his researchers have shown, for example, how misinformation shared on Twitter during the 2016 U.S. election was concentrated in swing states. Another study of more than 20,000 Facebook and Twitter users found that Russian propaganda during the 2016 election targeted veterans of the U.S. military with conspiracy theories and misinformation. In a study of 28 countries, Howard found that “every authoritarian regime has social media campaigns targeting their own populations, while only a few of them target foreign publics.”
While Howard has built huge datasets to examine the fake news phenomenon, Marwick is one of a growing number of anthropologists investigating the internet’s thriving subcultures, such as 4chan and 8chan, two anarchic online message boards that have served as organizing tools for the alt-right and springboards for fake news hoaxes. The lead author of a foundational report on online fake news, completed while she was a fellow at the Data & Society Research Institute, Marwick brings the tools of academic ethnography to these online communities, which have an outsized influence on the online conversation but remain misunderstood and fairly obscure.
Understanding these online communities with the precision of an anthropologist presents an opportunity to also understand how misinformation flourishes in the internet era. Amid the high drama of the 2016 election, 4chan mobilized as a kind of online strike force for the Trump campaign. Beginning in October 2016, users of this online message board organized to comb through emails belonging to Hillary Clinton operative John Podesta and likely leaked by Russian agents. 4chan users played a key role in popularizing the so-called “Pizzagate” conspiracy theory, which held that Democratic officials were involved in a child sex ring headquartered at a Washington, D.C., pizzeria.
That conspiracy theory gained a measure of closure when one of its adherents walked into the pizzeria armed with an assault rifle to “self-investigate” the issue. After walking around the restaurant, Edgar Maddison Welch found no evidence of a sex ring, to the dismay of 4chan’s many promoters of the theory. Yet the episode presents a disquieting example of how an online fever dream can have real-world impacts.
“The same type of processes in social media that allowed for the Arab Spring uprising or Wikipedia were now being used by a completely different group of people to spread rumors and particular agendas that were coming from a position that was fundamentally anti-democratic,” Marwick says.
Governments are beginning to comprehend and take advantage of these dynamics. Understanding this flow of online information is key to preserving democratic ideals of a well-informed citizenry. “There is a foreign interest in creating confusion through campaigns of manipulation,” Howard said in November. “Instead of censorship, we must find more ways to produce good, free political speech.”
Scholars of the internet tend to speak of the internet’s power as a tool for “good faith coordination” — organization by a social movement, for example. Marwick and Howard are extending that concept into the idea of “bad faith coordination,” says Ethan Zuckerman, the director of the Center for Civic Media at the Massachusetts Institute of Technology.
“We are beyond the moment where studying the internet is a technical challenge,” Zuckerman adds. “It’s now a social science challenge.”
Elias Groll is a staff writer at Foreign Policy.
In its examination of misinformation on Twitter in American swing states during the 2016 election, Howard’s team studied 22,117,221 tweets.