Today let's talk about some of the most rigorous research we've seen to date on how social networks influence politics, and the predictably intense debate over how to interpret it.
I.
Even before 2021, when Frances Haugen rocked the company by releasing thousands of documents detailing its internal research and discussions, Meta faced repeated calls to collaborate with academics in the social sciences. I have argued that it is ultimately in the company's interest to do so, since the absence of good research on social networks has led to strong beliefs around the world that social networks are harmful to democracy. If that's not true, as Meta insists it isn't, the company's best course of action is to enable independent research on that question.
The company long ago agreed to do so, in principle. But it has been a rocky road. The Cambridge Analytica data privacy scandal of 2018, which stemmed from an academic research partnership, has made Meta understandably anxious about sharing data with social scientists. A later project with a nonprofit called Social Science One went nowhere: Meta took so long to produce data that the project's biggest backers quit before anything of note was published. (It was later revealed that Meta had accidentally provided the researchers with flawed data, effectively ruining the research in progress.)
Three papers sought to understand how the Facebook news feed affected users’ experiences and beliefs
In one experiment, researchers prevented Facebook users from seeing any reshared posts; in another, they displayed Instagram and Facebook feeds in reverse chronological order rather than in the order determined by Meta's curation algorithms. Both studies were published in Science. In a third study, published in Nature, the team reduced by one-third the number of posts Facebook users saw from "like-minded" sources, meaning people who share their political leanings.
In each of the experiments, these tweaks changed the kind of content users saw: Removing reshared posts meant that people saw far less political news and less news from untrusted sources, for example, but more uncivil content. Replacing the algorithm with a chronological feed resulted in people seeing more untrustworthy content (because Meta's algorithm downranks sources that repeatedly share misinformation), although hateful and intolerant content was nearly halved. Users in the experiments also ended up spending much less time on the platforms than other users, suggesting that the platforms had become less compelling.
By themselves, the findings fail to confirm the arguments of Meta's harshest critics, who believe that the company's products have played a leading role in polarizing the United States and threatening democracy. But they also don't suggest that changing the feed in the way some lawmakers have called for, by making it chronological rather than ranking posts algorithmically, would have a positive impact.
"Surveys during and after the experiments show that these differences do not translate into measurable effects on users' attitudes," Kupferschmidt writes. "Participants did not differ from other users in how polarized their views were on issues such as tourist immigration, Covid-19 restrictions or racial discrimination, for example, or in their knowledge of elections, their trust in the media and political institutions, or their trust in the legitimacy of elections. They too were more or less likely to vote in the 2020 elections.”
II.
Against this somewhat confusing backdrop, it is not surprising that a battle has erupted over what conclusions we should draw from the studies.
Meta, for its part, suggested that the findings show that social networks have only a limited impact on politics.
"Although questions about the impact of social media on key political attitudes, beliefs, and behaviors are not fully resolved, the empirical findings add to a growing body of research showing that key features of Meta's platforms alone have detrimental 'effective' polarization or meaningful effects on these outcomes," Nick Clegg, the company's president of global affairs, wrote in a blog post: "They also now challenge the common claim that the ability to re-share content on social media drives polarization."
But behind the scenes, as Jeff Horwitz reports in The Wall Street Journal, Meta and the social scientists are fighting over whether that's true.
Horwitz writes:
The academics leading the work, New York University professor Joshua Tucker and University of Texas at Austin professor Talia Stroud, said that while the studies show that simple algorithm changes did not make test subjects less polarized, the papers include caveats and possible explanations for why such limited changes failed to shift users' overall views of politics in the final months of the 2020 election.
"The findings of these papers do not support all of those assertions," Stroud said. Clegg's comment "is not a statement we would make."
Science headlined its package on the studies "Wired to Split," which led to this wonderful detail from Horwitz: "Representatives of the publication said that Meta and the outside researchers had asked it to add a question mark to the title to reflect the uncertainty, but that the publication declined, deeming its representation of the research fair."
Meagan Phelan, who worked on the package for Science, wrote to Meta earlier this week that the journal's findings did not exonerate the social network, Horwitz reported. "The research findings suggest that Meta's algorithms are an important part of what keeps people divided," she wrote.
What to make of all this?
While researchers struggle to draw definitive conclusions, some things seem clear.