Mark Zuckerberg and his top lieutenants at Facebook publicly claim that the tech platform applies the same rules to all of its nearly 3 billion users, whether they are ordinary people or titans of politics, entertainment, and sports. But an exposé published this week in The Wall Street Journal demonstrated that this isn’t true. Relying on internal Facebook documents, the Journal showed that the company has a separate content moderation system that shields millions of VIPs from enforcement standards that are routinely applied to rank-and-file users.
By now, this shouldn’t be a surprise. Time and again, Facebook makes claims that turn out to be less than candid. Consider its declarations about protecting user privacy, which were undercut by the Cambridge Analytica revelations in 2018. Or the company’s attempt to play down Russian manipulation of the platform to interfere in the 2016 presidential election.
This week, coinciding with the Journal’s scoop, the Center for Business and Human Rights at New York University’s Stern School of Business published a report on social media’s role in fostering political polarization. The Center, where I serve as deputy director, decided to investigate this topic in part because Facebook has been outspoken in denying that it plays any such role.
“Some people say that the problem is that social networks are polarizing us, but that’s not at all clear from the evidence or research,” Zuckerberg testified before a U.S. House of Representatives subcommittee in March. He pointed to alternative culprits: “I believe that the division we see today is primarily the result of a political and media environment that drives Americans apart.”
Backing up his boss, Nick Clegg, Facebook’s vice president for global affairs and communications, subsequently argued in an article on Medium: “What evidence there is simply does not support the idea that social media, or the filter bubbles it supposedly creates, are the unambiguous driver of polarization that many assert.”
But if you read the relevant academic articles and interview the experts who wrote them, you discover that the evidence points to Facebook and other tech platforms playing an important part in the polarization story in the United States. Specifically, in a society that has become increasingly polarized, the use of social media may not create partisan divisiveness in the first instance, but it does exacerbate it.
Contrary to Facebook’s contentions, many experts have concluded that the use of social media does contribute to partisan animosity in the U.S. In an article in Science in October 2020, a group of 15 researchers from universities including Harvard, Stanford, and NYU concluded: “In recent years, social media companies like Facebook and Twitter have played an influential role in political discourse, intensifying political sectarianism.”
More recently, in an August article in the journal Trends in Cognitive Sciences, a quintet of researchers reinforced this idea based on their review of the empirical evidence: “Although social media is unlikely to be the main driver of polarization, we posit that it is often a key facilitator.”
In one of the leading experiments on this question, researchers paid American subjects to stop using Facebook for a month, until just after the 2018 midterm elections. The randomized study involved 2,743 people, including a control group that continued to use Facebook. After the experiment, the researchers surveyed participants and reported their results in March 2020. While staying off Facebook didn’t significantly reduce divisiveness based on party identity, it did lessen polarization related to policy issues.
“That’s consistent with the view that people are seeing political content on social media that does tend to make them more upset, more angry at the other side [and more likely] to have stronger views on specific issues,” Matthew Gentzkow, a Stanford economist and one of the study’s co-authors, told my team at NYU’s Center for Business and Human Rights.
Understanding the true role of social media in heightening partisan hatred matters to our democracy. The kind of severe polarization that now characterizes U.S. politics has important consequences. These include the erosion of trust in elections and the unwillingness to accept scientific facts, such as the value of masking and vaccination in the face of a lethal pandemic. As illustrated by the January 6 insurrection at the Capitol—an event incited and organized on social media—extreme polarization also can lead to physical violence that threatens the very functioning of our national government.
Some of these consequences have appeared on the political left, such as when certain Black Lives Matter protests during the summer of 2020 devolved into looting, arson, and assaults on police. But far more often in recent years, it has been the political right that has driven polarization and its corrosive effects.
Addressing social media’s contribution to extreme divisiveness won’t be easy. That’s because the feature responsible for this contribution is the fundamental design of the automated systems that run the tech platforms. “Social media technology employs popularity-based algorithms that tailor content to maximize user engagement,” the co-authors of the Science article wrote.
Maximizing engagement increases partisan polarization, they added, especially within “homogeneous networks,” or groupings of like-thinking users. This is “in part because of the contagious power of content that elicits sectarian fear or indignation,” they stated.
As we noted in our report, social media companies do not seek to boost user engagement because they want to intensify polarization. They do so because the amount of time users spend on a platform liking, sharing, and retweeting is also the amount of time they spend looking at the paid advertising that makes the major platforms so lucrative. Content that elicits partisan fear or indignation is particularly contagious and helps fuel this advertising business model. In 2020, advertising provided 98% of Facebook’s $86 billion in revenue. Google, which includes YouTube, reported $182 billion in revenue, 81% of which came from advertising.
Given these enormous pecuniary interests, persuading the tech platforms to rely less on engagement won’t be easy. That’s why our report recommends federal regulation. Congress should empower the Federal Trade Commission to draft and enforce new standards for industry conduct. One requirement ought to be greater transparency about how now-secret social media algorithms rank, recommend, and remove content. This kind of disclosure would lead to more accountability, as regulators, lawmakers, and the public gain better insight into the pathologies associated with widespread use of tech platforms.