We learned from whistleblower Frances Haugen and internal Facebook documents that the algorithm decides which posts of your selected friends and groups are pushed to you ahead of others.
And the profit-maximizing algorithm knows that the way to seize and hold your attention, so it can serve you a targeted, high-priced ad, is to send you provocative, incendiary content. The company programs, maintains, and tweaks the algorithm, so the company is responsible for its actions.
“Facebook’s own research shows that content that’s hateful, that is divisive, drives engagement. It’s easier to inspire people to anger than other emotions,” Haugen told lawmakers.
If a TV outlet chooses to broadcast a viewer’s comment or a newspaper publishes a letter to the editor, they can be held financially responsible if, with reasonable effort, they could have known that the content was false and defamatory. The person who made the comment can also be held liable, but that person usually doesn’t have deep pockets and isn’t making money from the broadcast or publication.
But if the same person posts the same comment on Facebook, and the company chooses (via algorithm) to amplify it to millions of users, then the person can be held liable for defamation, but Facebook cannot. Why? Because a law Congress passed back in 1996 to shield internet service providers from liability for whatever one user might say to another over the new internet infrastructure is now being applied to companies like Facebook that ride over that infrastructure.
Facebook, Instagram, and other social networks describe themselves as neutral platforms that simply connect users, like a bulletin board. If they really were a bulletin board, where users could choose which posts they wanted to see, then they shouldn’t be responsible for what users say. But if they actively choose which posts go on your bulletin board and a different set for mine, and which posts to send to millions of people so they can make money selling ads, then they are a publisher and should be held responsible if, with reasonable effort, they could have determined the content to be false, malicious, or defamatory.
Various bills have been proposed in Congress to revise or repeal Section 230. Some of the proposed solutions remove the liability shield if the content falls into a certain category, such as promoting terrorism or sex trafficking or violating civil rights, or if it is thought to harm certain users, like those under 18. Those solutions require judgments about the nature of the posts and may violate First Amendment rights.
First Amendment protections for speech, particularly political speech, have coexisted with libel and defamation liability for two centuries. As new technologies evolved, from pamphlets to newspapers to telephones, radio, and television, the courts developed reasonable standards for defamation and libel depending on the context. There is no reason we can’t evolve a sensible balance again.
Section 230 should be revised so that powerful profit-maximizing social networks bear responsibility for the content that they choose to amplify, whether by human decision or algorithmic decision programmed by humans. If the content they selectively amplify and deliver to particular profiled users to drive profitability is false and defamatory, they should not be shielded from liability.
Jeff Bewkes served as the CEO of Time Warner from 2008 to 2018, where he oversaw companies including CNN, HBO, Warner Bros., Time Inc., and, at one time, AOL. Before leading Time Warner, Jeff served as chairman of Time Warner’s entertainment and networks group, and prior to that he was the CEO of HBO, where he oversaw the company’s move to produce original content.