Recent decisions by social media platforms to weigh in on political disputes by hiding or warning of ‘misinformation’ have refocussed the spotlight on their role in public discourse, and rightly so. QAnon sites have been driven out of the mainstream, Facebook has banned militia groups, and the President’s Tweets have been limited. Some applaud the moves while others are outraged. Two weeks ago, Jack Dorsey of Twitter, Mark Zuckerberg of Facebook, and Sundar Pichai of Google testified before Congress to help us decide what to do. We watch anxiously to see what will become of this. In the meantime, we need a workable means to rein in online misbehavior.
It happens too often: An ex-boyfriend posts a stream of nasty comments on Facebook. The police can do nothing without an ‘imminent threat,’ and the courts issue a Temporary Restraining Order (TRO) – but the ‘threats’ appear minor, and no one enforces it. The ex-boyfriend becomes increasingly nasty and finally decides to ignore the TRO. We see another preventable assault…or worse.
This is an extreme example of ‘online harassment’ – using social media or the Internet for nefarious ends. The harassment ranges from trolling to revenge porn to cyberstalking. There are, rightfully, laws on the books for severe offenses, but what about the minor, yet still startlingly wrong, instances?
The mood on social media – arguably a problem from its advent – is getting worse. There is ‘doxing,’ with Twitter users identifying people in viral videos; there is anonymous gainsaying, where random people attack other commenters; and even some borderline threats are now seen as acceptable. I’ve heard a number of female personalities say that they’ve received rape threats but just ignore them, because they believe they can do nothing about it.
Internet platforms like YouTube are rife with channels that entertain conspiracy theories. If you search for ‘JFK,’ ‘Princess Diana,’ or even ‘Coronavirus,’ you get a long list of deceptive videos published by these ‘theorists.’ Once you’re drawn in, you’re almost convinced that the moon landing was faked and the earth is flat.
The lack of redress options leads to some hilariously desperate attempts to call out – even frame – muckraking extremists.
The result is that a significant number of people are quietly brainwashed. As a doctor, I once had a patient whose father died of cancer. The man had had numerous encounters with our RCMP, and he asked me whether the police might have given his dad cancer. I was shocked – but even more, alarmed that he could believe such a thing.
The atmosphere created by the preponderance of conspiracy theories makes us doubt everything. Smoke does not always imply fire, but conspiracy theorists latch on to incidental ambiguities as though they were proof of something underhanded.
If ‘Twitter Wars’ are allowed to continue and escalate, and conspiracy theories are allowed to flourish, that’s when we see petty rivals kill each other, militants shoot up a Pizza Parlour, or people think that the police killed their father.
Experts suggest several options to deal with the problem:
First, Congress could legislate more tightly. Calls to repeal Section 230 of the Communications Decency Act are growing louder. Proponents argue that social media platforms should be stripped of their protections from liability if they decide to act as true media outlets – editorializing and choosing which posts are acceptable. Other legislation could be drafted to limit what platforms may do with the information shared on their sites. These are the options Senate leaders are considering.
I am not a believer in the government passing laws to limit speech. The solution to speech you disagree with is, I believe, to counter it with more speech, not less.
Another option is to urge social media platforms to self-regulate. They could scrutinize posts on their sites more closely or even ban users. Personally, I like that plan even less. Comedian Sacha Baron Cohen made an eloquent plea for social media to act, eventually leading celebrities like Kim Kardashian and Mark Ruffalo to boycott Instagram temporarily to hammer the point home.
I think it was misguided. Censorship of any kind is a mistake – even if it’s just categorizing some opinions as “misleading.” Twitter initiatives to block, mute, and otherwise limit unwanted or nasty comments do nothing to affect attitudes and, in my opinion, strengthen echo-chambers by preventing the challenge of established views.
Businesses make decisions based on profit, and expecting them to police even the ‘Twitterverse’ is unrealistic. They will always have their bottom line in mind, not the good of society. I’m okay with that. It’s not their role. I would much rather have a judge rule on an ex-boyfriend’s threats than Mark Zuckerberg.
The last approach is to do nothing and keep going the way we’re going. While I have issues with the options above, I know that this cannot happen either. Nasty tweets and outright lies degrade public discourse and build ever more outrage in society. Pervasive conspiracy theories make people uneasy to the point that they trust nothing they read. Questioning what you’re told is acceptable; doubting everything is not.
I propose shifting legal responsibility onto users. People who misuse these platforms should be easier to sue, and I think libel laws should be loosened. Current law requires that targets of libel prove harm – and treats ‘public figures’ as fair game – which shows how inadequate the system is. There should be a legal means to silence the trolls and confront bigotry in all its forms.
There are signs that this ‘tort’ approach could work. Recently, the father of a six-year-old murdered at Sandy Hook successfully sued a conspiracy theorist who claimed the shootings were a hoax. The jury evidently felt it had to do something to stop the lies – a sign that society is sick of letting these views run rampant.
Only one ‘Term of Service Agreement’ is truly needed in my view: You are responsible for anything you post. That’s it. Let’s actualize that by exposing users to liability. Over time, people who openly use the Internet and Social Media for malign reasons will gradually stop. Not because they’re forced to or were told to, but because they will choose to.
You don’t even have to win a judgment to shut up a cyberbully. Once you’ve been served with process a couple of times for impersonating a celebrity, catfishing, or attacking somebody through a fake account, you stop.
Somehow incentivizing people to stop misbehaving online would be ideal, but the threat of winding up in court is, in my opinion, the most workable solution. I believe the best chance we have to stop the nonsense online is to regulate ourselves.