YouTube Lets Me Refine My Hate to a Razor-sharp Pitch


A prior article on this site highlighted how Internet algorithms insidiously tailor the information presented to us based on what we have viewed previously, pushing our buttons and fostering bias. From sites that do this we don’t see objective information; we see information that reinforces a prior point of view, whether right or wrong.

This point has been effectively underscored by a compelling series of Twitter posts from author Ferrett Steinmetz. In the excerpts below – lightly edited for language – he begins by describing the problem in the context of a video game, but read on for the larger political point.

You know what I hate? BioWare's new game, Anthem. I've never played it, yet I feel BioWare betrayed me; I wanted a new Dragon Age, and what I got was this generic loot-and-shoot.

YouTube lets me refine my hate to a razor-sharp pitch.

Because after I watched a couple of bad reviews on Anthem, feeling justified in my nerdy pissiness, YouTube kept suggesting hateful reviews of Anthem. The suggested reviews stretched into long essays, trashing every crappy aspect of its design...

...and with each video I found fellow nerds nursing a grim hate for this game, this disappointing game, this [lousy] game. YouTube kept flagging new reasons for me to despise this stupid game, and those reasons seemed both outrageous and interesting, so hey, might as well watch.

Even now, YouTube has three "Here's a NEW reason why Anthem is trash!" videos lined up for me. And I wasn't even trying hard. Neither was YouTube. I simply revealed a minor bias, and YouTube did its damndest to reinforce my worldview. Which is the problem.

If you hate something, YouTube (and its creators!) realizes it's gotta find something new to hook you, so the algorithm and the creators team up to find a fresh and spectacular take on Why That Thing You Disliked Was Also Awful, and man, can they deliver.

So you have these tasty nuggets of "My own biases, confirmed!" brought hot and piping to your door, and unless you specifically avoid those videos, YouTube will do its damndest to deepen and widen those biases.

Now: What happens when it's not just a videogame? What happens when it's not "Let's crap on Anthem," but "Let's crap on Muslims?" I mean, we all have biases. But when there's a platform that, unconsciously, sets out to provide you evidence for whatever you desire, well...

I think we're seeing the effects of YouTube and other social media spreading out. And it's not pretty. And nobody cares, because yeah, maybe these [bad actors] go out and shoot someone, but they also brought in advertising dollars and eyeballs, so that's okay.

YouTube, Facebook, Twitter, et al. were not designed to foster hatred, and they did not create zealotry. They have “merely” exacerbated the problem significantly.

Pushed into silos that provide only supporting information, we become convinced that we must be right, and therefore anyone who disagrees must be wrong. Any capacity for accommodation, or for seeking to understand why “the enemy” holds the views they do, becomes impossible. Nothing gets done because the two sides are busy conducting warfare.
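The feedback loop itself is simple enough to sketch. The short Python simulation below is purely illustrative – the topic names and numbers are invented, and real recommendation systems are vastly more sophisticated – but it shows how an engagement-maximizing loop turns a slight initial preference into a feed dominated by a single topic.

```python
import random

# A toy feedback loop (illustrative only): every video the user watches
# boosts that topic's weight, so future suggestions drift toward whatever
# the user engaged with first.

TOPICS = ["anthem_complaints", "cooking", "astronomy", "woodworking"]

def recommend(weights):
    """Pick a topic in proportion to its learned weight."""
    return random.choices(TOPICS, weights=[weights[t] for t in TOPICS])[0]

def simulate(clicks=20, boost=2.0):
    # Start with only a slight lean toward one topic -- the "minor bias".
    weights = {t: 1.0 for t in TOPICS}
    weights["anthem_complaints"] = 1.5
    feed = []
    for _ in range(clicks):
        topic = recommend(weights)
        feed.append(topic)
        weights[topic] *= boost  # engagement reinforces itself
    return feed

if __name__ == "__main__":
    random.seed(1)
    print(simulate())
```

Run it a few times: whichever topic happens to attract the first few clicks quickly crowds out everything else, which is exactly the silo described above.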

At its worst, hate builds upon hate to the point where individuals are brainwashed into thinking that extreme measures are required. In describing the Christchurch, New Zealand, mass murder, The New York Times reported: “The attack was teased on Twitter, announced on the online message board 8chan and broadcast live on Facebook.”

Extremism and an unwillingness to see the world through a clear lens are problems whether or not an individual becomes a murderer. Filmmaker Alex Gibney uses the term “prison of belief”: an individual gets so wrapped up in thinking something is true, and wanting so much for it to be true, that they lose sight of what is actually true. This is what the Internet has become for too many: a dungeon of reinforcing, hurtful ideas that “imprisons” the mind into believing what cannot be objectively supported. Because of the continual reinforcement, there is little means of escape.

A significant minority appear to have fallen into the trap, with some becoming so enraged that they resort to violence.

Could these recommendation algorithms be redesigned to provide links to objective, offsetting information that users could access if they chose? Could they be modified to attach a “grade of truthfulness” to sites or postings that appear suspect? Solutions will not be easy, and if not carefully considered they could bring consequences of their own. The problem, however, is serious enough that improvements should be actively pursued.
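To make those two ideas slightly more concrete, here is a rough, purely hypothetical sketch – no platform works this way today, and the credibility grades and function names are invented – of a recommender that deliberately surfaces material from outside a user’s dominant topic some fraction of the time and attaches a rough truthfulness grade to each item.

```python
import random

# Hypothetical sketch of the ideas above -- not any platform's real feature.
# The credibility grades are invented placeholders for whatever a
# fact-checking layer might assign.

CREDIBILITY = {
    "anthem_complaints": 0.4,
    "cooking": 0.9,
    "astronomy": 0.95,
    "woodworking": 0.9,
}

def recommend_with_offset(weights, offset_rate=0.25):
    """Mostly follow the learned weights, but some fraction of the time pick
    the least-weighted topic so the user sees something outside their silo."""
    topics = list(weights)
    if random.random() < offset_rate:
        topic = min(topics, key=weights.get)  # deliberately counter the bias
    else:
        topic = random.choices(topics, weights=[weights[t] for t in topics])[0]
    return topic, CREDIBILITY[topic]

if __name__ == "__main__":
    random.seed(2)
    learned = {"anthem_complaints": 8.0, "cooking": 1.0,
               "astronomy": 1.0, "woodworking": 1.0}
    for _ in range(8):
        print(recommend_with_offset(learned))
```

Even a toy version like this raises the hard questions: who assigns the grades, and how often can a feed push back against a user’s preferences before the user simply leaves?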