DeepFakes are Challenging our Perception of Reality

Image by Geralt | Canva

Humans cannot live 100% in truth every day, no matter how honest we claim to be. We must escape our realities to keep our minds active and sane. We take vacations to places that look and feel nothing like home. We get absorbed in all types of media, from great novels to shows and movies that create an alternate existence where we can aspire to be a superhero or just the average person who saves the day. While these escapes are a necessary part of who humans are, at some point we have to come back to earth and face our reality. We have bills to pay, bosses to deal with, and a society and political process to participate in, even though it can be… stressful.

 

However, when the truth and perception of reality that helps anchor us as productive citizens begin to erode, we have a serious problem on our hands. Our actual reality is in the process of being shattered, and it’s tearing us apart. If we don’t stop this, it will doom society as we know it.

 

Outside of escapism, misinformation and disinformation have always been part of human existence. From the Olympic runner lining up against Usain Bolt and telling themselves they're the best in the world and can beat the fastest man on the planet, to generals on the battlefield convincing the enemy they're weak in areas where they're actually strong, to kings and presidents trying to reassure a population after a disaster or scandal, we've always accepted this as normal operating procedure. These events could be considered one-offs or isolated, even if they recur with some relative frequency.

 

What humanity never had to deal with in the past, as we do now, is the largest information delivery system ever created, one that bombards billions of us daily with information, real or fake. While this has been a boon for collaboration and communication, it's also been the world's intelligence services' best friend. With most of us now getting our news in digital formats, mostly on social media, we are exposed to thousands of news stories daily. Add into this mix dubious to downright illegitimate “news” sources, and we end up with startling facts like this: 126 million Americans were reading Russian intelligence propaganda disguised as news in the lead-up to the 2016 election. And it’s getting worse.

 

Now we live in the era of DeepFakes, and with it comes an even more perilous situation for humanity: we are being taught not to believe the things we see or hear, especially if they don’t align with our views. Recently, Tesla CEO Elon Musk was in court, accused of making false claims in 2016 about his cars’ self-driving capabilities. Part of his defense was that because he’s a high-profile figure, he is susceptible to having DeepFake videos made of him that make it look like he said something he actually didn’t. In other words, maybe it’s not his fault, because the videos the plaintiffs relied on may not have been authentic. The legal system’s resiliency in verifying evidence kicked in, and the judge ordered Musk to be deposed to confirm whether he actually made the statements in the videos in question. Nevertheless, the seeds of doubt have been sown in a way that can do long-term damage to our mental health. Two January 6 defendants have attempted to claim that the verified and legitimate videos of them wreaking havoc and destruction in the Capitol Building were DeepFakes. Both were convicted.

 

While the courts have had processes to ensure evidence is legitimate for centuries, the general public doesn’t have these checks. Recently, Media Matters released videos of Tucker Carlson being rather inappropriate, and the almost immediate response from his supporters on Twitter was that these damning videos were all DeepFakes. This underscores the serious effect that confirmation bias has on a population. Imagine someone more important, like a president, being caught on video or audio doing something highly illegal. Now imagine all of his or her followers simply insisting it never happened because the footage must be fake. On top of this, we think we can spot DeepFakes ourselves, which is how Tucker Carlson’s supporters “know” those videos were fabricated. Except we humans stink at spotting DeepFakes, and study after study has proven it.

 

We are entering the era of the “Liar’s Dividend.”

 

This concept describes how a liar can deny the truth more easily. Law professors Bobby Chesney and Danielle Citron, who coined the term, wrote:

 

“Ironically, liars aiming to dodge responsibility for their real words and actions will become more credible as the public becomes more educated about the threats posed by deep fakes. Imagine a situation in which an accusation is supported by genuine video or audio evidence. As the public becomes more aware that video and audio can be convincingly faked, some will try to escape accountability for their actions by denouncing authentic video and audio as deep fakes. Put simply: a skeptical public will be primed to doubt the authenticity of real audio and video evidence. This skepticism can be invoked just as well against authentic as against adulterated content.”

 

This benefit “flows, perversely, in proportion to success in educating the public about the dangers of deep fakes.” Left unaddressed among an uneducated population, this dynamic will continue to deepen the political division already pervasive in American society until nothing one side says can be believed by the other. Now is the time to wake up and face this reality. We cannot afford to take an extended vacation away from literal facts.

 

So with that in mind, here are a few possible ways to combat this ever-growing onslaught of fake reality:

  1. Require all social media platforms to use, and continuously improve on, detection technologies that can spot DeepFakes and block them more effectively than a human could. It really does take an AI to defeat an AI, and while there would be some false positives, this would overwhelmingly cut back on disinformation.
  2. Do not allow AI to choose what does or does not go viral. Facebook’s AI notoriously promoted anger-filled political vitriol because it got the platform more clicks and therefore more ad revenue. That is horrific and actually helped foster a literal genocide. That needs to stop in all forms.
  3. Create and sustain an actual verification system just for advertisers that requires them to validate who they are and does not let them simply pay to play. When I originally went through the Twitter verification process to get the blue checkmark, before Elon Musk was interested in buying the platform, I actually had to send in a copy of my driver’s license and other information to validate who I was. Why can’t that apply to any account that wants to advertise? And since, thanks to point number two, the AI cannot make things go viral, we would have a verified playing field for political ads and more.
  4. Finally, educate everyone on this. We have seen successful campaigns thwart disinformation. This one, though, would focus on the possibilities and probabilities of DeepFake versus real media. We should naturally be skeptical, but we cannot allow conspiracy theorists to hijack the term “research” from us any longer.

 

In a war of DeepFake information, the only winners are the liars. Hopefully, the actual truth still has a chance.

_________________________________________________________________________________________________________________

Nick Espinosa

An expert in cybersecurity and network infrastructure, Nick Espinosa has consulted with clients ranging from small business owners up to Fortune 100-level companies for decades. Since the age of 7, he’s been on a first-name basis with technology, building computers and programming in multiple languages. Nick founded Windy City Networks, Inc. at 19; the company was acquired in 2013. In 2015, he launched Security Fanatics, a cybersecurity/cyberwarfare outfit dedicated to designing custom cyberdefense strategies for medium-sized to enterprise corporations.

Nick is a regular columnist, a member of the Forbes Technology Council, and on the Board of Advisors for both Roosevelt University & Center for Cyber and Information Security as well as the College of Arts and Sciences. He’s also the Official Spokesperson of the COVID-19 Cyber Threat Coalition, Strategic Advisor to humanID, award-winning co-author of a bestselling book, TEDx Speaker, and President of The Foundation.

