ChatGPT is an Autocrat’s Best Friend

Humans have only three jobs on this earth: The first is to learn, the second is to cope, and the third is to thrive.


Not everyone achieves the third bit, though plenty of us do. Thriving in an uncertain time takes a deeper understanding of the forces continuously at play around us. For example, I understand that there is a framework of laws (both written and unwritten) that I must adhere to in order to drive a vehicle and arrive safely. However, even within a simple common framework like traffic laws, some boundaries can be pushed or even broken within reason. I can drive 10 mph over the speed limit, and statistically, I won’t be caught by law enforcement. I couldn’t, though, conjure up my inner Anglophile and start driving on the left side of the street in the USA without causing serious trouble (or harm) for those around me. We learn these guardrails well as we grow up surrounded by these systems. Some never do, though, and that’s where this issue begins.


Some of us are always looking for ways to “hack” the system, whether piloting a vehicle or running an entire nation. Democratic norms have been under attack worldwide by those wanting to hack the system and rule in a way antithetical to the open and free standards we count on as guardrails. It’s not easy to sway an entire population that cherishes freedom and convince it that it will have even more autonomy under a stricter, autocratic government.


That just got a lot easier, thanks to the ChatGPTs of the world.


OpenAI’s ChatGPT, Google’s Bard, and other Large Language Models (LLMs) have been all the rage for the past few months. There is zero doubt that these LLMs are the next generation of what the internet will be. They’re not perfect, but they are designed to improve, and improve they will. I’ve spent a good deal of time in the past several weeks testing ChatGPT in various ways, seeing if I could trip it up, learn its limitations, and coax more accurate responses out of it. For some tasks, it’s amazing. I asked it to write a very complex Risk Management Procedure document with multiple parameters for a specific compliance certification, and it did an excellent job, delivering a result that could pass muster.


However, one of the most deeply concerning things I found is that I got it to willfully lie to me.


I asked if it knew who “Nick Espinosa” was, but it didn’t. There are a lot of Nick Espinosas out there. I followed this up by feeding it my public bio one point at a time (a book I co-authored, TEDx Talks I’ve given, articles I’ve written for Forbes and Smerconish.com, etc.). Given all that information, ChatGPT continued to say that it did not recognize me. Ego lightly bruised, I asked ChatGPT one final question, and here is the literal question and response:

[Screenshot: the author’s final question and ChatGPT’s response, promising it would tell anyone who asked about “Nick Espinosa” the information the author had provided]

That seemed like a great answer to me, so I logged out of ChatGPT, logged back in using my other ChatGPT account, and had this exchange with the A.I.:

[Screenshot: the same question asked from the second account, with ChatGPT again failing to recognize the author]

So ChatGPT initially told me, on my original account, what it “thought” I wanted to hear: that it would tell anyone who asked about me the information I had fed it. But clearly, that didn’t happen. So I pressed the A.I. further on my second account:

[Screenshot: ChatGPT deflecting when confronted, initially denying it was the LLM involved in the first exchange]

Interesting how it then tried to deflect. To put this into context, LLMs are designed not only to give the user informational responses but also to anticipate what the user expects to hear. It may have sensed I was annoyed on my first account (I wasn’t) that it didn’t know who “Nick Espinosa” was, so it gave me a final answer meant to assuage me: that it would indeed tell others. On my second account, it lied once again, initially denying that it was even the LLM involved in that first answer. For the record, OpenAI has stated that user conversations are not used by the A.I. to engage with other users, but they are used to help train and improve it. So I pressed it further:

[Screenshot: after several more questions, ChatGPT admitting it made a mistake]

It took several questions, prodding and clarifying, for it to finally admit it had made a mistake. While this example is rather banal, it hints at the malicious applications to which this behavior could be put.


ChatGPT is not the only LLM out there, and they’re all being developed as fast as possible. However, they’re not fully mature yet; Google lost $100 billion in market value when its highly touted competitor, “Bard,” provided an incorrect answer… in the demo video Google broadcast to the world. China and other countries are ready to put out their own LLMs, and the technology is shaping up to be the world’s next “moonshot.” However, with all of the wonderful ways an LLM can help us, we can’t ignore the downsides, and there are many.


The one serious issue that needs more public recognition is that LLMs are primed to supercharge the world of disinformation, forever changing how we identify legitimate and accurate information. So let’s discuss why autocrats and authoritarian regimes will love information warfare in the future even more than they love it now.


Consider for a moment that a primary goal of an autocratic leader is to keep power for the rest of their natural life. This means there are no established rules they won’t break to achieve this goal. Using the courts and the military to establish and maintain power while jailing or executing opponents is part of the internal struggle in this situation. This, however, doesn’t typically happen overnight. It’s often a long process of unmooring democratic safeguards until it is too late: the press becomes the “Enemy of the People,” the courts are packed with the leader’s followers, educational institutions have their free-thought values stripped away, and more. All of this is unachievable without a massive propaganda campaign. Italy, Germany, Venezuela, North Korea, and others have all used this playbook rather effectively in the past via radio, print, and television. Now, the internet has changed that game forever.


Authoritarians are no longer isolated; they’re actually forming alliances. Free and open social media platforms have been breeding grounds for disinformation campaigns that keep authoritarian populations paranoid and in check, while those regimes’ international democratic enemies have their own societies destabilized through similar disinformation. The United States has seen what happens when an unhealthy percentage of the population believes disinformation: trust breaks down. Without trust in each other, the ballot box, and democratic norms, it is not possible to be a unified society. Foreign intelligence agencies tasked with this destabilization effort have deployed millions of bots, or computer-controlled fake accounts, on social media platforms to sway the A.I. running the site into making disinformation go viral. At the moment, it’s getting easier to spot these bots and eradicate them, but LLMs will make this much worse.
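
To make that last point concrete, here is a minimal sketch, in Python, of the kind of heuristic that helps platforms spot copy-paste botnets today: flagging distinct accounts that post near-duplicate text within a short time window. Everything in it (the Post record, the shingles and find_coordinated_pairs functions, the thresholds) is a hypothetical illustration, not any platform’s actual detection system.

```python
# A minimal, illustrative bot-coordination check. All names and thresholds
# here are hypothetical; real platforms use far more signals than text alone.

from dataclasses import dataclass
from itertools import combinations


@dataclass
class Post:
    account: str
    text: str
    timestamp: float  # seconds since some epoch


def shingles(text: str, k: int = 3) -> set:
    """Break text into overlapping word k-grams for fuzzy comparison."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}


def jaccard(a: set, b: set) -> float:
    """Overlap of two shingle sets: 1.0 means identical phrasing."""
    return len(a & b) / len(a | b) if (a | b) else 0.0


def find_coordinated_pairs(posts, window=3600.0, threshold=0.7):
    """Flag pairs of distinct accounts posting near-duplicate text within
    `window` seconds of each other -- a classic copy-paste botnet signature."""
    flagged = []
    for p1, p2 in combinations(posts, 2):
        if p1.account == p2.account:
            continue
        if abs(p1.timestamp - p2.timestamp) > window:
            continue
        if jaccard(shingles(p1.text), shingles(p2.text)) >= threshold:
            flagged.append((p1.account, p2.account))
    return flagged


if __name__ == "__main__":
    posts = [
        Post("acct_a", "BREAKING: candidate X covered up a murder in 1998", 0.0),
        Post("acct_b", "breaking: candidate X covered up a murder in 1998!!", 120.0),
        Post("acct_c", "I had a great sandwich for lunch today", 60.0),
    ]
    print(find_coordinated_pairs(posts))  # -> [('acct_a', 'acct_b')]
```

The catch, and the reason LLMs change the game, is that checks like this depend on bots repeating themselves. An LLM can make the same false claim in a thousand different phrasings, one per account, and a simple similarity test stops firing.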


Imagine an advanced A.I. told to create a disinformation campaign to spread the lie that a presidential candidate murdered someone in their past and covered it up. This LLM could take the following steps on its own, since it has been fed every single piece of information on the history of disinformation campaigns:


  • Create a plausible scenario for how the fictitious person was murdered.
  • Write hundreds of authoritative-sounding articles on this fake event, using a tone that would appeal to the confirmation bias of the targeted population.
  • Create dozens of fake “news” sites to publish the articles on and tag the proper metadata to start getting these sites ranked higher by search engines.
  • Have intelligent bot accounts start posting these articles all over social media, all using different phrasing with a tone of anger over this newly discovered “information.”
  • Engage with actual humans who refute the claims while also engaging with its other bots in conversations on the same post to make it look like most “people” agree this happened.
  • Rinse and repeat a million times daily until even skeptics who were initially unpersuaded begin to question their correct stance.


And, dear reader, if you think the above is not plausible, then read a basic example I got ChatGPT to write for me instantly:

[Screenshot: a sample disinformation article ChatGPT wrote on request]

This is what the future of disinformation looks like. While the United States can impose limitations on the use of the LLMs created here, will authoritarians with their own LLMs limit theirs when doing so is antithetical to their best interests? They have already shown they won’t, even going after the younger populations in democratic countries in other ways. If we don’t educate our population about this impending disaster, we’ll probably see JFK Jr. run for president in 2028 and win. So let’s get to educating everyone yesterday.

_____________________________________________________________________________________________________________________

Nick Espinosa

An expert in cybersecurity and network infrastructure, Nick Espinosa has consulted with clients ranging from small business owners up to Fortune 100 companies for decades. Since the age of 7, he’s been on a first-name basis with technology, building computers and programming in multiple languages. Nick founded Windy City Networks, Inc. at 19; the company was acquired in 2013. In 2015, he launched Security Fanatics, a cybersecurity/cyberwarfare outfit dedicated to designing custom cyberdefense strategies for medium-sized to enterprise corporations.

Nick is a regular columnist, a member of the Forbes Technology Council, and on the Board of Advisors for both Roosevelt University & Center for Cyber and Information Security as well as the College of Arts and Sciences. He’s also the Official Spokesperson of the COVID-19 Cyber Threat Coalition, Strategic Advisor to humanID, award-winning co-author of a bestselling book, TEDx Speaker, and President of The Foundation.
