From every Google search conducted to an individual’s genetic makeup, AI systems are continuously collecting vast swaths of personal data from internet users, creating an individualized data profile that can exceed five gigabytes, or roughly three million Word documents. Even though most of this data is collected by corporations, Edward Snowden, a former CIA employee and NSA contractor, demonstrated that the US government could access these corporations’ data systems and use them for a multitude of purposes.
As incremental privacy invasions progress throughout the US, personal privacy legislation blocking the use of AI systems for mass surveillance must be put forth. Such legislation would lessen the potential of the US government determining a citizen’s life opportunities: the freedom to make choices that influence the outcomes an individual deems necessary to achieve their personal goals (e.g., pursuing a certain degree, having children, maintaining work-life balance).
There are two competing definitions of privacy: control and access. The control argument holds that a loss of privacy occurs when an individual loses control over their information (i.e., an entity holds your information, even if it never accesses it), whereas the access argument holds that a loss of privacy occurs only when the information is actually accessed. When people frame invasion of privacy as a matter of “access,” most mean a human accessing the data; however, as Snowden’s leaks showed, the US operated AI systems that searched control accounts for keywords, which is, by definition, accessing that information, whether the agent is human or machine. Therefore, past US mass surveillance practices satisfy both the control and access arguments. Because one must have control to gain access, policies that curb government “control” of individual data accounts must be explored.
(1) The Right to be Forgotten
Because the US government can use backdoor encryption methods to access private corporations’ data, a logical first line of defense would be to implement policy much like the European Union’s General Data Protection Regulation (GDPR), giving consumers more control over their personal data.
The GDPR lays out the standards for personal data processing and collection: citizens must give “clear and affirmative consent,” and they hold the right to know who is processing their data and for what reason, the right to be forgotten and to delete personal data, the right to transfer data to another provider, and the right to know when their data has been hacked. The right to be forgotten, paired with the ability to delete personal data, is the GDPR’s inherent defense against potential government data control.
In the context of US mass surveillance, implementing “the right to be forgotten” in the US would act as a defense mechanism, allowing consumers to erase their data from corporate databases and thereby erasing the control account the government can access. US citizens want more control over their data and who has access to it: according to Pew Research, 93% of Americans say that being in control of who can get information about them is important, and 90% want more control over what is collected about them.
The focal point of this argument is that by combating the root issue, corporations maintaining the control account of an individual, backdoor encryption by the US government would be worthless on a large scale. Implementing policy such as “the right to be forgotten” is not meant to eliminate the collection of consumer information; rather, the goal is to give citizens more control over how their information is handled, providing a defense mechanism against involuntary data collection.
(2) Data Pseudonymization
Data pseudonymization, a large component of the EU’s GDPR, is the processing of personal data in such a manner that the data can no longer be attributed to a specific data subject without the use of additional information. Personal identifiers are stored separately from the collected data, allowing the data to be de-anonymized if needed (this would happen only in extreme circumstances). Data pseudonymization should not be confused with data anonymization, which processes data so that de-anonymization is impossible.
Data pseudonymization would allow private organizations to continue collecting data on individuals to enhance their business while cutting short their use of private data. Segregating the data serves a protective function: a breach of one system would be less critical because it yields an incomplete data set. For US surveillance activities, backdoor encryption would not provide the private identifiers, weakening any ability the US may possess to de-anonymize that data.
A conflicting viewpoint of this policy may contend that the US already possesses citizens’ private data and therefore would not require organizations’ private identifiers to de-anonymize the data. Organizations could maintain the segregation using keyed cryptographic hash functions (a “lock” that requires a “key” to access the information). Further, a risk associated with this policy surrounds the US government’s ability to access the private data servers themselves (regardless of data server separation), which would completely undermine the purpose of pseudonymization.
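As an illustrative sketch only (not drawn from any specific statute or deployed system), the “lock and key” segregation described above can be implemented with a keyed hash such as HMAC-SHA256: records are stored under a pseudonym, while the key and the identity mapping live in separate, access-controlled systems. All names here (the key, the vault, the record) are hypothetical.

```python
import hashlib
import hmac

# Hypothetical secret key: the "key" to the "lock." In practice it would be
# held by a separate custodian, not stored alongside the collected data.
SECRET_KEY = b"example-key-held-by-a-separate-custodian"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed-hash pseudonym (HMAC-SHA256).

    Without SECRET_KEY, the pseudonym cannot be recomputed or linked back to
    the person, which is what distinguishes this from a plain, unkeyed hash.
    """
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

# Collected data is stored against the pseudonym only...
record = {"user": pseudonymize("alice@example.com"), "searches": ["privacy law"]}

# ...while the pseudonym-to-identity mapping is kept in a separate system, so
# a breach of the record store alone yields an incomplete data set.
identity_vault = {pseudonymize("alice@example.com"): "alice@example.com"}

# The same input always yields the same pseudonym, so records can be joined,
# but de-anonymization requires access to both the vault and the key.
assert record["user"] in identity_vault
```

Because the pseudonym is deterministic for a given key, an organization can still link records about the same person for business purposes, while de-anonymization requires compromising two separately guarded systems rather than one.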
(3) Expansion of The Kyllo Standard
The Kyllo Standard was created through the US Supreme Court’s decision in Kyllo v. United States, in which law enforcement, from public property and without a warrant, monitored an individual’s home using “sense-enhancing” technology (in this case, infrared imaging to detect heat from a marijuana grow operation). The Supreme Court held that this type of surveillance, using technology not in general public use, constituted a Fourth Amendment search and was impermissible without a warrant.
Kyllo v. United States focuses on law enforcement’s intentional, real-time use of sense-enhancing technology. It does not, however, address the use of AI systems, non-human agents, in choosing what or whom to monitor. An example of mass surveillance using AI systems in a manner the Kyllo Standard would not cover appears in the Black Mirror episode “Hated in the Nation”: small drones, built to substitute for extinct bees, use sense-enhancing technology (e.g., facial recognition, thermal imaging) to monitor the entirety of the United Kingdom’s citizens in public and in private. As AI systems become more advanced and prevalent within law enforcement, this gap needs to be addressed to curtail potentially unconstitutional searches of citizens.
An expansion of the Kyllo Standard is needed to articulate that visual surveillance of a US citizen’s private property must be limited to the visible part of the electromagnetic spectrum. Further, AI systems using non-human visual enhancements (e.g., facial recognition, infrared) in public must have probable cause to monitor and must refrain from collecting data about non-targeted individuals.
Edward Snowden’s mass surveillance revelations demonstrate that data privacy rights in the US should be extended to ensure constitutional coverage and limit nefarious government activities. Even though certain data protection policies have been implemented, the US can still compile control accounts and retains the ability to access them; in 2018 the NSA had to purge hundreds of millions of records collected since the USA Freedom Act’s implementation. Each recommendation here is meant to deter mass surveillance by expanding consumer control (“The Right to be Forgotten”), dissociating personal identifiers from data (Data Pseudonymization), and limiting non-visual sense-enhancing technology used by law enforcement (Expansion of The Kyllo Standard). As technological advancements in AI reach new heights, now is the time to ensure that privacy remains an essential right within the United States.