The Digital Privacy - Artificial Intelligence Conundrum
Privacy Plus+
Privacy, Technology and Perspective
This week, let’s consider the future of U.S. privacy on the eve of not just the upcoming election, but also of the largest IPO in history, which Ant Group is set to record next week.
Ant Group’s IPO, Artificial Intelligence (AI), and Privacy Obliteration:
On November 5th, Ant Group, the financial technology company affiliated with the Chinese e-commerce giant Alibaba (BABA), is expected to raise over $34 billion, surpassing even Saudi Aramco’s $29.4 billion IPO last December. Ant is one of the biggest technology firms in the world and operates Alipay, the largest online payment platform in China. Much of Ant’s growth has been driven by the increasing pace of digitization. For example, according to Forbes, the Alipay mobile payments app had more than one billion annual active users as of June and processed $17 trillion worth of transactions in mainland China over the course of a year.
That is only where it starts. Alipay’s payment processing is just a fraction of Ant’s business, which also includes money market funds, loan services, and an AI-driven credit-scoring system that rates the trustworthiness and creditworthiness of its users based on the volumes of data Alipay collects. Many (if not all) of Ant’s services rely on huge amounts of data collected through Alipay, from location information to whether users pay their bills on time. Such massive data-collection technologies are privacy-intrusive, if not privacy-obliterating altogether.
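To make concrete why an AI-driven scoring system of this kind is so data-hungry, here is a minimal, purely hypothetical sketch in Python. The features (on-time payment rate, transaction volume, location diversity) and the weights are our own illustration of how behavioral signals from a payments app could be reduced to a single score; they are not drawn from Ant’s actual system.

```python
# Purely illustrative toy model -- not Ant's actual system. It shows how a few
# behavioral signals harvested from a payments app might be folded into a
# single "creditworthiness" score.
from dataclasses import dataclass
import math


@dataclass
class UserActivity:
    on_time_payment_rate: float  # share of bills paid on time (0.0 to 1.0)
    monthly_transactions: int    # volume of payments processed through the app
    distinct_locations: int      # location diversity inferred from the app


def toy_credit_score(u: UserActivity) -> float:
    """Squash hypothetical weighted signals into a 0-1000 score."""
    z = (4.0 * u.on_time_payment_rate               # reward punctual bill payment
         + 0.01 * min(u.monthly_transactions, 500)  # reward activity, capped
         + 0.05 * min(u.distinct_locations, 20)     # reward location diversity, capped
         - 3.0)                                     # baseline offset
    return round(1000 / (1 + math.exp(-z)), 1)      # logistic curve scaled to 0-1000


print(toy_credit_score(UserActivity(0.98, 240, 6)))  # active, punctual user scores high
print(toy_credit_score(UserActivity(0.55, 12, 1)))   # thin, spotty history scores low
```

Even this toy version only works if the platform keeps logging every payment and movement, which is precisely the privacy trade-off described above.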
But one blunt truth is that whatever country leads in AI, soon to be powered by quantum computing, deployment of which at scale is expected in the next few years, is likely to lead in nearly everything else. A second blunt truth is that Big Data is the essential fuel for leadership in AI, which must continually process huge volumes of data in order to improve and develop. And the third blunt truth is that China’s massive, nationwide social-monitoring and “social credit” system, and its integrated databases of country-wide personal health information, provide China with some of the largest Big Data resources in the world.
The challenge, then, is how to reconcile privacy interests with the urgent national need to lead in AI.
GDPR and its Impact on AI:
In this context, we hear arguments that the United States, China’s only peer in AI, should consider following the European Economic Area’s (EEA) lead by adopting, at least in part, the European Union’s stringent General Data Protection Regulation (GDPR) throughout this country in order to protect the privacy of personal data. Even though we strongly support consumer privacy rights, like those protected under the GDPR, we disagree.
In our view, GDPR-like provisions largely relate to administrative controls and privacy notices, which are specific and detailed. But despite all the “plain English” requirements, their very detail causes them (as Winston Churchill once described a 50-page memo) to “defend [themselves] against the risk of being read.” Not being read (at least by the data subjects who are their supposed beneficiaries), they are illusory, and not adequate, or even helpful, in protecting digital privacy. Further, it seems to us that we are still a long way, in this country, from a national consensus on what privacy ought to mean and in what circumstances it ought to be protected. The regulatory regimes are well intended, and we certainly agree that neither fraud nor surprise should be tolerated, much less encouraged. But we also conclude that the GDPR’s strong regulatory scrutiny, detailed requirements, and hefty fines may actually deter innovators from harnessing the power of Big Data and thus hinder them from developing the full potential of AI.
How to Balance Privacy and AI:
As for preserving privacy in the United States, we are far more focused on the security and use of personal information, and on the companies that apply AI to big data containing personal information. We question whether any law can fully address the scope of the privacy problem posed by AI solutions.
After all, algorithms are not transparent, and efforts to address the problem of enabling transparency, like the UK’s Project ExplAIn, do not create meaningful transparency or accountability with respect to the processes, services, and decisions delivered by AI.
Our concern is this: if the U.S. wants to compete with China (and it had better), it must foster an AI-friendly business environment. Creating a complex legal framework related to privacy in this context raises numerous policy questions. From a legal perspective, one thing is sure: our democracy is served best by those who deploy AI responsibly. With that in mind, shouldn’t companies entrusted with citizens’ personal information have a duty to consider the public’s interest as well as their own profit when using AI solutions? And wouldn’t it be constructive if the privacy problem here could be solved technically, perhaps with a new solution that provides data subjects with the ability to reclaim their personal information and the “insights” derived from it?
---
Hosch & Morris, PLLC is a Dallas-based boutique law firm dedicated to data protection, privacy, the Internet and technology. Open the Future℠.