Clear Views about Clearview AI
Privacy Plus+
Privacy, Technology and Perspective
This week, we highlight the Vermont Attorney General’s recently filed lawsuit against Clearview AI (“Clearview”). For all of the details, we recommend reading Section II of the complaint in full. A link to the complaint follows:
https://ago.vermont.gov/wp-content/uploads/2020/03/Complaint-State-v-Clearview.pdf
Briefly, Clearview is alleged to have commercialized facial recognition technology by amassing an enormous database of photographs scraped from websites, including Facebook, LinkedIn, Google, and Twitter, in violation of those websites’ terms of service. Law enforcement agencies and others have reportedly used Clearview’s app to identify individuals, including children, within seconds and without their notice or consent.
Clearview is already subject to at least one class-action suit filed under the Illinois Biometric Information Privacy Act, alleging that Clearview “actively collected, stored and used Plaintiffs’ biometrics — and the biometrics of most of the residents of Illinois — without providing notice, obtaining informed written consent or publishing data retention policies.” You can read the text of that complaint by clicking on the following link:
https://www.scribd.com/document/447080068/Clearview-AI-BIPA-Lawsuit
Both complaints seek statutory penalties, other monetary damages, and injunctive relief for residents of Vermont and Illinois, respectively. The Vermont complaint also specifically seeks disgorgement of Clearview’s profits and deletion of all Vermont consumers’ photographs and facial recognition data from the Clearview database.
As governments around the world become increasingly reliant on mass surveillance, including the use of facial recognition technology, to contain the spread of COVID-19, they must ensure that those technologies are reliable and be transparent with their citizens about which technologies are being used. Before any government (or company) deploys any facial recognition tool, in particular, it should perform due diligence, inquiring into these subjects, among others:
(1) Consent. Realize that all algorithms underlying facial recognition tools must be trained on databases. The personal data in those databases, including photos and individual names, should be obtained by lawful and fair means and, where appropriate, with the knowledge and consent of the data subject.
(2) Data quality. A database is only useful if the information in it is relevant to the purpose for which it is to be used, accurate, complete, and up-to-date. For example, a facial recognition database might need only names and photos. What personal data does the database contain? How is the accuracy of that data ensured?
(3) Privacy. Data subjects should be told why their personal data is being collected at the time it is collected, and should be allowed to consent (or not) to having their data collected and used for certain defined purposes, shared with certain defined parties, and retained for a certain defined time. What kind of privacy notice was provided to the data subjects? Who provided it? And when was it provided?
(4) Algorithm Accuracy. The algorithm(s) underlying the facial recognition tool should be reliable and able to identify people of different sexes, ages, and racial backgrounds accurately. What kind of testing has been performed on the tool to eliminate bias in its outputs?
(5) Security. Both the database and the tool should be protected by reasonable security safeguards to mitigate risks such as loss or unauthorized access, destruction, use, modification or disclosure of personal data. What are the security controls in and around the database? Was the tool developed using privacy- and security-by-design? Here, especially consider encryption, access controls, testing and monitoring, and policies, procedures, and security training that may be in place.
(6) Transparency. There should be a general policy of openness with respect to the processing of personal data. Data subjects should know who controls the database and how and by whom the tool is being used. If you are a governmental agency considering the use of a particular tool (or any technology tool, for that matter), review the vendor’s privacy notice and ask questions.
In short, privacy and security considerations should factor prominently into procurement decisions involving any technology that includes information systems and data collection. No privacy-invasive technology should be deployed absent documented assurance that privacy and security issues have been identified and adequately addressed.
---
Hosch & Morris, PLLC is a Dallas-based boutique law firm dedicated to data protection, privacy, the Internet and technology. Open the Future℠.