Freezing Facial Recognition – Let’s Revisit

Privacy Plus+

Privacy, Technology and Perspective

On May 25th, we urged, “it’s time to freeze facial recognition.” In our post, we argued that weaknesses in the data supply chain, cybersecurity risks, and the complexities of commercializing biometric information warrant a moratorium on the use of such technologies by both governments and private companies, pending legislation regulating their use in ways consistent with our nation’s values. You can read our post here:

https://www.hoschmorris.com/privacy-plus-news/privacy-plus-may-25-2019

This week, the New York Times’ Privacy Project similarly advocated that “Congress must declare a national moratorium on the use of face-recognition technology until legal restrictions limiting its use and scope can be developed.” You can (and should) read Clare Garvie’s NYT opinion piece and watch her video op-ed: “You’re in a Police Lineup, Right Now” by following this link:

https://www.nytimes.com/2019/10/15/opinion/facial-recognition-police.html

Focusing on governments’ use of face-recognition technology, Ms. Garvie highlights the fact that such technologies may violate our constitutional rights and are certainly subject to manipulation and bias.

The technology has improved (one study found it improved twentyfold between 2014 and 2018), but it is still nascent and prone to error. Recently, the American Civil Liberties Union (“ACLU”) tested Amazon’s facial-recognition software and found that it falsely identified 28 members of Congress as having been arrested for a crime. The false matches included such conservative stalwarts as U.S. Senators John Isakson of Georgia and Pat Roberts of Kansas, but they fell disproportionately on people of color. To read about the ACLU’s findings, you can follow this link:

https://www.aclu.org/blog/privacy-technology/surveillance-technologies/amazons-face-recognition-falsely-matched-28

The disturbing nature of face-recognition technology has been explained even more thoroughly by The Guardian’s Ian Sample. You should read his article, “What is facial recognition - and how sinister is it?”, which is available here:

https://www.theguardian.com/technology/2019/jul/29/what-is-facial-recognition-and-how-sinister-is-it

Mr. Sample writes, in part, about China’s use of facial recognition for racially profiling, tracking, and controlling its population of Uighur Muslims. China has essentially weaponized face-recognition technology, using it to apprehend and intern as many as one million people.

Closer to home, the use of face-recognition technology by the Detroit police department (which we highlighted in our May 25th post) appears now to be focused on public housing projects, which has understandably prompted a backlash:

https://www.nytimes.com/2019/09/24/us/politics/facial-recognition-technology-housing.html

(Incidentally, as of the date of this post, Detroit’s service provider, DataWorks Plus, still does not have a privacy policy posted on its website.)

Faces aren’t all. Have you ever noticed the inclusion of “gait analysis” in the California Consumer Privacy Act (“CCPA”)? Artificial neural-network technology can also be used to identify people by the way they walk. As Mr. Sample points out, this is very interesting to police, because it can be done at a distance without the subject’s cooperation (and, we note, without their knowledge and consent, or a warrant).

We recognize that this technology’s safety-and-protection impact can be immediate and profound, publicly and privately. Taylor Swift is said to use it at concerts to protect herself from stalkers in her audiences. The following article from Rolling Stone will tell you more:

https://www.rollingstone.com/music/music-news/taylor-swift-facial-recognition-concerts-768741/

Regardless, we can imagine a compelling use of such surveillance technologies in domestic-abuse cases for the purpose of enforcing restraining orders against abusers. But whatever the benefits, the risks that this technology poses to people of every age, race, and temporary mood – risks of police invasion, abuse, techno-racism, and the burden of yet another layer of unseen oversight over our lives – require that it be used only with appropriate transparency, safeguards, and accountability.

We repeat our May 25th call for a moratorium on the use of neural-network technology for personal recognition, while the nation discusses and develops legislation covering the development and use of face-recognition and other biometric identifying technologies by both governments and private companies.

This matter is urgent. Congress should establish a select committee and set a deadline – say, one year.

And for members of the committee? Why, the 28 falsely identified members of Congress, of course!

Hosch & Morris, PLLC is a Dallas-based boutique law firm dedicated to data protection, privacy, the Internet and technology. Open the Future℠.
