Facial Recognition Banned in San Francisco

Privacy Plus+

Privacy, Technology and Perspective 

Facial Recognition Banned in San Francisco:  This week, San Francisco became the first US city to ban the use of facial-recognition technology by police and local government agencies.

While valuable for responsible security uses -- imagine receiving a credible warning that known terrorists are about to disrupt a game at Candlestick Park -- the potential for abuse is evident. China is reportedly using facial-recognition technology to track virtually every aspect of its citizens’ lives (even through special sunglasses worn by police officers), and particularly to racially profile, track, and monitor the Uighurs, a mostly Muslim minority.

https://www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html

In a recent article in MIT’s Technology Review, however, Angela Chen notes that San Francisco’s ban on use by local government fails to address an even “thornier problem”: the use of these tools by private companies. Seeing who’s at the door before you open it is one thing; so, arguably, is monitoring access to secure, sensitive areas like hospital operating rooms, where entry must be strictly controlled. But not every private use will be responsible or benign, or even just not creepy.

https://www.technologyreview.com/s/613536/facial-recognition-ban-san-francisco-surveillance-privacy-private-corporate-interests

You might consider Facebook’s privacy policy (which Facebook actually calls its “data policy”) on this point (https://www.facebook.com/policy.php):

Face recognition: If you have it turned on, we use face recognition technology to recognize you in photos, videos and camera experiences. The face-recognition templates we create may constitute data with special protections under the laws of your country….

Facebook’s data policy links to a page that describes how its facial-recognition technology works: Facebook builds a database containing each user’s “profile picture and photos and videos that you’ve been tagged in…”  You can read more here: https://www.facebook.com/help/218540514842030?ref=dp.  (Urbi et orbi: We know what “photos” and “videos” are. But what on earth are “camera experiences”?)
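For readers who wonder what a face-recognition “template” actually is, the short Python sketch below shows the general pattern: each face is reduced to a fixed-length numeric vector (an “embedding”), the vectors are stored per user, and new photos are matched to enrolled users by similarity. This is purely illustrative and is not Facebook’s system; the names (compute_template, TemplateDatabase), the 128-dimension template size, and the matching threshold are our own assumptions, and the embedding function is a stand-in for a trained model.

import numpy as np

EMBEDDING_DIM = 128  # a typical template size; chosen here for illustration only

def compute_template(image_pixels: np.ndarray) -> np.ndarray:
    # Stand-in for a real face-embedding model (normally a trained neural
    # network); here we just hash the pixels into a pseudo-embedding so the
    # sketch runs without any model.
    seed = abs(hash(image_pixels.tobytes())) % (2**32)
    rng = np.random.default_rng(seed)
    vec = rng.standard_normal(EMBEDDING_DIM)
    return vec / np.linalg.norm(vec)  # unit-normalize for cosine similarity

class TemplateDatabase:
    # Maps user IDs to face "templates" built from their photos.
    def __init__(self):
        self.templates = {}

    def enroll(self, user_id, photos):
        # Average the embeddings of a user's photos into a single template.
        vecs = [compute_template(p) for p in photos]
        template = np.mean(vecs, axis=0)
        self.templates[user_id] = template / np.linalg.norm(template)

    def identify(self, photo, threshold=0.6):
        # Return the enrolled user whose template is most similar to the
        # probe photo, or None if nothing clears the threshold.
        probe = compute_template(photo)
        best_id, best_score = None, threshold
        for user_id, template in self.templates.items():
            score = float(np.dot(probe, template))  # cosine similarity
            if score > best_score:
                best_id, best_score = user_id, score
        return best_id

if __name__ == "__main__":
    db = TemplateDatabase()
    db.enroll("alice", [np.zeros((8, 8))])   # toy stand-in for a tagged photo
    print(db.identify(np.zeros((8, 8))))     # -> alice
    print(db.identify(np.ones((8, 8))))      # -> None

The point of the sketch is simply this: the “template” is a biometric measurement derived from a person’s photos, which is why it can constitute “data with special protections” under many countries’ laws.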

So Facebook, to name only one, is building huge databases based on facial recognition.  Our question:  what then? 

FRT isn’t a moral agent, in and of itself.  It’s just a tool, like a hammer, pick-axe, or camera.  And like any tool, it can be used for any purpose within its technical reach, benign or destructive. It is a question of control: whose, why, and, especially for the democratic process, how.

Hosch & Morris, PLLC is a Dallas-based boutique law firm dedicated to data protection, privacy, the Internet and technology. Open the Future℠.
