It’s Time to Freeze Facial Recognition

Privacy Plus+

Privacy, Technology and Perspective

It’s time to freeze facial recognition: This week, we face off against facial recognition again. This time, a recent Georgetown Law report confronts us with the news that law enforcement in Chicago and Detroit have purchased facial recognition systems that can make real-time surveillance a reality:

https://www.americaunderwatch.com/

According to the report, each city has contracted for facial recognition systems with a technology service provider called DataWorks Plus.  At your own risk, you can take a look at DataWorks Plus’ website here (http://www.dataworksplus.com/).  But what we want you to see on that website is that, as of the time this note is published, something is missing: a privacy policy.

The Federal Trade Commission (FTC) has repeatedly brought cases against companies for failing to post a privacy policy on their websites.  Section 5 of the FTC Act (15 USC §45) prohibits “unfair or deceptive acts or practices in or affecting commerce,” and a company violates that law when it fails to provide a clear, understandable, and complete notice about its information practices.  Interestingly, a lack of vendor due diligence may also violate Section 5 of the FTC Act. 

Here, we wonder whether Chicago or Detroit even noticed that there was no privacy policy on their vendor’s website.  If they could not read one on their vendor’s website, perhaps they found it another way?  Did either city ask about, or even consider, the privacy implications of the facial recognition systems they were buying? 

Here are some questions Chicago and Detroit might have asked this vendor:

  • Why don’t you have a privacy policy on your website?

  • What facial recognition data passes through your hands? Where is it coming from, where is it stored, what do you do with it, and where is it going?  When?

  • Have you performed a privacy impact assessment related to the facial recognition system that the city would like to license from you?

  • Have you ever engaged a third-party information security expert to assess the security risks associated with the facial recognition system you are licensing?  What were its conclusions?

  • Are you hosting any critical systems or sensitive data in connection with this technology? What and where?

  • Are you outsourcing any functions to third parties/subprocessors?  Which functions, which third parties/subprocessors, and under what terms and conditions?

  • Do you maintain cybersecurity policies and procedures based on industry standards, under NIST or some other recognized assessment framework?

The great irony, of course, is that Chicago is part of Illinois, which is one of only three states that have regulated the use of biometric identifiers—Texas (Tex. Bus. & Comm. Code § 503) and Washington (RCW § 19.375) being the other two.  Under the Illinois Biometric Information Privacy Act (740 ILCS 14) (“BIPA”), affirmative consent is required in order for companies to collect biometric markers from their customers, and BIPA creates a private right of action for customers against violators. 

However, law enforcement in Chicago is not subject to BIPA, and presumably it intends to surveil the city’s residents, including children, without consent.  Maybe Chicago (and Detroit) will rely instead on sovereign immunity and their necessary police powers?  But even those have, or at least should have, appropriate limits and boundaries.  What is not clear is what – if any – guidelines will direct how Chicago and Detroit law enforcement may use these technologies. 

Here, then, are some questions Chicago and Detroit might now ask their police departments:

  • Will you first require a warrant?  Or do you regard this as nothing more than a digital library of every “wanted” poster in the post office, for whom you’re already supposed to be “scanning the crowds”?

  • Warrant or no, will you employ these systems to search for known individuals in a known setting or circumstance (e.g., to find fugitives in a bar, or known terrorists entering an arena), or to identify persons already of interest from other surveillance?

  • Will you use them without limits, for instance to identify everyone who was present at a particular event?

  • How will you mount the cameras – on utility poles? On sunglasses worn by the police, as police in China are said to do? Is there anywhere you will not mount cameras?

  • Will you compile databases?  What do you plan to do with them?  Where are they being stored?  How long is the data in those databases being retained?  Have you appropriately restricted access to the data?

  • Who will make these determinations?

And for that matter, here are some questions the citizens of Chicago and Detroit might ask themselves (as we all might well ask, since local governments are increasingly involved in smart-city projects and otherwise considering the use of facial-recognition technology): 

  • For our own protection, will we accept facial recognition technology?  We accept and even welcome security cameras in much of public life.  We place them on our homes, businesses, doors, alleys, and much else. But facial recognition technology lets us see not only that someone is there, but who it is – is that really just a routine, even beneficial, extension of something we already expect? 

  • What restrictions do we want on facial recognition technology?  Put another way, as we move from one end of the spectrum of “use” toward the other, when and where will we cross the point where we are simply creeped out? (Call it the “Creep Out Point.”  We think the Creep Out Point is the key – the veritable Rosetta Stone – to understanding privacy.)

Not for the first time, technology is outrunning the debate.  That Chicago and Detroit can contract with a vendor apparently lacking even a posted privacy policy suggests that privacy concerns may not have featured at all in the decision process, much less loomed large as they did in San Francisco, which recently banned the technology’s use by local agencies. For a moment, never mind artificial intelligence; what is needed now is plain old human intelligence – a brain! – diligently and thoughtfully applied in order to find principles by which to weigh competing values.  Quickly. 

Accordingly, this week, we call on Congress to temporarily halt the use of facial recognition systems and other biometrics while it focuses on this debate, with clear exceptions for defined police use and national security. This country needs a serious debate, leading to legislation that restricts and regulates the use of facial recognition technology and other biometric identifiers (defined broadly) by both governments and private companies like Facebook (and maybe others) that are commoditizing our faces. 

We strongly doubt that notice and consent will be enough.  Notice may be adequate in some circumstances, much as notices that you may be photographed in the crowd are common as you enter many arenas.  But it will not be adequate if facial recognition is used generally, such as to surveil city pedestrians just to see who’s walking by.

You can read more here:

https://www.wired.com/story/facial-recognition-regulation/

https://www.washingtonpost.com/business/facial-recognition/2019/05/23/b9517b06-7d92-11e9-b1f3-b233fe5811ef_story.html?utm_term=.b566f8c14e7f

 Hosch & Morris, PLLC is a Dallas-based boutique law firm dedicated to data protection, privacy, the Internet and technology. Open the Future℠. 
