Exploring EPIC’s Report on Outsourced AI Systems

May 9, 2024

Privacy Plus+

Privacy, Technology and Perspective


This week, let’s consider the Electronic Privacy Information Center’s in-depth analysis of government AI procurement, catchily entitled Outsourced & Automated.

Background

EPIC is an influential nonprofit research center focused on privacy and democratic values in the information age. On September 14, 2023, EPIC published its first in-depth look into state AI procurement: Outsourced & Automated.

State and local governments across the U.S. increasingly rely on AI systems developed by private companies to perform important government functions, even to aid or replace human decision-making. For example, governments use AI systems to predict crime, detect fraud, determine the allocation of public benefits, generate government documents, and power chatbots and agency assistants. Outsourcing and automating important government functions in this way raises concerns about data use and privacy, accuracy, bias, reliability of outputs, and accountability.

For several years, EPIC has been investigating how state agencies acquire AI systems. Its report, based on contracts and records uncovered during EPIC’s investigation, reveals the opaque nature of AI procurement in government settings, identifies the widespread integration of AI in state operations, and details the major corporations that develop and manage these systems. Such companies include Deloitte ($193M+, 16 state contracts); Optum ($149M+, 2 state contracts); Accenture ($121M+, 1 state contract); Aisera ($99M+, 40 state contracts); Fast Enterprises ($88M+, 1 state contract); and others.

Key Findings

  1. Privacy and Cybersecurity Risks: The report emphasizes that from the moment they are born (and sometimes even before), humans are “tracked, digitized, and commodified.” Personal data is valuable, and AI vendors use it in their systems in ways that ultimately allow those systems to make decisions about you, often without your consent. This causes both “autonomy harm” and privacy harm, because third-party automated systems, not you, decide how your data is used. Governmental AI systems also carry cybersecurity risks: the report notes that most AI contractors can access citizens’ personal data within the systems they supply. It calls for transparent data practices and stringent security measures to mitigate these risks.

  2. Accuracy, Bias, and Reliability Risks: The report notes that many government AI systems operate without meaningful human oversight, so government users rely on the systems’ outputs without understanding how they are produced. Lacking that understanding, users have no assurance that the outputs are accurate, unbiased, and reliable. The report recommends that agencies verify the quality of AI outputs and the legal compliance of AI systems by imposing vendor reporting requirements and audits, along with technical measures to ensure accountability. It also recommends independent reviews of the data used to train the AI systems, stress tests to reveal bias (a minimal sketch of such a test appears after this list), and regular system testing to ensure fairness and accuracy over time.

  3. Procurement Risks: The report finds that many agencies engage in little reasoned deliberation when procuring AI systems. Agencies often fail to seek outside expertise and instead rely inappropriately on vendor marketing materials, which frequently contain claims that the contracts themselves disavow. Most state procurements follow a familiar sequence: requests for proposals (RFPs), vendor bids, offer and acceptance, and amendments. The report recommends that RFPs require a detailed description of each AI system’s capabilities, intended uses, and limitations. It also suggests including three important contract terms: (1) a data ownership provision, (2) a fee schedule with a maximum possible contract price, and (3) a security audit provision. Finally, to mitigate harms, the report suggests that governments (a) establish processes for auditing AI systems and restricting their most harmful uses, (b) impose protective contract language, (c) increase transparency, and (d) pursue non-AI options. EPIC suggests consulting the NIST AI Risk Management Framework for guidance on AI system testing and evaluation requirements.

  4. Increasing Dependency Risk: The report highlights a significant uptrend in governments’ reliance on outsourced automated systems across various sectors. It also cautions against letting private companies decide matters of public concern, such as how welfare benefits are distributed and to whom.
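
To make the report’s recommendation of bias “stress tests” concrete, here is a minimal sketch of one common check: a disparate-impact ratio computed over a sample of a system’s decisions. This is our illustration, not code from the report; the group labels, sample data, and the 80% threshold are hypothetical assumptions.

# Illustrative bias stress test: compare approval rates across groups.
# All group labels, sample data, and the 80% threshold are hypothetical.
from collections import defaultdict

def approval_rates(decisions):
    # decisions: iterable of (group, approved) pairs from an audit sample
    counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: a / t for g, (a, t) in counts.items()}

# Hypothetical audit sample from an AI benefits-eligibility system.
sample = [("group_a", True), ("group_a", True), ("group_a", False),
          ("group_b", True), ("group_b", False), ("group_b", False)]

rates = approval_rates(sample)
ratio = min(rates.values()) / max(rates.values())
print(rates, round(ratio, 2))
if ratio < 0.8:  # the common, though contestable, "80% rule" of thumb
    print("Potential disparate impact: refer for human review and vendor audit.")

A real audit would, of course, use far larger samples, examine multiple outcomes, and track results over time, consistent with the report’s call for regular system testing.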

Our Thoughts

EPIC’s report is a thoughtful resource for those involved in the procurement of AI systems, data management, privacy law, and AI policy. It offers a snapshot of the current landscape and a perspective on the challenges of procuring outsourced automated systems responsibly.

For further insights, you can read the full report here:

https://epic.org/wp-content/uploads/2023/09/FINAL-EPIC-Outsourced-Automated-Report-w-Appendix-Updated-9.26.23.pdf

---

Hosch & Morris, PLLC is a boutique law firm dedicated to data privacy and protection, cybersecurity, the Internet and technology. Open the Future℠.
