In the Dark: Understanding the Implications of “Shadow AI.”
June 20, 2024
Privacy Plus+
Privacy, Technology and Perspective
This week, let’s consider the growing trend of “Shadow AI.”
What is “Shadow AI”?
You may already be familiar with the term “Shadow IT”—systems or solutions procured without an organization’s official approval or oversight, often bypassing established governance, procurement, and compliance protocols—but you may be less familiar with “Shadow AI.” With the rapid rise of generative AI, especially, “Shadow AI” is already a pervasive trend that’s only growing.
“Shadow AI” refers to the unsanctioned use of AI tools or systems within an organization, often without the knowledge of the organization’s information technology (IT) and legal departments and outside of the organization’s governance framework. Think of an employee who inputs proprietary corporate data into a personal ChatGPT account, and you’ll identify a risk associated with “Shadow AI.” Other instances of “Shadow AI” may include the unsanctioned use of AI by vendors without the knowledge or authorization of their client—perhaps your organization?
What Are the Risks Associated with “Shadow AI”?
Risks abound when AI is involved: privacy and security risks; non-compliance in regulated sectors; data and intellectual property (IP) leakage; IP infringement; discrimination and algorithmic bias; and operational risks such as inconsistent data processing, loss of trust and reputation, and lack of oversight and accountability, along with the cost implications of remediation efforts and legal penalties. When “Shadow AI” is involved, organizations cannot identify or appropriately address these risks because they do not know they are facing them. So perhaps the greatest risk is employees’ lack of awareness of the risks associated with AI and of the importance of adhering to established governance protocols.
Addressing the Risks:
Best practices require robust governance tailored to your organization's needs, particularly its data, its operations, and the use cases for its AI tools and systems. Where very sensitive data is involved—for example, in benefits administration matters—it may be advisable to restrict the use of AI tools altogether, or otherwise to ensure that their use in that context adheres to the highest data security and confidentiality standards. Further, AI users must be trained to understand that the risk of errors, biases, or unfair outcomes makes relying on such tools for decision-making dicey, if not illegal. Generally, AI policies must balance the advantages of these tools—such as enhanced employee productivity and innovation—against the potential risks they pose in the context in which they are used.
Our Thoughts:
Employees may be reluctant to disclose their use of AI, fearing disciplinary action. To foster transparency, consider offering amnesty (in return for complete disclosure) as part of a broad initiative focused on training and awareness to encourage openness and compliance. Employees should understand that using these tools requires the organization's due diligence and appropriate contracting.
Organizations must conduct due diligence to ensure that vendors are reputable and that tools are secure and effective, and then contract appropriately to set terms for access to data, prevent misuse of data, define data and IP ownership rights, specify data protection and security measures, ensure compliance and audit rights, outline conditions for contract termination, and address liability and indemnification. All of these efforts run parallel to data mapping. If an organization doesn’t know where, how, and under what terms its data is being processed, it cannot accurately map that data or maintain a complete inventory of its tools and systems; the unsanctioned AI stays in the dark, meeting the very description of “Shadow AI.”
By providing training, encouraging openness through initiatives like amnesty, and implementing robust contractual terms, organizations can effectively mitigate risks, protect sensitive data, and harness the full potential of AI responsibly and ethically. This approach not only safeguards the organization’s interests, but also promotes trust and accountability among employees, stakeholders, and customers.
--
Hosch & Morris, PLLC is a boutique law firm dedicated to data privacy and protection, cybersecurity, the Internet and technology. Open the Future℠.