Navigating the New Reality of AI in Political Advertising

November 9, 2023

Privacy Plus+

Privacy, Technology and Perspective

This week, let’s focus on Meta’s latest announcement about the use of Artificial Intelligence (“AI”) in online political advertising, paralleling the recent executive order on AI.

Meta's Transparency Move

As Election Day 2024 approaches, Meta has set a new precedent: advertisers must now disclose any AI-generated content in political advertisements on Facebook and Instagram. The move aims to tackle online disinformation, which has been a serious problem since the 2016 elections. Starting in 2024, Meta will also bar advertisers from using its own AI tools to create ads about political and social issues, as well as ads in several regulated sectors (housing, employment, credit, health, pharmaceuticals, and financial services). Advertisers may still use AI tools from other companies, but they must disclose that use when submitting the ad to Meta, which will then attach the disclosure to the ad itself. You can read more about this development by clicking on the following link:

https://www.nytimes.com/2023/11/08/technology/meta-political-ads-artificial-intelligence.html

The Evolution and Impact of Deepfakes

Disinformation techniques have grown steadily more sophisticated: from micro-targeting incredibly small groups and even specific individuals; to creating "avatars" whose faces seemingly reflect the target's demographics; to using language patterns, dialects, and idioms with which the target would most identify. The latest concern is AI's ability to produce deepfake video and audio cheaply, making politicians appear to say or do things they never said or did, a significant escalation in the sophistication of online disinformation tactics. Want to hear one? Here's an AI-generated clip of the late, great Johnny Cash singing a Taylor Swift song:

https://twitter.com/buccocapital/status/1719482652577644722?s=46&t=Z2TX8jYh399v4j6SPAmIgQ

Deepfakes generally call into question the trust we place in our judgment based on our senses. Historically, seeing and hearing someone speak has been a marker of authenticity. With deepfakes, the reliability of these sensory experiences is fundamentally compromised, leaving us in a precarious position where digital fakes challenge our perception of reality. The age-old protection of our own eyes and ears is gone. The old joke, “Who’re you going to trust – me, or your lying eyes?” suddenly isn’t funny anymore.

Presidential Action on AI

Addressing these concerns, President Biden recently enacted the "Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence," which outlines steps towards maintaining the integrity of AI applications. The Executive Order does not directly establish requirements for the disclosure of AI in advertising. However, it does mandate the development of standards and best practices for detecting AI-generated content and authenticating official content, tasking the Department of Commerce with developing guidance for content authentication and watermarking to label AI-generated content clearly.

You can read a Fact Sheet about the Order by clicking on the following link:

https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/

Our Thoughts: Seeking Effective Solutions Beyond Disclosure

Disclosure alone may not be sufficient. The complexity of privacy notices has already bred skepticism of the "Notice/Disclosure/Consent" framework. A simple "AI-generated" tag on an ad hardly addresses the underlying issue of potential deception.

Considering the misleading potential of AI, especially in political ads, the solution may need to be more robust. We’re no longer sure of the First Amendment adage: “The cure for lying speech is more truthful speech.” In an era where deepfakes can corrode the very foundation of our democracy, we’d prefer to outlaw AI-generated misleading and deceptive political ads altogether.

At a minimum, political ads could benefit from prominent warning labels and comprehensive risk statements, ensuring the public is fully aware of the content's origins and potential to mislead.  For example:


“DANGER,” “WARNING,” “CAUTION”

“Contains Misleading Material – Created by AI”

“Click on the Link to see Further Precautions”

(if more explanation is needed)

“KEEP OUT OF REACH OF CHILDREN”

(if appropriate)
