The Shifting Landscape of AI Security
February 13, 2025
Privacy Plus+
Privacy, Technology and Perspective
This week, let’s look down into the inky void of AI security in America.
Background
For several years now, the dangers inherent in frantic, hell-for-leather AI development have been glaringly apparent. In October 2023, Executive Order 14110 established a comprehensive framework for AI governance, with several key provisions:
· Mandatory red-team testing of any foundation model that “would pose a serious risk to national security, national economic security, or national public health and safety,” per “rigorous” standards set by the National Institute of Standards and Technology (NIST);
· Disclosure of red-team testing results;
· Federal notification requirements for new model training; and
· Specific safeguards against:
o AI-assisted development of dangerous biological materials
o Unauthorized or undetected AI-generated content
You can read EO 14110 by clicking on the following link:
Recent Developments
The regulatory landscape has shifted dramatically in 2025. On January 23, 2025, the current administration rescinded EO 14110 by executive order, which is available by clicking on the following link:
The decision to rescind EO 14110 represents a fundamental change in U.S. policy toward AI oversight. At the recent AI Action Summit in Paris, the administration specifically articulated its position that existing safety protocols could impede American AI development, expressly challenging:
· The European Union's Digital Services Act;
· The EU’s General Data Protection Regulation (GDPR); and
· International regulatory frameworks affecting U.S. technology companies.
You can read specifics about the administration’s comments by clicking the following link:
https://time.com/7221099/jd-vance-ai-paris-summit/
Our Thoughts
At least three primary concerns emerge from these policy shifts:
First, the removal of AI safety protocols is an epochally horrible move. It creates regulatory gaps at a crucial moment in technological development. The absence of standardized testing and reporting requirements complicates liability assessments and risk management for both developers and users.
Second, the stance on information regulation raises significant questions about the intersection of free speech, consumer protection, and technological oversight. Normalizing misinformation and disinformation is dangerous. The legal framework for addressing AI-generated content requires careful balancing of innovation, security, and public interest.
Third, tensions over international regulations threaten to destabilize cross-border commerce, digital and otherwise. The EU's GDPR and similar frameworks create binding obligations for any company doing business in those markets. Dismissing these frameworks as "onerous rules" ignores both their fundamental role in global commerce and the significant investments U.S. companies have already made in compliant infrastructure. Moreover, challenging our allies' sovereign right to protect their citizens risks diplomatic fallout and potential market access restrictions for U.S. technology firms.
In short, racing ahead without safeguards or allies serves neither innovation nor security. The foundations of AI leadership must be built on safety, truth, and trust. True leadership in AI demands more than speed—it requires wisdom.
---
Hosch & Morris, PLLC is a boutique law firm dedicated to data privacy and protection, cybersecurity, the Internet and technology. Open the Future℠.