Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence
Headline: Federal Government Orders Broad Safety and Oversight for Artificial Intelligence
What it does: Agencies must develop and implement standards, testing, reporting, and governance to ensure AI systems are safe and secure and that they respect privacy, civil rights, and critical infrastructure.
- Companies must report training of large or risky AI models and powerful computing clusters.
- Agencies must adopt AI risk-management guidance and designate Chief AI Officers and testing programs.
- New testing and oversight will target AI uses in healthcare, critical infrastructure, and biosecurity.
Summary
This order directs the Federal Government to govern AI development and use so systems are safe, secure, and trustworthy. It requires agencies to create standards, testing methods, and guidance for generative and dual-use AI, and to build AI testbeds and evaluation tools.
It requires companies and cloud providers to report large-model training and powerful computing clusters, and directs agencies to protect privacy, civil rights, consumers, and critical infrastructure. The order also calls for recruiting AI talent into government and for international engagement on AI governance.
The goal is to harness AI benefits while reducing risks to security, health, jobs, and civil liberties.