Breaking — Federal AI Policy
White House National Policy Framework for Artificial Intelligence · March 20, 2026

The White House Just Told
Congress How to Govern AI.
Here's What It Actually Says.

On March 20, 2026, the Trump White House released a 4-page document telling Congress what federal AI law should look like. It is not a law. It does not bind anyone yet. But it is the clearest signal of where federal AI policy is headed — and one provision could wipe out every state AI protection law in the country.

Released: March 20, 2026 · Source: whitehouse.gov · 4 pages · Not binding — legislative recommendations only · Fulfills Dec. 2025 Executive Order 14365
⚠️
This is not a law. This document is a White House “wish list” for Congress — a set of recommendations, not enforceable rules. Nothing in it takes effect automatically. Congress would have to pass legislation based on these recommendations for any of it to become law. That said: this is the official position of the Trump administration, and it will shape every AI bill introduced in Congress from here forward. The preemption provision has already failed twice in Congress — but the White House keeps pushing.
Document type
Legislative recommendations — not binding, not law
Fulfills
Executive Order 14365, Dec. 11, 2025 — directed preparation of a federal AI framework
Preemption status
Failed to pass twice in Congress — removed from the One Big Beautiful Bill and kept out of the NDAA
Plain English First
What Is This and Why Should You Care?

Right now, AI is largely unregulated at the federal level. A handful of states — Colorado, California, New York, Utah — have started passing their own AI laws: rules about transparency, bias, algorithmic accountability, and protecting people from automated decisions that affect their jobs, housing, or health care.

The AI industry hates this. Companies don't want 50 different sets of rules. They want one federal rule — ideally a weak one — that overrides everything the states have done. That is exactly what this White House document asks Congress to create.

The document has seven parts. Six of them are mostly fine — child protection, anti-fraud, copyright, free speech, workforce training, innovation. Many have bipartisan support. The seventh part is the explosive one: a proposal to strip states of the power to regulate AI development and override every state AI law that Congress decides is “too burdensome.”

There is also a significant gap: the document says nothing about civil rights. Nothing about AI bias. Nothing about accountability when an AI system makes a wrong decision that costs you your job, your loan application, or your asylum claim. The framework protects AI companies from liability. It does not protect you from AI.

The Document
All Seven Pillars — Broken Down
What the White House says, what it actually means in plain English, and what experts and lawmakers are saying about each one.
01
Protecting Children and Empowering Parents
Age verification, parental controls, deepfake protections, Take It Down Act
Mostly Good
What the White House says
Require AI platforms likely accessed by minors to implement age assurance, parental controls, and protections against sexual exploitation and self-harm. Confirm existing child privacy laws apply to AI. Build on the Take It Down Act — a bill championed by First Lady Melania Trump requiring platforms to remove nonconsensual deepfakes including of minors.
What it means for you
Age verification requirements on AI platforms. More parental controls over what AI services kids can access. Federal protection against deepfake abuse images, including AI-generated CSAM. The Take It Down Act passed in 2025 and takes effect May 2026 — this framework builds on it. This section has broad bipartisan support.
States keep the power to enforce their own child protection laws — even where AI generates the content
Existing child privacy law (COPPA) explicitly confirmed to apply to AI systems
Warns against “ambiguous standards” and “open-ended liability” that could generate litigation — language that protects platforms more than kids
02
Safeguarding American Communities
Data center energy costs, AI fraud, small business grants, national security
Mixed
03
Respecting Intellectual Property and Supporting Creators
AI training on copyrighted work, digital replicas, collective licensing
Complicated
04
Preventing Censorship and Protecting Free Speech
Government coercion of AI platforms, redress for censorship — but only censorship by the government
Watch Closely
05
Enabling Innovation and Ensuring AI Dominance
Regulatory sandboxes, no new federal AI regulator, sector-specific oversight
Industry Win
06
Educating Americans and Developing an AI-Ready Workforce
Non-regulatory training programs, land-grant institutions, job trend studies
Thin
07
Establishing a Federal Framework — Preempting State AI Laws
The most controversial provision — the one that could override your state's AI protections
Most Controversial
The Most Important Thing on This Page
What State Preemption Actually Means — and Why It Keeps Failing
The federal government has the power to preempt — override — state laws when Congress acts under the Commerce Clause. The White House wants Congress to use that power to wipe out state AI laws deemed “too burdensome.” This is the part that affects you most directly, and it has failed to pass twice already.
States Would Keep
General consumer protection laws applied to AI users and developers
Child safety and child sexual abuse material laws
Anti-fraud laws — including AI-enabled scams
Zoning authority over data center locations
Rules about how their own government uses AI — police, education, public services
States Would Lose
Any law regulating AI development itself — the White House calls this “inherently interstate”
Laws requiring algorithmic impact assessments or bias testing
Laws imposing transparency or explainability requirements on AI companies
Laws holding AI developers liable for how third parties misuse their models
Any law deemed an “undue burden” — a standard the federal government defines
This Has Already Failed Twice in Congress
Federal preemption of state AI laws was included in the One Big Beautiful Bill reconciliation package — then removed under pressure. It was also proposed for the National Defense Authorization Act (NDAA) — and never made it in. Congress has twice declined to preempt state AI laws. The White House is asking again. States, governors, and state attorneys general are expected to fight it hard. On March 20, 2026 — the same day this framework was released — Rep. Beyer and four other Democrats introduced the GUARDRAILS Act, which would repeal the December 2025 executive order and block the state preemption effort entirely.
What the Framework Does Not Say — The Gaps That Matter
No Civil Rights
The framework contains zero mention of AI bias, algorithmic discrimination, or civil rights. An AI system that denies you a loan, a job, or housing based on race — not addressed. Crowell & Moring's analysis: “It makes no reference to the risks of AI systems' bias, nor does it seek to mitigate that harm through quality or testing requirements.”
No Accountability for Errors
No testing requirements. No performance monitoring. No requirement that AI systems be accurate before they're deployed in decisions about consumers. No mention of what happens when an AI makes a decision that harms you — and is wrong.
No Worker Protections
No requirement that companies notify workers before deploying AI that displaces them. No income protection for workers replaced by automation. “Study trends” is not a worker protection policy.
No Explanation Right
If an AI system denies your job application, your loan, or your parole — you have no right under this framework to know why or to challenge it. The framework explicitly protects companies from the kind of transparency requirements that would create such a right.
Who Is Pushing Back and What They're Saying
Documented statements from lawmakers and experts — sourced to primary reporting
Rep. Don Beyer (D-VA) + 4 co-sponsors
Introduced GUARDRAILS Act — March 20, 2026
On the day the framework dropped, introduced legislation that would repeal EO 14365 and block the state AI preemption moratorium entirely. Joined by Reps. Matsui, Lieu, Jacobs, and McClain Delaney.
Sen. Maria Cantwell (D-WA)
Senate Commerce Committee Ranking Member
Advocates for a structured approach grounded in standards, testing, and public infrastructure investment — directly at odds with the framework's rejection of new regulatory requirements.
Rep. Yvette Clarke (D-NY) & Sen. Brian Schatz (D-HI)
Members, House Energy & Commerce / Senate Commerce
Raised concerns about federal preemption, lack of accountability and oversight, and the absence of civil rights protections in the framework.
State Governors and AGs
National Governors Association
The NGA's summary of the framework — notably neutral in tone — signals governors are watching closely. State attorneys general are expected to mount strong legal and political opposition to preemption if it advances in Congress.
State AI Laws That Federal Preemption Would Target
These laws exist today and would potentially be overridden if Congress passes the preemption the White House is asking for
Colorado
Colorado AI Act
Requires developers of high-risk AI systems to use reasonable care to avoid algorithmic discrimination. Effective June 30, 2026. Named specifically in the December 2025 EO as a target.
California
Transparency in Frontier AI Act
Requires AI companies to publish safety policies and incident reporting for frontier models. Imposes transparency requirements on large AI developers.
New York
RAISE Act
Requires safety testing and risk assessments for frontier AI models deployed in New York. Passed 2025.
Utah
HB 286
Requires AI systems used in consumer-facing applications to disclose when they are AI. Transparency requirement that could be deemed an “undue burden” on AI development.
Texas
Texas AI Accountability Act (proposed)
Algorithmic accountability bill requiring impact assessments for high-risk AI. Still advancing through the state legislature.
Illinois
AEDT & Biometric Laws
Illinois's existing biometric privacy law (BIPA) and Artificial Intelligence Video Interview Act already impose requirements on employers using AI — potentially vulnerable under broad preemption language.
Primary Sources — Every Claim on This Page Is Sourced
White House National Policy Framework for Artificial Intelligence — Full 4-page PDF (March 20, 2026) · whitehouse.gov
National Governors Association Summary — March 25, 2026 · nga.org
Crowell & Moring LLP client alert — “Framework's silence on civil rights bears mention” — March 2026
Holland & Knight alert — GUARDRAILS Act introduction and opposition overview — March 2026
Ropes & Gray alert — State AI preemption legal analysis — March 2026
ZwillGen analysis — “The framework's likely value is less in immediate legal effect and more in framing and agenda signaling” — March 27, 2026
Governing.com — “Preemption has failed to pass twice this Congress” — March 25, 2026
Freshfields advisory — Preemption would prohibit states from regulating AI development and imposing third-party liability — March 2026
Executive Order 14365, Ensuring a National Policy Framework for Artificial Intelligence — December 11, 2025