Enacted May 19, 2025 (Public Law 119-12). Criminal provisions took effect on signing; platform compliance provisions take effect May 19, 2026. Covers both authentic and AI-generated non-consensual intimate imagery (NCII).
What this means
Platforms need takedown procedures, notice handling, and FTC-aligned compliance programs. For this Act, FTC jurisdiction extends to nonprofits.
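The takedown procedures above hinge on a statutory removal window: the Act requires covered platforms to remove reported NCII within 48 hours of a valid request. A minimal deadline-tracking sketch, assuming a simple intake timestamp model (function and field names here are illustrative, not from the Act):

```python
from datetime import datetime, timedelta, timezone

# Statutory removal window after a valid takedown request (48 hours).
REMOVAL_WINDOW = timedelta(hours=48)

def takedown_deadline(received_at: datetime) -> datetime:
    """Latest time by which reported NCII must be removed."""
    return received_at + REMOVAL_WINDOW

def is_overdue(received_at: datetime, now: datetime) -> bool:
    """True if the removal deadline for this request has passed."""
    return now > takedown_deadline(received_at)
```

In practice the intake record would also capture requester identity verification and the content identifier; timezone-aware datetimes avoid off-by-hours errors across regions.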
Commerce Department commentary has highlighted implementation burdens in certain state AI statutes, but such commentary does not displace enacted state requirements.
What this means
Federal commentary does not remove state obligations. Multi-state operators should keep state-by-state controls in place while monitoring federal developments.
The Task Force coordinates litigation strategy on AI-related matters. Executive orders cannot preempt state law, and as of April 2026 no lawsuits challenging state AI statutes have been filed.
What this means
Monitor federal litigation filings and Commerce Department commentary on state AI laws, but keep state-by-state compliance programs in place.
Established federal AI safety, security, and rights priorities across agencies. Revoked by Executive Order 14148 on January 20, 2025. NIST AI RMF and GenAI Profile developed under the order persist as voluntary frameworks.
What this means
Binding EO requirements are gone, but voluntary NIST materials and state laws referencing them still shape compliance.
EEOC guidance clarifies that existing federal anti-discrimination laws apply to AI-assisted hiring and employment decisions. Employers remain responsible for adverse impact even when tools are procured from vendors.
What this means
Teams using AI in hiring, promotion, or termination should conduct bias testing and retain supporting records. Vendor contracts do not transfer liability away from the employer.
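A common first screen in the bias testing described above is the four-fifths rule from the EEOC's Uniform Guidelines: a selection rate for any group below 80% of the highest group's rate is generally treated as evidence of adverse impact. A minimal sketch, assuming simple per-group counts (the data shapes are illustrative; the 0.8 threshold comes from the Guidelines):

```python
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, applicants); returns selection rate per group."""
    return {group: selected / applicants
            for group, (selected, applicants) in outcomes.items()}

def four_fifths_flags(outcomes: dict[str, tuple[int, int]]) -> dict[str, bool]:
    """Flag groups whose selection rate is below 80% of the highest group's rate."""
    rates = selection_rates(outcomes)
    top_rate = max(rates.values())
    return {group: rate / top_rate < 0.8 for group, rate in rates.items()}
```

For example, selection rates of 50% and 30% yield a ratio of 0.6, so the lower-rate group is flagged. A flag is a trigger for deeper statistical review and documentation, not a legal conclusion on its own.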
The FTC explains that AI marketing and product claims must be truthful and supported by evidence. It warns against overstatements, hidden limitations, and unfair algorithmic practices.
What this means
If your company markets AI capabilities, you need claim substantiation and internal controls around model performance statements. Existing FTC authority applies even without AI-specific federal statutes.
National Institute of Standards and Technology · Framework · Active
A voluntary framework for managing AI risk across its four core functions: Govern, Map, Measure, and Manage. It is widely used by legal, policy, and engineering teams as a baseline operating model.
What this means
NIST AI RMF alignment helps demonstrate reasonable controls and can support state-law defenses where statutes reference NIST-style practices. Many teams treat it as the most practical cross-jurisdiction baseline.
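Teams often operationalize RMF alignment as a risk register keyed to the framework's four functions (Govern, Map, Measure, Manage). A minimal sketch of one register entry, assuming illustrative field names not drawn from the RMF itself:

```python
from dataclasses import dataclass

# The four NIST AI RMF core functions.
RMF_FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

@dataclass
class RiskEntry:
    system: str     # AI system the risk applies to
    risk: str       # identified risk
    function: str   # RMF function the control falls under
    control: str    # mitigating control
    owner: str      # accountable team or role

    def __post_init__(self) -> None:
        if self.function not in RMF_FUNCTIONS:
            raise ValueError(f"unknown RMF function: {self.function}")
```

A register like this makes it straightforward to show, control by control, which RMF function a practice maps to when demonstrating reasonable controls under a NIST-referencing state statute.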
The FTC has repeatedly stated it will use existing unfair and deceptive practices authority to pursue harmful AI uses. This includes biased outcomes, dark patterns, and unsupported model claims.
What this means
Consumer-facing AI systems should be reviewed for fairness, transparency, and claim accuracy before launch. Enforcement can include monetary penalties and ongoing compliance monitoring.