Requires covered platforms to remove reported nonconsensual intimate imagery, including AI-generated synthetic content, within statutory deadlines after valid notice.
What this means
Platforms with user-generated content need operational takedown procedures, escalation channels, and record-keeping around response timelines.
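A minimal sketch of the record-keeping this implies, assuming a 48-hour removal window purely for illustration; the `TakedownRequest` structure and field names are hypothetical, not drawn from the statute or any platform's API:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Illustrative window only; confirm the deadline that actually applies
# to your platform under the governing statute.
TAKEDOWN_WINDOW = timedelta(hours=48)

@dataclass
class TakedownRequest:
    """Minimal audit record for a reported-content takedown."""
    report_id: str
    content_url: str
    received_at: datetime
    validated: bool = False
    removed_at: datetime | None = None

    @property
    def deadline(self) -> datetime:
        # Clock starts at receipt of a valid notice in this sketch.
        return self.received_at + TAKEDOWN_WINDOW

    def is_overdue(self, now: datetime) -> bool:
        return self.validated and self.removed_at is None and now > self.deadline

req = TakedownRequest(
    report_id="R-1001",
    content_url="https://example.com/post/123",
    received_at=datetime(2025, 6, 1, 9, 0, tzinfo=timezone.utc),
    validated=True,
)
print(req.deadline)  # 2025-06-03 09:00:00+00:00 under the assumed 48-hour window
```

The point of the timestamps is evidentiary: a platform that can show receipt, validation, and removal times per report is in a far better position when response timelines are questioned.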
Commerce Department commentary has highlighted implementation burdens in certain state AI statutes. That commentary does not displace enacted state requirements.
What this means
Federal commentary does not remove state obligations. Multi-state operators should keep state-by-state controls in place while monitoring federal developments.
Federal priorities include coordinated AI-related investigations and litigation strategy across civil rights, consumer protection, and fraud contexts. Public statements indicate a stronger whole-of-government enforcement posture.
What this means
Organizations should expect increased scrutiny of AI governance records, testing evidence, and incident response controls during investigations.
Establishes federal policy priorities for AI safety, security, and rights protections across agencies. Directs agencies to issue additional standards, procurement rules, and risk controls.
What this means
The order sets federal direction; in most cases it does not impose duties directly on private companies. The practical impact comes from the agency guidance and procurement rules that follow.
EEOC guidance clarifies that existing federal anti-discrimination laws apply to AI-assisted hiring and employment decisions. Employers remain responsible for adverse impact even when tools are procured from vendors.
What this means
Teams using AI in hiring, promotion, or termination should conduct bias testing and retain supporting records. Vendor contracts do not transfer liability away from the employer.
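One widely used initial screen for adverse impact is the "four-fifths rule" from the Uniform Guidelines on Employee Selection Procedures: a selection rate for any group below 80% of the highest group's rate is flagged for review. It is a screening heuristic, not a legal conclusion. The numbers below are hypothetical:

```python
# Four-fifths rule screen for adverse impact (illustrative numbers).
def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

def adverse_impact_ratio(group_rate: float, reference_rate: float) -> float:
    """Ratio of a group's selection rate to the highest group's rate."""
    return group_rate / reference_rate

rate_a = selection_rate(48, 100)   # reference group: 48% selected by the AI screen
rate_b = selection_rate(30, 100)   # comparison group: 30% selected

ratio = adverse_impact_ratio(rate_b, rate_a)
print(f"impact ratio: {ratio:.2f}")                                  # 0.62
print("flag for review" if ratio < 0.8 else "within 4/5 threshold")  # flag for review
```

Retaining these calculations alongside the underlying applicant data is exactly the kind of supporting record that matters in an investigation.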
The FTC explains that AI marketing and product claims must be truthful and supported by evidence. It warns against overstatements, hidden limitations, and unfair algorithmic practices.
What this means
If your company markets AI capabilities, you need claim substantiation and internal controls around model performance statements. Existing FTC authority applies even without AI-specific federal statutes.
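One way to operationalize claim substantiation is a release gate that compares proposed marketing numbers against measured evaluation results. A minimal sketch with hypothetical metric names and values; nothing here reflects FTC-prescribed procedure:

```python
# Hypothetical gate: block a marketing claim unless recorded evidence supports it.
MEASURED_METRICS = {           # results from internal evaluation runs (illustrative)
    "accuracy": 0.91,
    "latency_p95_ms": 240.0,
}

PROPOSED_CLAIMS = {            # numbers proposed for publication (illustrative)
    "accuracy": 0.95,          # overstated relative to evidence
    "latency_p95_ms": 250.0,   # supported (claims no better than measured)
}

def unsupported_claims(claims: dict, evidence: dict) -> dict:
    """Return claims that the recorded evidence does not substantiate."""
    problems = {}
    for metric, claimed in claims.items():
        measured = evidence.get(metric)
        if measured is None:
            problems[metric] = "no evidence on file"
        elif metric.endswith("_ms"):
            if claimed < measured:        # claiming faster than measured
                problems[metric] = f"claimed {claimed}, measured {measured}"
        elif claimed > measured:          # claiming better than measured
            problems[metric] = f"claimed {claimed}, measured {measured}"
    return problems

print(unsupported_claims(PROPOSED_CLAIMS, MEASURED_METRICS))
# {'accuracy': 'claimed 0.95, measured 0.91'}
```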
NIST AI Risk Management Framework (National Institute of Standards and Technology; voluntary framework, active)
A voluntary framework for managing AI risk through four core functions: Govern, Map, Measure, and Manage. It is widely used by legal, policy, and engineering teams as a baseline operating model.
What this means
NIST AI RMF alignment helps demonstrate reasonable controls and can support state-law defenses where statutes reference NIST-style practices. Many teams treat it as the most practical cross-jurisdiction baseline.
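In practice, alignment often takes the form of a control register keyed to the four RMF functions, with evidence attached to each control. A minimal sketch; the control IDs, names, and evidence fields below are hypothetical, not RMF subcategory identifiers:

```python
# Illustrative control register keyed to the NIST AI RMF core functions.
from enum import Enum

class RMFFunction(Enum):
    GOVERN = "Govern"
    MAP = "Map"
    MEASURE = "Measure"
    MANAGE = "Manage"

CONTROLS = [
    {"id": "GV-01", "function": RMFFunction.GOVERN,
     "control": "AI policy approved by legal", "evidence": "policy-v3.pdf"},
    {"id": "MP-01", "function": RMFFunction.MAP,
     "control": "Use-case risk classification at intake", "evidence": "intake-form"},
    {"id": "MS-01", "function": RMFFunction.MEASURE,
     "control": "Pre-release bias testing", "evidence": None},
    {"id": "MG-01", "function": RMFFunction.MANAGE,
     "control": "AI incident response runbook", "evidence": "runbook-v2"},
]

def gaps(controls: list[dict]) -> list[str]:
    """Controls with no evidence on file; the first thing an auditor asks for."""
    return [c["id"] for c in controls if c["evidence"] is None]

print(gaps(CONTROLS))   # ['MS-01']
```

A register like this doubles as the governance record regulators increasingly expect to see, and it maps cleanly onto state statutes that reference NIST-style practices.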
The FTC has repeatedly stated it will use existing unfair and deceptive practices authority to pursue harmful AI uses. This includes biased outcomes, dark patterns, and unsupported model claims.
What this means
Consumer-facing AI systems should be reviewed for fairness, transparency, and claim accuracy before launch. Enforcement can include monetary penalties and ongoing compliance monitoring.