Federal AI Guidance and Enforcement

Executive orders, agency guidance, enforcement actions, and proposed legislation affecting AI compliance.

Key takeaways

  • No federal preemption measure has been enacted; state AI laws remain in effect.
  • NIST AI RMF is the closest thing to a universal operating baseline.
  • FTC and EEOC continue enforcing existing law against AI practices.

Congress · Legislation · Pending

Requires covered platforms to remove reported nonconsensual intimate imagery, including AI-generated synthetic content, within statutory deadlines after valid notice.

What this means

Platforms with user-generated content need operational takedown procedures, escalation channels, and record-keeping around response timelines.

Department of Commerce · Agency guidance · Active

Commerce commentary has highlighted implementation burdens imposed by certain state AI statutes, but such commentary does not displace enacted state requirements.

What this means

Federal commentary does not remove state obligations. Multi-state operators should keep state-by-state controls in place while monitoring federal developments.

Department of Justice · Rulemaking · Active

Federal priorities include coordinated AI-related investigations and litigation strategy across civil-rights, consumer, and fraud contexts. Public statements indicate a stronger whole-of-government enforcement posture.

What this means

Organizations should expect increased scrutiny of AI governance records, testing evidence, and incident response controls during investigations.

Establishes federal policy priorities for AI safety, security, and rights protections across agencies. Directs agencies to issue additional standards, procurement rules, and risk controls.

What this means

This order sets federal direction rather than direct private company duties in most cases. The practical impact comes from agency guidance and procurement rules that follow.

Equal Employment Opportunity Commission · Agency guidance · Active

EEOC guidance clarifies that existing federal anti-discrimination laws apply to AI-assisted hiring and employment decisions. Employers remain responsible for adverse impact even when tools are procured from vendors.

What this means

Teams using AI in hiring, promotion, or termination should conduct bias testing and retain supporting records. Vendor contracts do not transfer liability away from the employer.
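One common screen used in bias testing is the "four-fifths rule" from the EEOC's Uniform Guidelines on Employee Selection Procedures: a group whose selection rate falls below 80% of the highest group's rate is flagged for closer review. The sketch below is illustrative only, with hypothetical numbers, and a passing ratio is not by itself a legal safe harbor.

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def impact_ratio(group_rate: float, highest_rate: float) -> float:
    """Group's selection rate relative to the highest group's rate."""
    return group_rate / highest_rate

# Hypothetical applicant pools for illustration only.
rates = {
    "group_a": selection_rate(48, 100),  # 0.48
    "group_b": selection_rate(30, 100),  # 0.30
}
highest = max(rates.values())

# Flag any group whose ratio falls below the four-fifths threshold.
flags = {g: impact_ratio(r, highest) < 0.8 for g, r in rates.items()}
print(flags)  # {'group_a': False, 'group_b': True}
```

Here group_b's ratio is 0.30 / 0.48 ≈ 0.625, below 0.8, so it is flagged; retaining these computations and inputs is part of the supporting records mentioned above.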

Federal Trade Commission · Agency guidance · Active

The FTC explains that AI marketing and product claims must be truthful and supported by evidence. It warns against overstatements, hidden limitations, and unfair algorithmic practices.

What this means

If your company markets AI capabilities, you need claim substantiation and internal controls around model performance statements. Existing FTC authority applies even without AI-specific federal statutes.

National Institute of Standards and Technology · Framework · Active

A voluntary framework for managing AI risk across four functions: govern, map, measure, and manage. It is widely used by legal, policy, and engineering teams as a baseline operating model.

What this means

NIST AI RMF alignment helps demonstrate reasonable controls and can support state-law defenses where statutes reference NIST-style practices. Many teams treat it as the most practical cross-jurisdiction baseline.

Federal Trade Commission · Enforcement · Active

The FTC has repeatedly stated it will use existing unfair and deceptive practices authority to pursue harmful AI uses. This includes biased outcomes, dark patterns, and unsupported model claims.

What this means

Consumer-facing AI systems should be reviewed for fairness, transparency, and claim accuracy before launch. Enforcement can include monetary penalties and ongoing compliance monitoring.

See which federal and state regulations apply to your company

Run the free scan to map your tools and jurisdictions.

Start your free scan