California Transparency in Frontier AI Act (SB 53)

CA · For companies building AI · High severity · In effect

In effect since January 1, 2026

Overview

Requires developers of frontier AI models, defined by a training-compute threshold set in statute, to publish safety frameworks, report critical safety incidents to the California Office of Emergency Services, and implement whistleblower protections. Large frontier developers, those above a statutory revenue threshold, face enhanced duties.

This is an AI-specific state law.

Who this applies to

This regulation applies to companies that build, develop, or sell AI tools, models, or systems. If your company creates AI products that other businesses or consumers use, this regulation may apply to you.

AI categories covered

  • General purpose AI

What this requires you to do

  • Risk management program required

    Implement a risk management program. Maintain ongoing processes to identify, assess, and mitigate AI-related risks.

  • Transparency notice required

    Provide transparency notices. Inform affected individuals that AI is being used and how it influences decisions.

  • Record-keeping required

    Maintain records. Keep documentation of your AI systems, decisions made, and compliance activities.

  • Disclosure to users required

    Disclose AI use. Make it clear to users when they are interacting with AI-generated content or AI-driven systems.

Enforcement and penalties

Civil penalties of up to $1 million per violation for large frontier developers, enforced by the California Attorney General. Employees are protected as whistleblowers and may bring civil actions to enforce those protections.

Source

Read the full text

https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202520260SB53

Always verify current language and amendments at the official source.
