AI Regulations Tracker

A glossary of the terms used throughout the AI and algorithmic decision-making regulations XIRA tracks, organized alphabetically. Updated as new laws are signed.

Tracking 99 regulations across 24 states and jurisdictions. Every entry verified against enrolled statute text. Last updated April 9, 2026.

A

  • ADMT (Automated Decision-Making Technology)

    A term used in California's CCPA regulations, as amended by the CPRA, for technology that processes personal information to make decisions that replace human decision-making or substantially facilitate it. Broader in scope than AI.

  • Algorithmic discrimination

    Differential treatment or impact caused by an automated decision-making system that results in unlawful discrimination based on protected characteristics such as race, gender, age, or disability.

  • Algorithmic pricing

    The use of algorithms to dynamically set or adjust prices based on customer data, demand signals, or behavioral patterns. New York S3008 requires disclosure when algorithmic pricing uses protected class data.

  • Automated decision-making (ADM)

    Any process where a computational system makes or substantially contributes to a decision affecting a person, with limited or no human involvement at the point of decision.

B

  • Bias audit

    An independent evaluation of an automated system to assess whether it produces disparate impact on protected groups. Required annually under NYC Local Law 144 for covered automated employment decision tools.
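
    As a sketch of the arithmetic at the heart of such audits (not the full Local Law 144 methodology), the impact ratio compares each group's selection rate against the most-favored group's; the group names and counts below are hypothetical:

    ```typescript
    // Hypothetical illustration of an impact-ratio calculation, the core
    // arithmetic behind most disparate-impact analyses (e.g. the "four-fifths
    // rule"). Not the full NYC Local Law 144 audit methodology.

    interface GroupOutcomes {
      group: string;    // demographic category, e.g. "candidates over 40"
      selected: number; // candidates the tool advanced
      total: number;    // candidates the tool assessed
    }

    function impactRatios(groups: GroupOutcomes[]): Map<string, number> {
      const rates = groups.map((g) => ({ group: g.group, rate: g.selected / g.total }));
      const best = Math.max(...rates.map((r) => r.rate)); // most-favored group's rate
      // Each group's selection rate divided by the highest selection rate.
      return new Map(rates.map((r): [string, number] => [r.group, r.rate / best]));
    }

    // Ratios below ~0.8 are a conventional red flag for disparate impact.
    const ratios = impactRatios([
      { group: "group A", selected: 48, total: 100 },
      { group: "group B", selected: 30, total: 100 },
    ]);
    console.log(ratios); // group A → 1.0, group B → 0.625
    ```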

C

  • Compliance complexity

    A measure of how operationally difficult a regulation is to implement, independent of penalty severity. A law can carry high penalties yet be simple to comply with (for example, adding a disclosure label), or carry moderate penalties yet require months of cross-departmental work (impact assessments, vendor documentation, monitoring programs).

  • Consequential decision

    A decision that materially affects a person's access to employment, housing, credit, insurance, education, or other significant opportunities. Many AI regulations apply only to AI used in consequential decisions.

  • Cure period

    A window of time after a violation is identified during which a company can fix the problem before penalties apply. Many state privacy laws include cure periods of 30 to 60 days, but several are sunsetting: Connecticut's expired in December 2024, and New Jersey's expires in July 2026. Once a cure period expires, regulators can pursue penalties immediately.

D

  • Deployer

    A company that uses an AI system built by someone else. Distinguished from a developer (who builds the AI). Most state AI laws impose different obligations on deployers versus developers.

  • Developer

    A company that builds and distributes an AI system for others to use. Developers typically face obligations around model documentation, bias-testing disclosures, and customer notification.

F

  • Federal preemption

    The principle that federal law can override state law in the same area. A federal AI law or executive action could preempt state AI regulations, making them unenforceable. As of April 2026, no federal AI preemption has been enacted, but a House-passed provision proposes a 10-year moratorium on state AI laws.

  • Frontier AI model

    An AI model trained using more than 10^26 computational operations, the threshold set by California SB 53. Generally refers to the largest foundation models, GPT-4 class and above. Only about 5 to 8 companies currently operate at this scale.
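
    For intuition about the threshold, a back-of-envelope sketch using the common heuristic that training a dense model costs roughly 6 × parameters × training tokens operations; the model size below is hypothetical, and the statute's actual counting rules differ in detail:

    ```typescript
    // Rough check of a training run against the frontier-model compute
    // threshold. The 6 * N * D heuristic for dense transformers is a
    // community rule of thumb, not the statutory accounting method.

    const THRESHOLD_OPS = 1e26; // SB 53's frontier-model threshold

    function estimateTrainingOps(params: number, tokens: number): number {
      return 6 * params * tokens;
    }

    // Hypothetical run: 1 trillion parameters, 20 trillion training tokens.
    const ops = estimateTrainingOps(1e12, 20e12); // ≈ 1.2e26
    console.log(ops >= THRESHOLD_OPS); // true → would qualify as a frontier model
    ```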

H

  • High-risk AI system

    An AI system used to make or substantially assist in consequential decisions about people. The specific definition varies by jurisdiction, but generally includes AI used in hiring, lending, insurance, housing, and criminal justice.

I

  • Impact assessment

    A structured document evaluating the risks and benefits of deploying an AI system. Typically covers data inputs, decision outputs, affected populations, potential harms, and mitigation measures.
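
    One way to picture the typical contents is as a structured record; the field names below are illustrative, not drawn from any statute:

    ```typescript
    // Hypothetical shape for an impact-assessment record covering the
    // elements listed above. Field names are illustrative, not statutory.

    interface ImpactAssessment {
      system: string;                // AI system under review
      purpose: string;               // decision the system makes or supports
      dataInputs: string[];          // categories of data the system consumes
      decisionOutputs: string[];     // outputs that feed the decision
      affectedPopulations: string[]; // who is subject to the decision
      potentialHarms: string[];      // foreseeable adverse outcomes
      mitigations: string[];         // controls addressing each harm
      reviewedBy: string;            // accountable human reviewer
      reviewedOn: Date;              // assessments are typically refreshed periodically
    }
    ```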

M

  • Model card

    A standardized document describing an AI model's purpose, performance metrics, training data, known limitations, and intended use cases. Originated from a 2019 Google research paper and now referenced in several state laws.
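
    A hypothetical example of what such a record might contain; the fields mirror the sections described above, though statutes and the original paper vary in detail:

    ```typescript
    // Illustrative model-card record. All names and values are invented.
    const exampleModelCard = {
      model: "resume-screener-v2",                // hypothetical model name
      purpose: "Rank applicants for recruiter review",
      intendedUses: ["pre-screening"],            // in-scope uses
      outOfScopeUses: ["final hiring decisions"], // explicitly excluded uses
      trainingData: "Historical applications, 2019-2023, US only",
      metrics: { auc: 0.87, impactRatioWorstGroup: 0.84 },
      knownLimitations: [
        "Not validated on non-US resumes",
        "Performance degrades on short work histories",
      ],
    };
    console.log(JSON.stringify(exampleModelCard, null, 2));
    ```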

O

  • Opt-out mechanism

    A process allowing individuals to decline AI-driven decision-making and request human review instead. Required by several privacy laws with automated decision-making provisions.

P

  • Private right of action

    A provision allowing individuals, not just government agencies, to sue companies directly for violations. Laws with private right of action carry significantly higher risk because any affected person can file a lawsuit, often with statutory damages. Illinois BIPA, California SB 243, and Tennessee ELVIS Act include private rights of action.

  • Profiling

    Any form of automated processing that evaluates personal aspects of an individual, such as work performance, economic situation, health, preferences, interests, reliability, behavior, location, or movements.

  • Protected class

    A group of people legally protected from discrimination. Includes race, color, national origin, sex, religion, age, disability, and other characteristics. AI impact assessments must evaluate effects on all applicable protected classes.

R

  • Rebuttable presumption

    A legal principle under which compliance with specified requirements creates a presumption that the company acted with reasonable care. Under Colorado SB 24-205, following the prescribed compliance steps earns this presumption, which operates as a legal defense.

  • Right to human review

    The right of an individual affected by an AI-driven decision to have that decision reviewed by a human with authority to override the AI's output.

S

  • Safe harbor

    A legal provision that protects companies from liability if they follow specified practices. The Colorado AI Act offers a rebuttable presumption of reasonable care to companies that comply with recognized risk management frameworks such as the NIST AI RMF. Safe harbors reduce but do not eliminate legal risk.

  • Shadow AI

    AI tools used within a company without the knowledge or approval of IT, compliance, or management. Poses compliance risk because untracked AI systems may trigger regulatory obligations the company is unaware of.

  • Solely automated

    A legal distinction in how automated decision-making laws define their scope. Some laws, including Oregon's, Delaware's, and Virginia's, trigger obligations only for decisions made solely by automation with no human involvement. Others, including Colorado's, Connecticut's, and Montana's, trigger obligations for any decision where automation plays a role, even with human review. This distinction determines whether adding human review takes a system outside a law's scope.

T

  • Transparency notice

    A disclosure informing individuals about the use of AI in decisions affecting them. Content requirements vary by jurisdiction but typically include what the AI does, what data it uses, and how to opt out.

U

  • UOOM (Universal Opt-Out Mechanism)

    A technical signal, such as Global Privacy Control, that consumers can enable once in their browser to opt out of data processing across websites. Some states require companies to recognize UOOM signals for profiling opt-outs, including Colorado, Montana, Connecticut, New Jersey, Texas, and California. Others do not, including Virginia, Kentucky, and Indiana. Recognizing UOOM is a technical implementation requirement, not just a policy decision.
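
    Concretely, GPC reaches a server as the HTTP request header Sec-GPC: 1 (and is exposed in the browser as navigator.globalPrivacyControl). A minimal detection sketch, with hypothetical wiring around the header check:

    ```typescript
    // Minimal sketch of honoring the Global Privacy Control (GPC) signal
    // server-side. Per the GPC spec, an opted-out user agent sends the
    // request header "Sec-GPC: 1". Header names are assumed lowercased,
    // as Node.js does.

    function gpcOptOutRequested(headers: Record<string, string | undefined>): boolean {
      return headers["sec-gpc"] === "1";
    }

    // Hypothetical handler: suppress profiling when the signal is present.
    function handleRequest(headers: Record<string, string | undefined>) {
      const profiling = !gpcOptOutRequested(headers);
      // Where state law requires it, the opt-out must actually propagate to
      // downstream processing, not just be recorded.
      return { profiling };
    }

    console.log(handleRequest({ "sec-gpc": "1" })); // { profiling: false }
    ```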

Find which regulations apply to you: start a free scan (free, no account required) or sign up for monthly regulatory updates.