AI Regulations Tracker
Every AI and automated decision-making regulation XIRA tracks, organized by state. Every entry verified against enrolled statute text.
Regulation catalog
Database snapshot. 99 regulations, 24 jurisdictions.
Colorado
(2 tracked)
- Colorado: Upcoming
Colorado AI Act (SB 24-205)
Colorado's AI Act may apply to companies doing business in Colorado that develop or use high-risk AI systems for consequential decisions (employment, lending, insurance, housing, education, healthcare, legal services, government services). Where applicable, the law calls for reasonable care against algorithmic discrimination. Deployers may need to complete impact assessments, implement risk management programs, and notify consumers before and after adverse AI decisions. Developers may need to supply documentation to deployers.
- Colorado: In Effect
Colorado Privacy Act (CPA profiling and ADM)
Colorado profiling rules define three tiers: Solely Automated Processing, Human Reviewed Automated Processing, and Human Involved Automated Processing. Consumer opt-out applies to tiers 1 and 2. Tier 3 gets enhanced disclosure instead of opt-out. Core profiling provisions effective July 1, 2023. Universal opt-out mechanism compliance required since July 1, 2024. The 60-day cure period for CPA violations expired January 1, 2025, so the Attorney General may pursue enforcement without a cure window (distinct from the Colorado AI Act cure rule). HB 24-1130 added biometric consent requirements effective July 1, 2025. SB 24-041 added minor protections effective October 1, 2025.
California
(22 tracked)
- California: Upcoming
CCPA/CPRA Automated Decision-Making Technology Regulations
California's ADMT regulations require businesses using automated decisionmaking technology for significant decisions (employment, finance, housing, education, healthcare) to provide pre-use notices, offer opt-out rights, respond to access requests, and conduct risk assessments with annual CPPA filing under penalty of perjury.
- California: In Effect
California Transparency in Frontier AI Act (SB 53)
Requires developers of frontier AI models trained above the statutory compute threshold (10^26 FLOPs) to publish safety frameworks, report critical safety incidents to the Office of Emergency Services, and implement whistleblower protections. Also reaches large frontier developers with annual revenues over $500 million. Replaces the vetoed SB 1047 with a narrower transparency approach. Currently applies to approximately five to eight companies worldwide given the FLOP threshold. Includes a deference provision: where the statute allows, compliance with comparable standards from other regimes (federal frameworks, or international ones such as the EU AI Act) may be accepted.
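The compute threshold above is a simple numeric gate. A minimal sketch of how a tracker might screen models against it; the threshold value comes from the entry above, while the function and variable names are illustrative, not statutory terms:

```python
# Statutory compute threshold quoted above (10^26 FLOPs); names are hypothetical.
SB53_FLOP_THRESHOLD = 1e26

def exceeds_sb53_threshold(training_flops: float) -> bool:
    """True if training compute is above the threshold ("trained above")."""
    return training_flops > SB53_FLOP_THRESHOLD

print(exceeds_sb53_threshold(3.8e25))  # False: below the gate
print(exceeds_sb53_threshold(2.1e26))  # True: above the gate
```

Whether "above" means strictly greater than or at-or-above the threshold should be confirmed against the enrolled text; the sketch assumes strictly greater.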
- California: In Effect
California FEHA regulations on automated decision systems (Civil Rights Council)
California Civil Rights Council regulations apply FEHA's anti-discrimination framework to automated decision systems (ADS) in employment. Defines ADS broadly to include AI, ML, and algorithmic tools. Makes anti-bias testing evidence relevant to discrimination claims and defenses. Requires reasonable accommodation when ADS disadvantages disabled or religious individuals. Prohibits pre-offer medical inquiries via ADS. Employers with 5+ employees are covered. 4-year record retention required.
- California: In Effect
California Companion Chatbots Act (SB 243)
California's Companion Chatbot Act may apply to operators of AI chatbots designed for ongoing social interaction. Where applicable, operators may need to disclose the AI nature of the chatbot, maintain safety protocols for self-harm and suicide content, provide crisis referrals, and implement special protections for minors including break reminders and content restrictions. Operators may need to publish safety protocols and file annual reports with the Office of Suicide Prevention starting July 2027.
- California: In Effect
California AI-Generated Child Sexual Abuse Material (AB 1831)
Expands child pornography laws to include content digitally altered or generated by AI. Criminal prohibition.
- California: In Effect
California Deepfake Pornography Expansion (AB 621)
Expands civil remedies for non-consensual deepfake pornography. Broadens definitions, adds liability for deepfake pornography service operators, and provides up to $250,000 for malicious violations. Minors cannot consent to creation or distribution.
Illinois
(8 tracked)
- Illinois: In Effect
Illinois Biometric Information Privacy Act (BIPA)
Illinois BIPA may require written consent before collecting fingerprints, facial scans, voiceprints, iris scans, or hand geometry. Where applicable, companies may need to publish a retention/destruction policy, provide written notice, obtain written releases, and may be barred from selling or profiting from biometric data. Any aggrieved person can sue for $1,000 to $5,000 per violation without proving harm.
- Illinois: In Effect
Illinois AI-Generated Child Sexual Abuse Material (HB 4623)
Clarifies that Illinois child pornography laws encompass AI-generated images of minors in sexual acts. AI-generated CSAM treated identically to non-AI CSAM under existing criminal statutes.
- Illinois: In Effect
Illinois Human Rights Act (HB 3773 AI amendment)
Illinois HB 3773 amends the Illinois Human Rights Act to prohibit employers from using AI that has the effect of subjecting employees to discrimination on the basis of protected classes, including using zip codes as proxies. Where applicable, employers may need to notify employees and applicants when AI is used in employment decisions. IDHR draft implementing rules circulated December 2025. No safe harbors or affirmative defenses.
- Illinois: In Effect
Illinois Right of Publicity Act, Digital Replica Amendment (HB 4875)
Prohibits unauthorized AI-generated digital replicas of individual voices, images, and likenesses. Holds liable anyone who distributes, transmits, or materially contributes to violations. Not contingent on commercial purpose.
- Illinois: In Effect
Illinois HB 1806 / WOPRA - AI in Mental Health Therapy
Restricts AI use in mental health therapy contexts under Illinois WOPRA and related professional standards. Targets AI chatbot platforms marketed as mental health tools. Verification of enacted text details (including signed status, final effective date, and penalty structure) remains pending against the enrolled primary source.
- Illinois: In Effect
Illinois Digital Forgeries Act (HB 2123)
Extends nonconsensual intimate image protections to AI-generated deepfakes. Provides civil remedies including statutory and punitive damages for victims of sexually altered digital images.
Arkansas
(3 tracked)
- Arkansas: In Effect
Arkansas AI-Generated Child Exploitation (HB 1877)
Criminal prohibition on creating, possessing, or distributing AI-generated imagery indistinguishable from real minors in sexual situations.
- Arkansas: In Effect
Arkansas Nonconsensual Synthetic Intimate Content (HB 1529)
Criminal penalties for distributing synthetic intimate content without consent, including AI-generated deepfakes.
- Arkansas: In Effect
Arkansas Generative AI Content Ownership Act (HB 1876, Act 927)
First-of-its-kind AI content ownership law. Establishes default rules: prompt/data providers own resulting content if input is legally obtained and ownership not transferred by contract. Employer owns content when employee uses AI within scope of employment under employer's direction. Ownership does not extend to content violating pre-existing IP rights. Federal copyright preemption risk: US Copyright Office holds AI-generated works without meaningful human authorship are not copyrightable. Contractual arrangements can override the default ownership rules.
Connecticut
(2 tracked)
- Connecticut: Upcoming
Connecticut Public Act 25-113 (SB 1295) CTDPA and profiling amendments
Connecticut's omnibus bill dramatically expands the CTDPA. Lowers the applicability threshold from 100,000 to 35,000 consumers. Adds LLM training data disclosure (first in nation). Creates a new profiling impact assessment requirement separate from existing DPIAs. Adds consumer rights to question, explain, review, and correct profiling decisions. Establishes a voluntary bias auditing safe harbor. Expands sensitive data to include neural and financial data. Adds minors' data sale prohibition. Cure period expired December 31, 2024.
- Connecticut: In Effect
Connecticut Government AI Procurement and Oversight (SB 1103)
First-in-nation state government AI procurement law. Requires state agencies to inventory AI systems, conduct impact assessments, prohibit discriminatory AI, and publicly post inventories. Applies to state agencies and contractors, not private sector employers.
Indiana
(2 tracked)
- Indiana: In Effect
Indiana Consumer Data Protection Act - Profiling Provisions
Grants Indiana consumers the right to opt out of profiling for decisions with legal or significant effects. Applies to entities that control or process personal data of 100,000+ Indiana consumers, or 25,000+ consumers with 50%+ revenue from data sales. Permanent 30-day cure period (no sunset). No private right of action. One of the most business-friendly state privacy laws.
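Applicability turns on two alternative numeric tests. A minimal sketch of the check, assuming the thresholds as stated above; the function and parameter names are illustrative, not statutory terms:

```python
# Hypothetical applicability check mirroring the two Indiana tiers above.
def icdpa_applies(consumers: int, sale_revenue_share: float) -> bool:
    """True if either tier is met: 100,000+ Indiana consumers, or
    25,000+ consumers with 50%+ of revenue derived from data sales."""
    return consumers >= 100_000 or (
        consumers >= 25_000 and sale_revenue_share >= 0.5
    )

print(icdpa_applies(120_000, 0.0))  # True  (first tier)
print(icdpa_applies(30_000, 0.6))   # True  (second tier)
print(icdpa_applies(30_000, 0.1))   # False (neither tier)
```

Whether a given entity is actually covered also depends on statutory exemptions not modeled here.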
- Indiana: In Effect
Indiana Election Deepfake Disclosure (HB 1133)
Requires disclosure when AI-generated synthetic media is used in political campaign communications in Indiana.
Iowa
(2 tracked)
- Iowa: In Effect
Iowa AI-Generated Child Exploitation (SF 2243)
Treats AI-generated depictions of child exploitation equivalently to real CSAM under Iowa criminal code.
- Iowa: In Effect
Iowa Nonconsensual Synthetic Intimate Content (HF 2240)
Criminal penalties for distributing nonconsensual synthetic intimate content including AI-generated deepfakes.
Maryland
(5 tracked)
- Maryland: In Effect
Maryland Online Data Privacy Act (MODPA), ADM and profiling provisions
Statute effective October 1, 2025, but enforcement does not begin until April 1, 2026. The law does not apply to processing activities before April 1, 2026. Considered one of the strongest state privacy laws due to strict data minimization and a complete ban on selling sensitive data (not only opt-in consent). The threshold of 35,000 consumers is lower than most states. Controllers must handle profiling and automated decision-making with strong consumer protections, including documented risk assessments and opt-out rights. Impact assessments required per algorithm used in high-risk processing. Nonprofits are largely included. Universal opt-out signals required from day one. 60-day cure period with no sunset date in the current statute.
- Maryland: In Effect
Maryland Healthcare AI Utilization Review (HB 820)
May apply to AI tools used in healthcare coverage decisions, calling for determinations based on individual patient data rather than group datasets. Where applicable, final utilization review decisions may need to be made by a physician in the same specialty, and carriers may need to report whether AI was used in adverse decisions. Does not ban AI in healthcare; it conditions how AI may be used.
- Maryland: In Effect
Maryland Nonconsensual Pornography Deepfake Expansion (SB 360)
Expands Maryland's revenge porn statute to cover AI-generated and computer-generated sexual imagery. Strengthens civil remedies for victims of synthetic intimate images.
- Maryland: In Effect
Maryland HB 1202 (Facial Recognition in Hiring)
Prohibits creating facial templates of job applicants during interviews without signed consent. Where applicable, the waiver may need to include the applicant's name, interview date, consent to facial recognition use, and whether the applicant read the waiver. Scope is narrower than Illinois BIPA: it only covers facial recognition during interviews, not biometric data collection generally.
- Maryland: In Effect
Maryland AI Governance Act of 2024 (SB 818)
Requires Maryland state agencies to inventory AI systems, conduct impact assessments, and follow DoIT policies for AI procurement and use. Applies to state government agencies only, not the private sector.
Minnesota
(2 tracked)
- Minnesota: In Effect
Minnesota Consumer Data Privacy Act - ADM and profiling provisions
Minnesota is the first state privacy law to require controllers to create and maintain a data inventory. The right to question and challenge profiling decisions applies to any decisions with legal or significant effects (broader than the opt-out right, which only applies to automated decisions). Small business exemption (SBA-defined). Separate from this privacy act, Minnesota HF 1370 (election and NCII deepfake criminal and civil provisions) faces an active First Amendment challenge from X (formerly Twitter) over its election deepfake rules; that litigation does not govern CDPA compliance but is relevant context for Minnesota AI and elections risk.
- Minnesota: In Effect
Minnesota Election and NCII Deepfake Law (HF 1370)
Criminalizes election deepfakes within 90 days before elections (with no disclosure exception, unlike most states and one of the strictest rules in the country) and nonconsensual deepfake intimate imagery with escalating felony penalties. Facing a First Amendment challenge from X (formerly Twitter); early rulings suggest courts are skeptical, and the pending challenge may render the election provisions unenforceable. Separate from the Minnesota Consumer Data Privacy Act (MCDPA). The effective date shown as January 1, 2023 is a placeholder; the exact effective date should be verified against Session Law Chapter 58 (2023).
Montana
(2 tracked)
- Montana: In Effect
Montana Consumer Data Privacy Act - Profiling Provisions
Grants Montana consumers the right to opt out of profiling for decisions with legal or significant effects. 2025 amendments expanded profiling opt-out scope from solely automated decisions to all automated decisions (removing the solely qualifier). Applicability thresholds lowered: now 25,000 consumers (was 50,000) or 15,000 consumers if 25%+ revenue from data sales (was 25,000).
- Montana: In Effect
Montana Right to Compute Act (SB 212)
Requires deployers of critical infrastructure facilities controlled by AI to develop a risk management policy based on recognized national and international standards (such as NIST AI RMF). The original mandatory shutdown capability was removed by amendment before enactment. Establishes a fundamental right to privately own and use computational resources for lawful purposes under strict scrutiny standard (any government restriction must be narrowly tailored to a compelling interest). Applies to 22 categories of critical infrastructure defined in Montana statutes.
New Hampshire
(1 tracked)
New Jersey
(2 tracked)
- New Jersey: In Effect
New Jersey Data Privacy Act (Profiling Provisions)
Uniquely covers nonprofits with no revenue threshold. The universal opt-out mechanism (UOOM) requirement took effect July 15, 2025 (18 months after enactment) and, uniquely among state privacy laws, extends to profiling decisions. Proposed rules would require consumer consent before personal data is used to train AI models.
- New Jersey: In Effect
New Jersey Deepfake Penalties (S2544)
Establishes civil and criminal penalties for creating and distributing deepfakes, including AI-manipulated images and videos, with intent to deceive, harm, or defraud.
New York
(6 tracked)
- New York: Upcoming
New York Responsible AI Safety and Education Act (RAISE Act, S6953B/A6453B)
New York's RAISE Act regulates frontier AI model developers. Requires publication of a frontier AI framework, quarterly confidential risk assessment filings with the DFS Office, transparency reports, 72-hour incident reporting, and registration with the DFS Office of Frontier AI Model Developer Transparency and Reporting. Applies to developers with $500M+ revenue operating models above 10^26 FLOP. Chapter amendment (S8828) signed March 2026 is the final version.
- New York: In Effect
New York AI Companion Models Law (A3008, Article 47)
Requires AI companion operators to disclose AI nature, provide reminders every 3 hours of use, and implement protocols to detect suicidal ideation or self-harm and refer users to crisis services. Carries the highest per-day penalties of any US AI law at $15,000 per day. Penalties fund the NY suicide prevention fund. Unlike California SB 243, the New York law does not include a private right of action.
- New York: In Effect
New York Personalized Algorithmic Pricing Disclosure (S 3008, 2025)
Requires businesses to disclose when personalized pricing is set by an algorithm using personal data, so consumers know their price may differ from what others see. Signed May 9, 2025 as part of omnibus budget bill S.3008, with a statutory date of July 8, 2025. The AG paused enforcement during NRF v. James (a First Amendment challenge); the court dismissed the challenge October 8, 2025, and enforcement began November 10, 2025. Required disclosure: THIS PRICE WAS SET BY AN ALGORITHM USING YOUR PERSONAL DATA. Companies that collect protected-class data may not combine it with algorithmic pricing to charge different people different prices (an anti-discrimination provision beyond disclosure). AG-only enforcement, with an active enforcement posture (consumer alert issued). Exemptions: regulated financial institutions, state insurance entities, and certain subscription contract prices.
- New York: In Effect
New York Deceased Performer Digital Replicas (SB 8391)
Amends the Right of Publicity law to protect digital replicas of deceased performers. Provides civil remedies for unauthorized AI-generated replicas of deceased individuals for commercial purposes.
- New York: In Effect
New York Digital Replica Contract Protections (S7676B)
Establishes protections for individuals regarding the use of digital replicas in professional contracts. Requires specific consent provisions for digital replica use in employment and performance contracts.
- New York: Upcoming
New York Synthetic Performers Disclosure (SB 8420A)
Requires conspicuous disclosure in advertisements when AI-generated synthetic performers are used. A synthetic performer is a digitally created asset generated using AI intended to create the impression of a human performer not recognizable as any real person. Exempt: advertisements for expressive works (movies, TV, streaming, games), audio-only ads, and AI language translation.
Oregon
(3 tracked)
- Oregon: In Effect
Oregon Consumer Privacy Act - Profiling Provisions
Grants Oregon consumers the right to opt out of profiling for decisions with legal or significant effects. Opt-out limited to profiling in furtherance of solely automated decisions (narrower than Colorado, Connecticut, and Montana, which have removed the solely qualifier). 2025 amendments (HB 2008, HB 3875) prohibit the sale of personal data of consumers under 16 and of precise geolocation data (effective October 1, 2025). The Oregon CPA's 30-day cure window was not permanent: it ended January 1, 2026, after which the Attorney General is no longer required to provide a cure notice before enforcement.
- Oregon: In Effect
Oregon Synthetic Intimate Imagery (HB 2299)
Criminal penalties for creating or distributing AI-generated nonconsensual intimate imagery in Oregon. Expands intimate-image offenses to cover realistic synthetic depictions.
- Oregon: In Effect
Oregon Election Deepfake Disclosure (SB 1571)
Requires disclosure statement on political communications containing synthetic media in Oregon elections.
Tennessee
(2 tracked)
- Tennessee: In Effect
Tennessee ELVIS Act (Ensuring Likeness, Voice, and Image Security)
First enacted US legislation specifically designed to protect musicians from unauthorized AI voice synthesis. Covers any individual's voice, image, or likeness. Uniquely targets tool providers (developers of synthesis tools), not just end users. Expands Tennessee's right of publicity to cover unauthorized synthetic replicas and includes a fair use exemption that specifically incorporates voice replicas.
- Tennessee: In Effect
Tennessee Information Protection Act - Profiling Provisions
Grants Tennessee consumers the right to opt out of profiling for decisions with legal or significant effects. First state to provide a NIST affirmative defense: controllers and processors that create, maintain, and comply with a written privacy program that reasonably conforms to the NIST Privacy Framework may assert an affirmative defense. This materially reduces compliance risk for NIST-aligned organizations. Applies to businesses with annual revenue exceeding $25 million that also meet consumer data volume thresholds (175,000 consumers, or 25,000 consumers with 50%+ revenue from data sales). Among the highest applicability thresholds of any state privacy law.
Texas
(6 tracked)
- Texas: In Effect
Texas TRAIGA (Responsible Artificial Intelligence Governance Act, HB 149)
Texas TRAIGA (HB 149) prohibits AI systems from intentionally manipulating behavior to cause harm, infringing constitutional rights, or discriminating against protected classes. Where applicable, government agencies may need to disclose AI interactions. Updates biometric consent for AI training data. Creates a regulatory sandbox program. AG exclusive enforcement with 60-day cure period. Intent-based liability standard (no disparate impact).
- Texas: In Effect
Texas TRAIGA Biometric and AI Training Amendments (HB 149, 89th Legislature)
Amends the Texas Capture or Use of Biometric Identifier Act (CUBI) and related Business and Commerce Code provisions for biometric data used with AI. Relaxes CUBI for AI training with a carveout for publicly available data and adds anti-scraping consent requirements for biometric identifiers. Enforced under the same HB 149 TRAIGA framework as the core act: intent-based liability, 60-day cure, preemption of local AI rules, and the statutory penalties and safe harbors that apply to TRAIGA generally.
- Texas: In Effect
Texas Nonconsensual Intimate Deepfakes (SB 441)
Criminalizes creating and distributing nonconsensual intimate deepfakes. Creates civil liability for victims. Platforms must take down reported content within 72 hours. Consent to create an image does not constitute consent to share it.
- Texas: In Effect
Texas Data Privacy and Security Act, Profiling Provisions (HB 4)
Texas comprehensive privacy law with profiling provisions. Requires data protection assessments for profiling that presents a risk of harm. Consumer opt-out for profiling producing legal or similarly significant effects, targeted advertising, and sale of personal data. Universal opt-out mechanism required for covered profiling opt-outs. Broad applicability: no revenue or data volume thresholds (unlike many state privacy laws). Small businesses as defined by the SBA are exempt. The 30-day cure period is permanent with no sunset. Profiling opt-out applies to decisions with legal or similarly significant effects, not all profiling.
- Texas: In Effect
Texas SB 1188 - Healthcare AI Practitioner Disclosure
Requires healthcare providers using AI-enabled clinical support features in electronic health record workflows to disclose AI involvement in clinical decision support contexts. Applies to AI-assisted diagnosis, treatment recommendations, and clinical support pathways in covered settings.
- Texas: In Effect
Texas Government AI Ethics and Oversight (SB 1964)
Requires Texas state agencies and local governments to inventory AI systems, adopt an AI code of ethics aligned with NIST AI RMF, conduct impact assessments for AI that autonomously influences consequential decisions, and disclose AI use to affected individuals. Applies to government entities only, not the private sector.
Utah
(5 tracked)
- Utah: In Effect
Utah AI Mental Health Chatbot Regulation (HB 452)
Regulates AI-powered mental health chatbots. Requires clear disclosure that the service is not a human clinician, limits certain advertising during therapeutic-style conversations, and restricts sharing identifiable health information. Specific disclosure timing: before user can access the chatbot, after 7 days without use, and whenever asked by the user. Health data restrictions: businesses cannot share or sell individually identifiable health information or user input with third parties, except as necessary for chatbot function or to health providers with user consent under HIPAA. Advertising restrictions: ads delivered through chatbot must be disclosed, and no user input can be used to decide whether to advertise or to customize ads. Affirmative defense requires actual policy filing with the Division of Consumer Protection, not only having a policy on file internally.
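The three disclosure timing rules above (before first access, after 7 days without use, whenever asked) are amenable to a simple trigger check. A minimal sketch under those assumptions; the function, parameter names, and session model are hypothetical, not statutory terms:

```python
from datetime import datetime, timedelta
from typing import Optional

# Gap after which disclosure must be repeated, per the entry above.
REINTRODUCTION_GAP = timedelta(days=7)

def disclosure_required(first_access: bool,
                        last_use: Optional[datetime],
                        now: datetime,
                        user_asked: bool) -> bool:
    if first_access:   # before the user can access the chatbot
        return True
    if user_asked:     # whenever asked by the user
        return True
    if last_use is not None and now - last_use >= REINTRODUCTION_GAP:
        return True    # after 7 days without use
    return False

now = datetime(2026, 3, 1)
print(disclosure_required(False, datetime(2026, 2, 10), now, False))  # True: 19-day gap
```

Real compliance logic would also need to handle "throughout the interaction" repetition and edge cases the statute defines; this only captures the three triggers quoted above.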
- Utah: In Effect
Utah Artificial Intelligence Policy Act (SB 149)
SB 332 extended the act's sunset from May 7, 2025 to July 1, 2027. SB 226 narrowed (rather than expanded) disclosure requirements: general consumer transactions now require disclosure only upon clear and unambiguous request, while regulated occupations require proactive disclosure for high-risk artificial intelligence interactions involving sensitive data or significant decisions. Safe harbor: if the AI system itself clearly and conspicuously discloses at the outset and throughout the interaction that it is nonhuman or AI, the entity is not subject to enforcement action. See Utah AI Policy Act Amendments (SB 226 / SB 332) for the 2025 amendments.
- Utah: In Effect
Utah AI Policy Act Amendments (SB 226 / SB 332)
SB 226 narrowed UAIPA disclosure: general consumer contexts require disclosure only on a clear and unambiguous request; regulated occupations still require proactive disclosure for high-risk artificial intelligence interactions (sensitive data and significant decisions). SB 332 extended the act's sunset from May 7, 2025 to July 1, 2027. Safe harbor unchanged: no enforcement if generative AI clearly and conspicuously discloses it is nonhuman at the outset and throughout the interaction. Applies together with the Utah Artificial Intelligence Policy Act (SB 149).
- Utah: In Effect
Utah Unauthorized AI Impersonation (SB 271)
Expands Utah's abuse of personal identity law to cover AI-generated deepfakes and digital replicas used for commercial purposes without consent. Prohibits distributing software primarily designed for unauthorized commercial impersonation. Covers AI-generated simulations of voice, video likeness, and audiovisual appearance. Not limited to deepfakes: it covers commercial misuse of personal identity including non-AI methods. First Amendment exemptions for newsworthiness, artistic expression, and parody. The software distribution prohibition targets nudification apps and similar tools.
- Utah: In Effect
Utah Consumer Privacy Act, Profiling Provisions (SB 227)
Utah's comprehensive privacy law. It is the least restrictive state privacy law regarding profiling and ADM among comparable statutes: it includes opt-out for targeted advertising and sale of personal data, but does not include a general profiling opt-out or ADM impact assessment requirement. No universal opt-out mechanism requirement. Trackers sometimes incorrectly list Utah as having ADM provisions similar to other states.
Virginia
(2 tracked)
- Virginia: In Effect
Virginia Consumer Data Protection Act (VCDPA, Profiling Provisions)
Grants Virginia consumers the right to opt out of profiling in furtherance of decisions that produce legal or significant effects. The VCDPA excludes employee and HR data and B2B data from scope; the profiling opt-out applies to consumer data only. Permanent 30-day cure period with no sunset: unlike most states, Virginia does not phase out the cure opportunity. Virginia does not require universal opt-out mechanisms (UOOMs) and does not grant the Attorney General rulemaking authority. In March 2025, Governor Youngkin vetoed HB 2094, a comprehensive AI act that would have materially expanded the privacy and AI framework. The VCDPA is among the most business-friendly state privacy laws: permanent cure, no UOOMs, no AG rulemaking, and a high applicability bar (50% of revenue from data sales for the 25,000-consumer tier).
- Virginia: In Effect
Virginia Nonconsensual Pornography (Deepfake Coverage)
Virginia's nonconsensual pornography statute criminalizes dissemination of intimate images created by any means whatsoever, which courts interpret to include AI-generated deepfakes. Enhanced penalties for repeat offenses and content involving minors.
Washington
(7 tracked)
- Washington: Upcoming
Washington SB 5395 (AI in Health Insurance Prior Authorization)
Enacted as Chapter 157, Laws of 2026; Governor signed March 23, 2026; effective June 11, 2026. AI tools may be used to approve prior authorization requests but may not deny care without human review by a licensed physician or health professional. Where applicable, managed care organizations may need to report the percentage of total denials aided by AI. Periodic performance reviews of AI tools may be required for accuracy and reliability.
- Washington: In Effect
Washington My Health My Data Act
Broad health data privacy law covering health data collected outside HIPAA, including data from health-related AI tools, wearables, and wellness apps. Defines consumer health data extremely broadly to include data not typically considered health-related: biometric data, bodily function data, and inferences derived from non-health data. Applies to Washington residents and any person whose health data is collected in Washington (potential extraterritorial reach). Much broader than reproductive health alone: it covers all consumer health data outside HIPAA. Geofencing ban around healthcare facilities effective July 2023. Regulated entities: compliance from March 31, 2024; small businesses from June 30, 2024. Multiple lawsuits filed, establishing early case law.
- Washington: Upcoming
Washington AI Chatbot Safety for Minors (HB 2225)
First-in-nation law requiring AI chatbot operators to disclose AI nature at regular intervals (every 3 hours for adults, every hour for minors) and implement safety measures to protect minors from manipulation, explicit content, and emotional exploitation. Includes self-harm and crisis protocols. Targets conversational AI engagement patterns specifically.
- Washington: Upcoming
Washington AI Content Disclosure Act (HB 1170)
AI content provenance and watermarking requirements for providers with 1 million or more monthly Washington users. Requires latent disclosures (watermarks) in AI-generated image, video, and audio content. Implementation period extended to January 1, 2028. Closely aligned with California SB 942 (the California AI Transparency Act). AG-exclusive enforcement under the Consumer Protection Act; no private right of action.
- Washington: In Effect
Washington Fabricated Intimate Images (2024)
Criminalizes creation and distribution of AI-generated intimate images without consent. Provides civil remedies for victims.
- Washington: In Effect
Washington Election Deepfake Disclosure (SB 6280)
Requires clear and conspicuous disclosure when AI-generated or AI-manipulated media is used in political advertising or communications. One of the first state laws specifically targeting deepfakes in elections.
Federal
(10 tracked)
- Federal (EEOC): In Effect
EEOC Guidance on AI in Employment Selection
EEOC technical assistance documents explain how existing Title VII and ADA obligations apply to AI and algorithmic employment tools. Not binding regulation, but signals enforcement priorities. Employers are liable for adverse impact from AI tools even when tools are designed by third-party vendors. Requires adverse impact analysis per UGESP four-fifths rule. ADA prohibits AI tools that screen out individuals with disabilities or make pre-offer disability inquiries.
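The four-fifths rule referenced above is a simple ratio test: a group's selection rate below 80% of the highest group's rate is evidence of adverse impact. A minimal sketch of that screen; group labels and function names are illustrative:

```python
# Adverse impact screen per the UGESP four-fifths rule described above.
def selection_rates(outcomes: dict) -> dict:
    """outcomes maps group -> (selected, applicants); returns selection rates."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_flags(outcomes: dict) -> dict:
    """Flag groups whose selection rate is below 80% of the highest group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: (r / best) < 0.8 for g, r in rates.items()}

data = {"group_a": (48, 80), "group_b": (12, 40)}  # rates: 0.60 vs 0.30
print(four_fifths_flags(data))  # {'group_a': False, 'group_b': True}
```

The four-fifths rule is a rule of thumb for enforcement attention, not a legal safe harbor; statistical significance tests can still show adverse impact when the ratio passes.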
- Federal: Upcoming
TAKE IT DOWN Act (S. 146)
Requires covered online platforms to remove reported nonconsensual intimate imagery, including AI-generated deepfakes, within a short deadline after a valid notice. Dual effective dates: criminal provisions effective May 19, 2025 (the date signed into law); platform compliance deadline May 19, 2026 (one year after signing). Covers both authentic NCII and AI-generated deepfakes. Does not preempt state laws. FTC jurisdiction extended to nonprofit entities. The first federal law limiting harmful uses of AI against individuals, and the only enacted federal AI-specific law signed by the Trump administration, passed on a bipartisan 409-2 House vote with unanimous Senate passage.
- Federal (FTC): In Effect
FTC Enforcement Policy on AI and Algorithmic Fairness
FTC enforces Section 5 of the FTC Act against deceptive and unfair AI practices. Key areas: unsubstantiated AI marketing claims, AI products harmful to children, discriminatory AI outcomes, and deceptive AI-powered services. Operation AI Comply (September 2024) targeted five companies simultaneously. Algorithmic disgorgement remedy requires deletion of AI models trained on improperly collected data. Administration change in 2025 narrowed speculative risk enforcement but maintained fraud and misrepresentation focus.
- Federal (DOJ): In Effect
DOJ AI Litigation Task Force
Coordinates federal civil litigation strategy on AI-related matters across the Department of Justice. The Task Force is authorized to file lawsuits challenging state laws, but as of April 2026 has not filed any. Executive orders cannot preempt state law; only Congress or the courts can. Congress rejected federal preemption twice: a 99-1 Senate vote in July 2025, and preemption language dropped from the NDAA in December 2025.
- Federal (FDA): In Effect
FDA AI/ML Medical Device Framework
FDA requires pre-market review (510(k), De Novo, PMA) for AI/ML-based software that meets the definition of a medical device. Over 1,000 AI/ML-enabled devices authorized as of 2025. Includes a predetermined change control plan for adaptive AI/ML devices. The most mature federal AI regulatory framework: sector-specific and in operation for years.
- Federal (HUD): In Effect
HUD AI Guidance in Housing
Fair Housing Act disparate impact standard applies to AI-driven tenant screening, lending algorithms, and property valuations. HUD 2023 disparate impact rule (reinstated) allows challenges to facially neutral AI practices with discriminatory effects. Meta 2022 settlement over AI ad targeting in housing is a key precedent. Disparate impact rule status under Trump administration should be monitored.
Numbers reflect the live catalog and site configuration.