NIST AI Risk Management Framework (AI RMF 1.0)

Federal · For all AI companies · Medium severity · In effect

Published January 26, 2023

Overview

The NIST AI RMF is a voluntary framework that regulators and lawmakers use as a practical benchmark. Following it can support defensible controls under multiple state AI laws.

This is voluntary federal guidance, not binding enforcement.

Who this applies to

This framework applies to both companies that build AI products and companies that use AI tools from other vendors.

AI categories covered

  • Employment and hiring
  • Consumer-facing AI
  • Healthcare AI
  • Financial services AI
  • Insurance

Specific AI use cases:

  • Resume screening and ranking
  • Credit scoring and risk assessment
  • Insurance underwriting

What this requires you to do

  • Risk management program required

    Implement a risk management program. Maintain ongoing processes to identify, assess, and mitigate AI-related risks.

  • Impact assessment required

    Complete an impact assessment. Document the potential risks and effects of your AI system on affected people.

  • Bias testing required

    Perform bias testing. Test your AI systems for discriminatory impact across protected classes.

  • Record-keeping required

    Maintain records. Keep documentation of your AI systems, decisions made, and compliance activities.

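As a concrete illustration of the bias-testing requirement above, the sketch below computes the adverse impact ratio (the "four-fifths rule" heuristic commonly used in employment-selection analysis). The group names, data, and 0.8 threshold are illustrative assumptions; the AI RMF does not prescribe a specific test or cutoff.

```python
# Hedged sketch: adverse/disparate impact ratio for a hiring-style
# AI system. Data and group labels are illustrative, not from the RMF.

def selection_rate(outcomes):
    """Fraction of positive outcomes (e.g., resumes advanced)."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def disparate_impact_ratios(outcomes_by_group, reference_group):
    """Ratio of each group's selection rate to the reference group's.

    A ratio below 0.8 is a common (not legally definitive) flag
    for potential adverse impact.
    """
    ref_rate = selection_rate(outcomes_by_group[reference_group])
    return {
        group: (selection_rate(outcomes) / ref_rate if ref_rate else float("nan"))
        for group, outcomes in outcomes_by_group.items()
    }

# Illustrative data: 1 = advanced by the screening model, 0 = rejected
results = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],  # 6/8 = 0.75 selection rate
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 = 0.375 selection rate
}
ratios = disparate_impact_ratios(results, reference_group="group_a")
flagged = {g: r for g, r in ratios.items() if r < 0.8}  # flags group_b
```

In practice the same calculation would be run per protected class and per decision point, with results feeding the impact assessment and record-keeping steps.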
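The record-keeping requirement can be satisfied in many ways; one minimal pattern is an append-only log of AI-assisted decisions. The field names and JSON-lines format below are assumptions for illustration; the AI RMF does not mandate a schema.

```python
# Hedged sketch: a minimal append-only decision record supporting the
# record-keeping requirement. Schema is illustrative, not prescribed.
import datetime
import json

def make_decision_record(system_id, input_summary, output, model_version):
    """Build one auditable record of an AI-assisted decision."""
    return {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,
        "input_summary": input_summary,  # summarize; avoid raw personal data
        "output": output,
    }

def append_record(path, record):
    """Append the record as one JSON line (a simple audit trail)."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

record = make_decision_record(
    system_id="resume-screener-v2",       # hypothetical system name
    input_summary="applicant 123, role: engineer",
    output="advance",
    model_version="2024.06",
)
```

Keeping records append-only and timestamped makes it straightforward to reconstruct what the system decided and which model version was in use at the time.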
Enforcement and penalties

The NIST AI RMF is voluntary and carries no penalties of its own. Alignment may support an affirmative defense or safe-harbor style argument under state AI frameworks that reference NIST practices.

Source

Read the full text

https://www.nist.gov/itl/ai-risk-management-framework

Always verify current language and amendments at the official source.
