NIST AI Risk Management Framework 1.0

Publisher: National Institute of Standards and Technology
Version: 1.0

A voluntary framework for managing risks throughout the AI lifecycle. Organized around four core functions: Govern, Map, Measure, and Manage. Includes the Generative AI Profile (NIST AI 600-1), which covers risks specific to generative AI and foundation models.

GV: GOVERN — Policies, Accountability & Culture (domains: govern)
    The GOVERN function establishes and maintains the organizational structures, policies, processes, and culture necessa...

GV-1: Policies, processes, and practices (domains: govern)
    AI policies, processes, procedures, and practices are in place across the organization to map, measure, manage, and g...

GV-2: Accountability structures (domains: govern)
    Roles, responsibilities, and lines of communication related to AI risk management are established with clear accounta...

GV-3: Diversity, equity, inclusion, and accessibility (domains: govern)
    Organizational teams building, deploying, and using AI systems reflect diversity, and are proactive in addressing har...

GV-4: Organizational culture and commitment (domains: govern)
    Organizational culture and leadership foster responsible stewardship of trustworthy AI aligned with societal values. ...

GV-5: Stakeholder engagement (domains: govern)
    Organizational practices are in place to enable AI deployment and ongoing use with input from affected communities an...

GV-6: Supply chain risk management (domains: govern)
    AI risks and benefits from third-party resources are regularly monitored, and risk controls are applied and documente...
MP: MAP — Context & Risk Identification (domains: build)
    The MAP function establishes the context for AI risk management by identifying the intended purpose, deployment conte...

MP-1: Context establishment (domains: build)
    Legal, regulatory, and societal contexts of AI system deployment are identified and documented, including norms and e...

MP-2: Categorization and risk tiering (domains: build)
    AI systems are categorized based on their intended use, beneficiaries, and potential for harm, enabling risk-proporti...

MP-3: AI capabilities and limitations (domains: build)
    AI system capabilities, intended purposes, context of use, and known limitations are documented. Every AI system has ...

MP-4: Risk identification and analysis (domains: build)
    AI risks and benefits are identified, assessed, prioritized, and documented covering technical, societal, and ethical...

MP-5: Impact assessment (domains: build)
    AI system impacts on individuals, groups, communities, organizations, and society are identified and assessed. Impact...
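MP-2's call for risk-proportionate categorization is often operationalized as a simple scoring rule over documented system attributes. The sketch below is illustrative only: the profile fields, weights, cutoffs, and tier names are assumptions, not part of the AI RMF.

```python
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    """Hypothetical intake fields an organization might document per MP-1/MP-3."""
    affects_individuals: bool  # outputs influence decisions about people
    safety_critical: bool      # failure could cause physical harm
    autonomy_level: int        # 0 = human decides, 1 = human reviews, 2 = fully automated

def risk_tier(profile: AISystemProfile) -> str:
    """Map a system profile to a coarse tier so controls can scale with risk.
    Weights and cutoffs are assumed for illustration."""
    score = (
        2 * profile.safety_critical
        + profile.affects_individuals
        + profile.autonomy_level
    )
    if score >= 4:
        return "high"
    if score >= 2:
        return "medium"
    return "low"
```

In practice the axes and thresholds would come from the organization's documented risk tolerance (set under GOVERN and applied in MG-2), not from a hard-coded rule like this.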
MS: MEASURE — Assessment, Metrics & Testing (domains: monitor)
    The MEASURE function employs quantitative and qualitative methods to assess AI system trustworthiness characteristics...

MS-1: Measurement approach (domains: monitor)
    Appropriate methods and metrics are identified and applied to measure AI system performance, fairness, safety, and ot...

MS-2: Testing and validation (domains: monitor)
    AI systems undergo testing and validation including adversarial testing, bias testing, and red-teaming appropriate to...

MS-3: Competency and expertise (domains: monitor)
    AI system measurement activities are performed by individuals and teams with appropriate domain knowledge, technical ...

MS-4: External inputs and validation (domains: monitor)
    Measurement processes incorporate external perspectives, independent testing, and stakeholder feedback to validate AI...
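One concrete instance of the quantitative fairness measurement MS-1 and MS-2 describe is a group-fairness metric computed over model decisions. This sketch uses demographic parity difference; the 0.1 tolerance is an assumed organizational threshold, not a NIST-prescribed value.

```python
def demographic_parity_difference(outcomes, groups):
    """Largest gap in positive-outcome rate between any two groups.

    outcomes: iterable of 0/1 model decisions
    groups:   iterable of group labels, aligned with outcomes
    """
    totals, positives = {}, {}
    for y, g in zip(outcomes, groups):
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + y
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_difference(outcomes, groups)  # 0.75 - 0.25 = 0.5
flagged = gap > 0.1  # exceeds the assumed tolerance, so escalate for review
```

A single metric like this is a starting point, not a verdict; MS-2's bias testing and red-teaming, and MS-4's independent validation, exist precisely because no one number captures fairness.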
MG: MANAGE — Risk Response & Communication (domains: deploy, monitor)
    The MANAGE function allocates resources, implements controls, deploys mitigations, and establishes ongoing monitoring...

MG-1: Purpose evaluation (domains: deploy, monitor)
    AI systems are evaluated to determine whether their intended purpose, use cases, and deployment context are appropria...

MG-2: Risk and benefit balancing (domains: deploy, monitor)
    AI system risks and benefits are balanced and managed based on expected impact, with risk tolerance aligned to organi...

MG-3: Lifecycle monitoring (domains: deploy, monitor)
    AI systems are monitored throughout their lifecycle to detect performance degradation, drift, emerging risks, and uni...

MG-4: TEVV (Test, Evaluation, Verification, and Validation) (domains: deploy, monitor)
    TEVV processes are implemented and iterated throughout the AI lifecycle to ensure systems function as intended and al...
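The drift detection MG-3 calls for is commonly implemented as a statistic comparing a baseline score distribution to live production scores. A minimal sketch using a two-bin population stability index (PSI) follows; the bin edges and the 0.2 alert threshold are common rules of thumb, assumed here for illustration.

```python
import math

def psi(expected, actual, bins=((0.0, 0.5), (0.5, 1.0001))):
    """Population stability index between a baseline (expected) and a live
    (actual) distribution of model scores in [0, 1). Higher means more drift."""
    def frac(scores, lo, hi):
        # floor the count at a tiny value so an empty bin never yields log(0)
        return max(sum(1 for s in scores if lo <= s < hi), 1e-6) / len(scores)
    total = 0.0
    for lo, hi in bins:
        e, a = frac(expected, lo, hi), frac(actual, lo, hi)
        total += (a - e) * math.log(a / e)
    return total

baseline = [0.2, 0.3, 0.4, 0.6, 0.7, 0.8]  # scores captured at deployment
live     = [0.1, 0.2, 0.6, 0.7, 0.8, 0.9]  # scores observed in production
drifted = psi(baseline, live) > 0.2        # assumed alert threshold
```

Real monitoring would use many more bins, track input features as well as scores, and route alerts into the incident-response and communication channels that MANAGE establishes.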
AI 600-1: Generative AI Profile (domains: secure, build)
    NIST AI 600-1 is a companion document to the AI RMF that addresses risks unique to generative AI systems including fo...

GAI-1: CBRN Information and Capabilities (domains: secure, build)
    Risk that generative AI could lower barriers to development or use of chemical, biological, radiological, or nuclear ...

GAI-2: Confabulation (Hallucination) (domains: secure, build)
    Risk of generative AI producing false, fabricated, or inconsistent outputs presented as factual information with high...

GAI-3: Data Privacy (domains: secure, build)
    Risk of generative AI exposing, inferring, or memorizing personal or sensitive information from training data. Large ...

GAI-4: Environmental Impact (domains: secure, build)
    Risk of significant energy consumption and carbon emissions from training and operating large generative AI models. T...

GAI-5: Homogenization of Content and Perspectives (domains: secure, build)
    Risk that widespread generative AI use reduces diversity of content, ideas, and cultural expressions. When everyone u...

GAI-6: Human-AI Configuration (domains: secure, build)
    Risk of inappropriate reliance on generative AI, automation bias, or degradation of human skills and judgment. When h...

GAI-7: Information Integrity (domains: secure, build)
    Risk of generative AI enabling misinformation, disinformation, deepfakes, and manipulation of information ecosystems ...

GAI-8: Information Security (domains: secure, build)
    Risk of adversarial attacks targeting generative AI systems including prompt injection, jailbreaking, model inversion...

GAI-9: Intellectual Property (domains: secure, build)
    Risk of generative AI infringing copyright, reproducing protected works, or creating unclear IP ownership of AI-gener...

GAI-10: Obscene, Degrading, and Abusive Content (domains: secure, build)
    Risk of generative AI producing violent, sexual, hateful, or otherwise harmful content. Without safety guardrails, ge...

GAI-11: Toxicity, Bias, and Homogenization (domains: secure, build)
    Risk of generative AI amplifying stereotypes, producing biased outputs, or exhibiting toxic language patterns. Langua...

GAI-12: Value Chain and Component Integration (domains: secure, build)
    Risk from dependencies on third-party foundation models, APIs, plugins, or components with unknown properties or vuln...
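As a small illustration of the GAI-8 attack surface, screening inputs for known injection phrasings is one (weak) layer of defense. The patterns below are assumptions for demonstration only; because pattern lists are trivially bypassed, real deployments layer model-based classifiers, privilege separation, and output filtering on top.

```python
import re

# Hypothetical deny-list of injection phrasings; any real list would be
# far larger and still incomplete.
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"disregard .* system prompt",
    r"you are now (?:in )?developer mode",
]

def looks_like_injection(user_input: str) -> bool:
    """Flag inputs matching known prompt-injection phrasings for review."""
    text = user_input.lower()
    return any(re.search(p, text) for p in INJECTION_PATTERNS)
```

A flag here would feed the same MANAGE machinery as any other risk signal: log the attempt, apply a stricter response policy, and surface it in incident reporting.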