Tier 2 — Industry Standard

NIST AI Risk Management Framework (AI RMF 1.0) — Oracle Source

Publisher

National Institute of Standards and Technology (NIST), U.S. Department of Commerce

Version

v1

Last verified

February 15, 2026

Frameworks

NIST AI RMF 1.0, NIST AI 600-1

Industries

Applies to all industries

NIST AI RMF - Overview

The NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0), released on January 26, 2023, is a voluntary, flexible, and rights-preserving framework designed to help organisations that design, develop, deploy, or use AI systems manage AI risks throughout the AI lifecycle [cite:493][cite:496]. It is the de facto U.S. federal standard for trustworthy AI governance — voluntary in name but increasingly mandatory in practice through federal procurement requirements, state legislation (Colorado AI Act), executive orders (EO 14110), and its growing use as a defensibility benchmark in litigation [cite:491][cite:510]. The framework is built on four core functions — Govern, Map, Measure, and Manage — that form a continuous cycle of risk assessment, treatment, and improvement [cite:487][cite:493]. These functions are grounded in seven characteristics of trustworthy AI: valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed [cite:513][cite:509].

In July 2024, NIST published AI 600-1, a Generative AI Profile that extends the AI RMF to address 12 risks unique to or exacerbated by generative AI, including confabulation, CBRN information access, data privacy, and information integrity [cite:488][cite:489]. The framework is accompanied by a Playbook providing suggested actions for each subcategory and official crosswalks mapping the AI RMF to the EU AI Act, ISO/IEC 42001, and other international standards [cite:499][cite:494].


NIST AI RMF - What It Is

Framework Architecture

The AI RMF 1.0 consists of two parts [cite:493]:

  • Part 1 — Foundational information on how organisations can frame AI risks and the characteristics of trustworthy AI systems
  • Part 2 — The Core framework — four functions (Govern, Map, Measure, Manage) with categories and subcategories that provide actionable outcomes for AI risk management
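
To make the Core's shape concrete, the sketch below models functions, categories, and subcategories as plain records in Python. This is an illustrative data model, not an official NIST schema; the category descriptions are abbreviated from the sections that follow.

```python
from dataclasses import dataclass, field

@dataclass
class Category:
    """One AI RMF category, e.g. GOVERN 1 (policies, processes, procedures)."""
    cat_id: str                       # e.g. "GOVERN 1"
    outcome: str                      # the actionable outcome it describes
    subcategories: list[str] = field(default_factory=list)

@dataclass
class CoreFunction:
    """One of the four Part 2 Core functions."""
    name: str                         # GOVERN | MAP | MEASURE | MANAGE
    categories: list[Category] = field(default_factory=list)

# Populated with the categories this document walks through below
# (descriptions abbreviated; subcategory text omitted for brevity).
AI_RMF_CORE = [
    CoreFunction("GOVERN", [
        Category("GOVERN 1", "Policies, processes, and procedures"),
        Category("GOVERN 2", "Accountability structures"),
        # ... GOVERN 3-6 follow the same shape
    ]),
    CoreFunction("MAP", [Category("MAP 1", "Context understanding")]),
    CoreFunction("MEASURE", [Category("MEASURE 1", "Methods and metrics")]),
    CoreFunction("MANAGE", [Category("MANAGE 1", "Risk prioritisation and treatment")]),
]
```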

Seven Characteristics of Trustworthy AI

The AI RMF defines seven socio-technical characteristics that trustworthy AI systems should exhibit [cite:513][cite:509][cite:502]:

Characteristic | Description
Valid and reliable | System performs as intended, with consistent and reproducible outputs across expected operating conditions
Safe | System operates without causing unreasonable risk of harm to people, property, or the environment
Secure and resilient | System withstands adverse events, attacks, and unexpected conditions while maintaining functionality
Accountable and transparent | Processes, decisions, and outcomes are documented, auditable, and communicated to stakeholders
Explainable and interpretable | System outputs and decision processes can be understood by humans at appropriate levels of detail
Privacy-enhanced | System protects individuals' privacy through design choices, data governance, and appropriate limitations
Fair, with harmful bias managed | System does not generate unfair outcomes or perpetuate harmful biases; disparate impacts are identified and mitigated

Accountable and Transparent is a cross-cutting characteristic — it relates to and enables all other characteristics [cite:493][cite:517].
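
Because MEASURE 2 later evaluates systems against all seven characteristics, it helps to treat them as a fixed vocabulary. A minimal Python sketch; the enum identifiers are our own shorthand, not official NIST codes:

```python
from enum import Enum

class Trustworthiness(Enum):
    """The seven AI RMF trustworthiness characteristics."""
    VALID_AND_RELIABLE = "valid and reliable"
    SAFE = "safe"
    SECURE_AND_RESILIENT = "secure and resilient"
    # Cross-cutting: relates to and enables all other characteristics.
    ACCOUNTABLE_AND_TRANSPARENT = "accountable and transparent"
    EXPLAINABLE_AND_INTERPRETABLE = "explainable and interpretable"
    PRIVACY_ENHANCED = "privacy-enhanced"
    FAIR_WITH_HARMFUL_BIAS_MANAGED = "fair, with harmful bias managed"

# A MEASURE 2 evaluation can then score each characteristic explicitly,
# making coverage gaps visible (scores here are placeholders).
evaluation = {c: None for c in Trustworthiness}
evaluation[Trustworthiness.SAFE] = 0.92
```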

Relationship to EU AI Act and ISO 42001

The NIST AI RMF is the U.S. counterpart to the EU AI Act, but with critical structural differences [cite:491][cite:519]:

Dimension | NIST AI RMF | EU AI Act | ISO 42001
Nature | Voluntary framework [cite:494] | Binding regulation [cite:512] | Certifiable international standard [cite:519]
Approach | Risk-based, flexible, use-case agnostic [cite:494] | Risk-tiered (unacceptable/high/limited/minimal risk) [cite:512] | Management system with PDCA cycle [cite:519]
Scope | All AI systems, all organisations [cite:494] | AI systems placed on the EU market or affecting EU residents [cite:512] | Any organisation managing AI systems [cite:519]
Enforcement | No direct enforcement; referenced in procurement, state laws, litigation [cite:510] | Administrative fines up to €35M or 7% of global turnover [cite:512] | Third-party certification audit [cite:519]
Crosswalks | Official NIST crosswalks to EU AI Act and ISO 42001 [cite:494] | References international standards as conformity paths [cite:512] | Harmonised Structure enables integration with NIST AI RMF [cite:519]

Used together, NIST AI RMF provides the practical risk management depth, ISO 42001 provides the certifiable governance system, and the EU AI Act provides the regulatory compliance requirements [cite:519].


NIST AI RMF - Who It Applies To

The AI RMF is designed to be voluntary, rights-preserving, non-sector-specific, and use-case agnostic, providing flexibility to organisations of all sizes and in all sectors [cite:494][cite:493].

Mandatory or De Facto Mandatory Application

  • Federal government agencies — Executive Order 14110 on Safe, Secure, and Trustworthy AI (October 2023) references NIST standards and directs agencies to implement AI risk management aligned with NIST frameworks [cite:491][cite:500]
  • Federal contractors and vendors — OMB guidance following EO 14110 includes AI minimum risk standards and their incorporation into federal contracts. Federal agencies increasingly require AI RMF alignment from vendors [cite:500][cite:491]
  • Colorado AI Act entities — Colorado SB 24-205 (effective February 1, 2026) — the first comprehensive U.S. state AI law — requires developers and deployers of high-risk AI systems to implement risk management programmes. It explicitly provides an affirmative defense for organisations that comply with the NIST AI RMF or ISO 42001 [cite:512][cite:515][cite:510]
  • Defence and intelligence — DoD AI adoption strategy and DFARS requirements increasingly reference NIST AI standards for AI systems in defence and intelligence applications [cite:500]

Voluntary Adoption

  • Any organisation designing, developing, deploying, or using AI systems in any sector [cite:494]
  • Financial services — Increasingly adopted as baseline for AI model risk management alongside SR 11-7 and OCC guidance [cite:491]
  • Healthcare — FDA digital health guidance and HIPAA intersect with AI RMF requirements for safety, privacy, and bias management [cite:491]
  • Critical infrastructure — Energy, transportation, telecommunications sectors adopting AI RMF as operational risk framework [cite:491]

AI Actors (Roles)

The AI RMF applies to all "AI actors" — anyone who plays a role in the AI system lifecycle [cite:493][cite:494]:

AI Actor | AI RMF Responsibility
Designers | Context mapping, risk identification, trustworthiness requirements specification
Developers | Implementation of trustworthiness characteristics, testing, measurement, documentation
Deployers | Deployment risk assessment, monitoring, incident response, user communication
Operators | Day-to-day system oversight, performance monitoring, issue escalation
Users | Informed use, feedback reporting, understanding system limitations
Affected individuals | Stakeholders impacted by AI system decisions; their interests must be represented in governance
Governing bodies | Oversight of AI risk governance, resource allocation, accountability structures

NIST AI RMF - What It Requires - GOVERN Function

The GOVERN function is the cross-cutting foundation: it establishes the organisational structures, policies, processes, and culture needed to manage AI risks. It applies to all stages of the AI lifecycle and is infused throughout the other three functions [cite:493][cite:487].

Subcategories

GOVERN 1 — Policies, processes, and procedures [cite:487]:

  • Policies for mapping, measuring, and managing AI risks are established, transparent, and implemented effectively across the organisation
  • Legal and regulatory requirements are identified and integrated
  • Risk tolerance is defined and communicated
  • Processes are in place for ongoing review and updating of governance structures

GOVERN 2 — Accountability structures [cite:487]:

  • Roles, responsibilities, and lines of authority for AI risk management are clearly defined
  • Teams and individuals are trained and empowered to handle AI risks
  • Accountability extends from the governing body through operational management to individual practitioners

GOVERN 3 — Diversity, equity, inclusion, and accessibility (DEIA) [cite:487]:

  • DEIA considerations are prioritised throughout the AI lifecycle
  • Diverse perspectives inform design, development, deployment, and evaluation
  • Accessibility requirements are addressed in AI system design

GOVERN 4 — Organisational culture [cite:487]:

  • Commitment to creating a culture of awareness and open communication about AI risks
  • Risk management culture is fostered at all levels
  • Feedback mechanisms identify and address risks as they arise

GOVERN 5 — Stakeholder engagement [cite:487]:

  • Robust processes engage relevant AI stakeholders effectively
  • Internal teams, external collaborators, end users, and affected communities provide input
  • Engagement is ongoing, not one-time

GOVERN 6 — Third-party and supply chain risk [cite:487]:

  • Policies manage risks related to third-party software, data, models, and other supply chain elements
  • Due diligence and monitoring extend to all AI supply chain participants
  • Risks from dependencies on external AI components are identified and managed
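
GOVERN 1's outcomes (transparent policies, defined risk tolerance, ongoing review) lend themselves to machine-readable artefacts. A hypothetical policy-register entry in Python follows; every field name and threshold is an illustrative assumption, not a NIST requirement:

```python
# Hypothetical GOVERN artefact: a risk-tolerance and accountability record.
GOVERNANCE_POLICY = {
    "policy_id": "AI-GOV-001",
    "scope": "all AI systems in production",
    "risk_tolerance": {                     # GOVERN 1: defined and communicated
        "max_fairness_disparity": 0.05,     # illustrative threshold
        "open_high_severity_risks": 0,
    },
    "review_cycle_days": 90,                # GOVERN 1: ongoing review
    "accountable_owner": "Chief AI Risk Officer",           # GOVERN 2
    "training_required_roles": ["developer", "deployer"],   # GOVERN 2
    "third_party_due_diligence": True,      # GOVERN 6
}
```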

E/A/D Axis Mapping — GOVERN

The GOVERN function maps directly to the governance layer of the E/A/D (Ethical/Accountable/Defensible) axes:

  • Ethical axis — GOVERN 3 (DEIA), GOVERN 4 (culture), GOVERN 5 (stakeholder engagement) establish the ethical foundation
  • Accountable axis — GOVERN 1 (policies), GOVERN 2 (accountability structures), GOVERN 6 (supply chain) establish traceable accountability
  • Defensible axis — GOVERN 1 (documented policies and procedures) creates the evidence base for legal and regulatory defensibility; Colorado AI Act safe harbor requires demonstrable NIST AI RMF compliance [cite:515][cite:510]

NIST AI RMF - What It Requires - MAP Function

The MAP function identifies and contextualises AI risks. It is the first operational step in the AI lifecycle — understanding the system, its context, its stakeholders, and its potential impacts before development and deployment [cite:487][cite:490].

Subcategories

MAP 1 — Context understanding [cite:487]:

  • Intended purpose, deployment context, and operational conditions are fully documented
  • Legal and regulatory requirements applicable to the specific use case are identified
  • User needs and societal expectations are understood

MAP 2 — AI system categorisation [cite:487]:

  • System is properly categorised by type, capability, risk level, and deployment context
  • Tasks, data sources, and model outputs are broken down to clarify function and risk areas

MAP 3 — Capabilities and benchmarks [cite:487]:

  • AI capabilities, goals, and costs are compared to relevant benchmarks
  • Performance expectations are established and documented
  • Limitations and known failure modes are identified

MAP 4 — Risk mapping across the system [cite:487]:

  • Risks and benefits are mapped across all parts of the AI system, including third-party elements
  • Data risks, model risks, integration risks, and deployment risks are systematically identified
  • Risk mapping covers the full AI lifecycle from data acquisition through decommissioning

MAP 5 — Impact identification [cite:487]:

  • Impacts on individuals, communities, and society are clearly identified
  • Both intended and unintended impacts are assessed
  • Differential impacts across populations and contexts are evaluated
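
The MAP outcomes above can be captured as rows in a risk register that MEASURE and MANAGE then consume. The schema below is a hypothetical sketch, not a NIST-prescribed format:

```python
from dataclasses import dataclass

@dataclass
class MappedRisk:
    """One MAP-stage risk register row (hypothetical schema)."""
    risk_id: str
    description: str
    lifecycle_stage: str        # MAP 4: data acquisition ... decommissioning
    source: str                 # "data" | "model" | "integration" | "third party"
    affected_groups: list[str]  # MAP 5: individuals, communities, society
    severity: int               # 1 (low) .. 5 (critical)
    likelihood: int             # 1 (rare) .. 5 (near-certain)

example = MappedRisk(
    risk_id="R-017",
    description="Training data under-represents rural applicants",
    lifecycle_stage="data acquisition",
    source="data",
    affected_groups=["rural loan applicants"],
    severity=4,
    likelihood=3,
)
```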

E/A/D Axis Mapping — MAP

  • Ethical axis — MAP 5 (impact identification) ensures impacts on individuals and communities are understood before deployment
  • Accountable axis — MAP 1–4 (context, categorisation, benchmarks, risk mapping) create the documented risk profile that enables accountability
  • Defensible axis — Documented risk mapping provides the evidentiary foundation for demonstrating due diligence in litigation or regulatory inquiry [cite:510][cite:515]

NIST AI RMF - What It Requires - MEASURE Function

The MEASURE function quantifies and evaluates AI risks using appropriate methods, metrics, and testing protocols. It bridges the gap between risk identification (MAP) and risk treatment (MANAGE) [cite:487][cite:509].

Subcategories

MEASURE 1 — Methods and metrics [cite:487][cite:509]:

  • Appropriate quantitative and qualitative methods and metrics are identified and applied
  • Metrics track system functionality, trustworthiness, social impact, and human-AI interaction
  • Measurement methodologies follow scientific, legal, and ethical norms

MEASURE 2 — Trustworthiness evaluation [cite:487][cite:509]:

  • AI systems are evaluated for all seven trustworthiness characteristics
  • Rigorous software testing, performance assessments, benchmarks, and formalised reporting ensure reliability
  • Independent reviews and external evaluations mitigate biases and conflicts of interest

MEASURE 3 — Risk tracking over time [cite:487][cite:509]:

  • Systems are in place to track AI risks over time — not just at deployment but throughout operation
  • Performance degradation, data drift, and context changes are monitored
  • Risk indicators are updated as the system and its environment evolve

MEASURE 4 — Feedback and effectiveness [cite:487][cite:509]:

  • Feedback on how well measurement processes work is gathered and assessed
  • Measurement approaches are refined based on operational experience
  • Scalable and adaptable methods are developed as AI risks evolve
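
MEASURE 3's outcome, tracking risk over time rather than only at deployment, reduces in the simplest case to a recurring comparison of a live metric window against its baseline. A minimal sketch; the metric, window, and threshold are illustrative assumptions:

```python
from statistics import mean

def degraded(baseline: list[float], window: list[float],
             max_drop: float = 0.02) -> bool:
    """Flag when mean performance falls more than max_drop below baseline."""
    return mean(baseline) - mean(window) > max_drop

# Weekly accuracy samples for a deployed model (placeholder numbers).
baseline_weeks = [0.94, 0.93, 0.94]
current_weeks = [0.90, 0.89, 0.91]   # drift as input data shifts

if degraded(baseline_weeks, current_weeks):
    print("MEASURE 3 alert: degradation detected; escalate to MANAGE")
```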

E/A/D Axis Mapping — MEASURE

  • Ethical axis — MEASURE 2 (trustworthiness evaluation including fairness and bias) directly measures ethical performance
  • Accountable axis — MEASURE 1 (documented metrics), MEASURE 3 (tracked over time) create the quantitative accountability record
  • Defensible axis — MEASURE 4 (effectiveness assessment) provides the evidence that risk management is not merely performative but demonstrably effective [cite:509]

NIST AI RMF - What It Requires - MANAGE Function

The MANAGE function allocates resources to address mapped and measured risks. It implements risk treatment, incident response, and ongoing monitoring [cite:487][cite:509].

Subcategories

MANAGE 1 — Risk prioritisation and treatment [cite:487]:

  • AI risks identified in MAP and MEASURE are prioritised based on significance
  • Risk treatment decisions (mitigate, accept, transfer, avoid) are documented with rationale
  • Resources are allocated proportionate to risk severity

MANAGE 2 — Benefit maximisation and risk minimisation [cite:487]:

  • Strategies to maximise AI's benefits and minimise risks are planned, documented, and informed by expert input
  • Trade-offs between trustworthiness characteristics are managed transparently
  • Risk management strategies align with broader organisational goals

MANAGE 3 — Third-party risk management [cite:487]:

  • Risks associated with third-party AI tools, models, data, and infrastructure are actively managed
  • Ongoing monitoring extends to all AI supply chain dependencies
  • Contract provisions and SLAs address AI risk requirements

MANAGE 4 — Incident response and communication [cite:487]:

  • Risk response, recovery plans, and communication strategies are established and regularly updated
  • Incident response protocols define escalation paths, notification requirements, and remediation processes
  • Post-incident reviews feed lessons learned back into the GOVERN and MAP functions
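
MANAGE 1's prioritisation and documented treatment decisions can be sketched as a ranking over the MAP-stage register plus a recorded decision. The treatment options mirror the mitigate/accept/transfer/avoid set named above; the record fields are hypothetical:

```python
from enum import Enum

class Treatment(Enum):
    MITIGATE = "mitigate"
    ACCEPT = "accept"
    TRANSFER = "transfer"
    AVOID = "avoid"

def prioritise(register: list[dict]) -> list[dict]:
    """MANAGE 1: rank risks by severity x likelihood, highest first."""
    return sorted(register, key=lambda r: r["severity"] * r["likelihood"],
                  reverse=True)

register = [
    {"risk_id": "R-017", "severity": 4, "likelihood": 3},
    {"risk_id": "R-009", "severity": 2, "likelihood": 5},
]
top = prioritise(register)[0]

# Every decision carries its rationale, so the register doubles as the
# response evidence the Defensible axis relies on.
decision = {"risk_id": top["risk_id"], "treatment": Treatment.MITIGATE,
            "rationale": "Re-sample training data and re-run fairness tests"}
```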

E/A/D Axis Mapping — MANAGE

  • Ethical axis — MANAGE 2 (benefit/risk trade-offs managed transparently with expert input)
  • Accountable axis — MANAGE 1 (documented prioritisation and treatment), MANAGE 3 (third-party accountability)
  • Defensible axis — MANAGE 4 (incident response, communication, documented remediation) provides the response evidence that regulators and courts examine post-incident [cite:510]

NIST AI RMF - What It Requires - Generative AI Profile (AI 600-1)

NIST AI 600-1, released July 26, 2024, is a cross-sectoral companion resource to the AI RMF specifically addressing risks unique to or exacerbated by Generative AI (GAI), pursuant to Executive Order 14110 [cite:488][cite:489][cite:496].

12 Generative AI-Specific Risks

Risk | Description
CBRN information or capabilities | GAI could facilitate access to or synthesis of information related to chemical, biological, radiological, or nuclear weapons [cite:511][cite:518]
Confabulation | Production of false, fabricated, or misleading content ("hallucinations") [cite:511][cite:509]
Dangerous, violent, or hateful content | Generation of content that facilitates harmful, violent, or hateful actions [cite:511][cite:518]
Data privacy | Risks of leaking sensitive or personally identifiable information from training data or inputs [cite:511][cite:518]
Environmental impacts | Resource-intensive model training and inference with significant energy and water consumption [cite:514][cite:511]
Human-AI configuration | Risks from inappropriate levels of human oversight, over-reliance, or automation bias [cite:514]
Information integrity | Generation and dissemination of false or misleading information at scale, eroding public trust [cite:518][cite:511]
Information security | Vulnerability to cyberattacks (data poisoning, prompt injection, model extraction) [cite:518][cite:511]
Intellectual property | Use of protected materials in training and inputs leading to copyright or IP infringement [cite:518][cite:511]
Obscene, degrading, or abusive content | Generation of content that is obscene, degrading, or harmful to individuals or groups [cite:509]
Value chain and component integration | Risks from complex GAI supply chains involving multiple models, data sources, and integration points [cite:509]
Harmful bias and homogenisation | Amplification of harmful biases and over-reliance on similar models or data, leading to systemic risks and reduced diversity of outputs [cite:509]

AI 600-1 Structure

The GenAI Profile maps suggested actions to the same Govern/Map/Measure/Manage structure as the AI RMF Core [cite:488][cite:489]:

  • Each of the 12 risks is addressed with specific suggested actions aligned to AI RMF subcategories
  • Actions are additive to (not replacements for) the base AI RMF 1.0 and Playbook; organisations are expected to use AI 600-1 alongside the base framework, not instead of it [cite:488][cite:489]
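
One way to operationalise that additive relationship is to key each GAI risk to suggested actions by AI RMF subcategory. The mapping below is a hypothetical sketch: the action text is invented for illustration and is far sparser than the profile's roughly 200 actions:

```python
# Hypothetical shape for applying AI 600-1 alongside the base framework.
GAI_PROFILE_ACTIONS = {
    "confabulation": {
        "MEASURE 2": ["benchmark factual consistency on domain test sets"],
        "MANAGE 4": ["define a user-facing correction and notification path"],
    },
    "information security": {
        "MAP 4": ["enumerate prompt-injection and data-poisoning surfaces"],
        "MEASURE 2": ["red-team model-extraction attempts"],
    },
}

def actions_for(gai_risk: str) -> dict[str, list[str]]:
    """Suggested actions for one GAI risk, keyed by AI RMF subcategory."""
    return GAI_PROFILE_ACTIONS.get(gai_risk, {})

print(actions_for("confabulation"))
```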

NIST AI RMF - Governance Implications

The AI RMF is fundamentally a governance framework — it exists to establish the structures, processes, and accountability mechanisms through which organisations manage AI risk responsibly [cite:493][cite:487].

Ontic BOM Mapping

  • model — AI/ML models are the primary subject of the AI RMF. Each model requires MAP (context, categorisation, impact identification), MEASURE (trustworthiness evaluation, metrics, monitoring), and MANAGE (risk treatment, incident response) activities. Model governance — validation, bias testing, change management, performance monitoring, and decommissioning — maps directly to AI RMF functions. The AI 600-1 GenAI Profile adds specific requirements for generative models [cite:487][cite:488]
  • oracle — Training datasets, evaluation benchmarks, regulatory requirements databases, risk intelligence feeds, and authoritative reference data are the informational foundation of AI RMF implementation. MAP 1 (context), MAP 3 (benchmarks), and GOVERN 6 (supply chain) all require reliable, current, traceable oracle data. Data provenance, quality, and bias are assessed under MAP 4 and MEASURE 2 [cite:487][cite:493]
  • ontology — The AI RMF's taxonomy of trustworthiness characteristics, risk categories, AI actor roles, and lifecycle stages constitutes the AI governance ontology. Consistent classification enables cross-framework mapping (NIST → EU AI Act → ISO 42001), automated compliance checking, and audit trail integrity. The official NIST crosswalks [cite:494] operationalise this ontological mapping
  • system_prompt — For AI systems where prompt configurations influence behaviour (LLMs, generative AI, agent-based systems), prompt design is an AI RMF governance artefact. MAP 1 (context) must document prompt design intent; MEASURE 2 must evaluate prompt-influenced outputs for trustworthiness; MANAGE 1 must treat prompt-related risks. AI 600-1 specifically addresses confabulation and information integrity risks that are heavily prompt-influenced [cite:488][cite:509]
  • gate — The AI RMF creates lifecycle gates: risk assessment before development (MAP), trustworthiness evaluation before deployment (MEASURE), incident response during operation (MANAGE), and governance review at all stages (GOVERN); a minimal gate check is sketched after this list. Colorado AI Act compliance and federal procurement alignment add regulatory gates requiring documented AI RMF conformance [cite:512][cite:515]
  • security — NIST AI RMF characteristic "Secure and resilient" is a core trustworthiness requirement. AI 600-1 identifies information security (data poisoning, prompt injection, model extraction) as a GAI-specific risk. AI security maps to NIST CSF 2.0 and ISO 27001, both of which have official crosswalks to the AI RMF [cite:494][cite:513]
  • signed_client — AI RMF implementation documentation, risk assessment records, trustworthiness evaluation results, and incident response records must be authenticated and traceable. These records constitute the defensibility evidence in litigation and regulatory inquiry. Colorado AI Act affirmative defense requires demonstrable compliance records [cite:515][cite:510]
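
The lifecycle gates described in the gate entry above reduce to an evidence check at each stage transition. A minimal deployment-gate sketch, with hypothetical artefact names:

```python
# Required evidence per function before a system may ship (hypothetical names).
REQUIRED_EVIDENCE = {
    "GOVERN": {"policy_signed_off", "owner_assigned"},              # GOVERN 1-2
    "MAP": {"context_documented", "impacts_identified"},            # MAP 1, MAP 5
    "MEASURE": {"trustworthiness_evaluated", "metrics_baselined"},  # MEASURE 1-2
}

def deployment_gate(evidence: set[str]) -> bool:
    """Block deployment until every required artefact is present."""
    required = set().union(*REQUIRED_EVIDENCE.values())
    missing = required - evidence
    if missing:
        print("Gate blocked; missing:", sorted(missing))
    return not missing

deployment_gate({"context_documented", "impacts_identified"})  # blocked
```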

E/A/D Axis Integration

The NIST AI RMF maps directly to the E/A/D (Ethical/Accountable/Defensible) governance axes:

E/A/D Axis | AI RMF Functions | Trustworthiness Characteristics | Evidence
Ethical (E) | GOVERN 3–5, MAP 5, MEASURE 2 | Fair with harmful bias managed; privacy-enhanced; safe | DEIA policies, stakeholder engagement records, impact assessments, fairness evaluations
Accountable (A) | GOVERN 1–2, MAP 1–4, MEASURE 1+3, MANAGE 1+3 | Accountable and transparent; valid and reliable | Policies, accountability structures, risk registers, metrics, third-party due diligence
Defensible (D) | All functions (documented implementation) | Explainable and interpretable; accountable and transparent | Complete audit trail: governance policies, risk maps, measurement records, treatment decisions, incident responses, improvement actions

NIST AI RMF - Enforcement Penalties

The AI RMF is a voluntary framework — it does not impose penalties directly. However, it is increasingly referenced as the enforcement benchmark through multiple legal and regulatory channels [cite:494][cite:510].

Enforcement Through Reference

Channel | Enforcement Mechanism
Federal procurement | OMB guidance following EO 14110 incorporates AI minimum risk standards into federal contracts; non-compliance may disqualify vendors [cite:500][cite:491]
Colorado AI Act (SB 24-205) | Effective February 1, 2026; affirmative defense available for NIST AI RMF or ISO 42001 compliance; enforced by the Colorado AG; non-compliance exposes organisations to liability for algorithmic discrimination [cite:512][cite:515][cite:516]
Other state legislation | Texas, Connecticut, and other states reference the NIST AI RMF as a safe harbor or recognised framework; the trend is accelerating [cite:510]
Litigation defensibility | NIST AI RMF compliance is increasingly cited as evidence of reasonable care in AI-related litigation; the absence of an AI risk management framework may be used to establish negligence [cite:510][cite:491]
DOJ ECCP | The 2024 ECCP update directs prosecutors to evaluate how companies assess and manage AI-related compliance risks; the NIST AI RMF provides a recognised methodology [cite:341][cite:345]
Sector-specific regulators | FDA, SEC, OCC, and other agencies reference NIST standards in AI guidance; non-alignment may attract enhanced scrutiny [cite:491]

Penalty Exposure for Non-Alignment

While no "NIST AI RMF fine" exists, organisations that fail to implement recognised AI risk management face [cite:510][cite:512]:

  • Loss of federal contract eligibility
  • Liability for algorithmic discrimination under state laws (Colorado, others pending)
  • Increased exposure in tort litigation (negligence, product liability)
  • Regulatory enforcement actions from sector-specific regulators
  • Reputational damage from AI incidents without documented governance
  • Criminal exposure under DOJ ECCP for AI-related compliance failures [cite:341]

NIST AI RMF - Intersection With Other Frameworks

NIST has published official crosswalks mapping the AI RMF to major international AI governance frameworks [cite:494].

Official NIST Crosswalks

Framework | Crosswalk Status | Key Alignment
EU AI Act | Official NIST crosswalk published [cite:494] | Risk categorisation, conformity assessment, trustworthiness requirements
ISO/IEC 42001:2023 | Official NIST crosswalk published [cite:494] | Management system requirements map to GOVERN; PDCA cycle maps to Map/Measure/Manage
ISO/IEC 23894 | Official NIST crosswalk published [cite:494] | AI risk management guidance aligned with ISO 31000
OECD AI Principles | Official NIST crosswalk published [cite:494] | Five principles (inclusive growth, human-centred values, transparency, robustness, accountability) map to AI RMF characteristics
Singapore AI Governance Framework | Official NIST crosswalk published [cite:494] | Risk-based approach with human oversight requirements
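
The "govern once, comply many" pattern these crosswalks enable amounts to a lookup from one implemented AI RMF outcome to the external provisions it evidences. The sketch below uses simplified example mappings, not the official crosswalk entries:

```python
# Simplified, illustrative crosswalk entries (not NIST's official mappings).
CROSSWALK = {
    "GOVERN 1": {
        "ISO/IEC 42001": "Clause 5 (Leadership), Clause 6 (Planning)",
        "EU AI Act": "Art. 9 (risk management system)",
    },
    "MEASURE 2": {
        "ISO/IEC 42001": "Clause 9 (Performance evaluation)",
        "EU AI Act": "Art. 15 (accuracy, robustness, cybersecurity)",
    },
}

def comply_many(subcategory: str) -> dict[str, str]:
    """External provisions one implemented AI RMF outcome helps evidence."""
    return CROSSWALK.get(subcategory, {})

print(comply_many("GOVERN 1"))
```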

Integration With Broader Governance Frameworks

Framework | Integration Point
NIST CSF 2.0 | Cybersecurity risk management for AI systems; the "Secure and resilient" characteristic maps to CSF Govern/Identify/Protect/Detect/Respond/Recover [cite:474]
ISO 27001 | Information security controls protecting AI systems, data, and outputs [cite:474]
COSO ERM | Enterprise risk management umbrella encompassing AI risk as a risk category
DOJ ECCP | AI risk governance evaluations reference NIST-aligned frameworks; GOVERN maps to ECCP compliance programme design [cite:341]
CMMC / NIST 800-171 | Defence contractor AI deployments require both cybersecurity (CMMC) and AI risk management (AI RMF) alignment [cite:500]
SOC 2 | Trust services criteria (security, availability, processing integrity, confidentiality, privacy) overlap with AI RMF trustworthiness characteristics [cite:519]
HIPAA | AI systems processing PHI must address both HIPAA security/privacy requirements and AI RMF trustworthiness characteristics [cite:491]

Colorado AI Act Dual-Framework Approach

The Colorado AI Act explicitly recognises both NIST AI RMF and ISO 42001 as bases for the affirmative defense [cite:512][cite:519]:

  • ISO 42001 provides the certifiable governance system: policies, roles, risk and impact assessments, documentation, monitoring, and continual improvement
  • NIST AI RMF provides the practical risk management depth: risk mapping, bias mitigation, transparency, monitoring, and incident response
  • Used together, they create a comprehensive, defensible AI governance programme that satisfies both the certifiable management system requirement and the practical risk management requirement [cite:519]

NIST AI RMF - Recent Updates

AI RMF 1.0 (January 2023)

The initial release established the four-function architecture (Govern/Map/Measure/Manage), seven trustworthiness characteristics, and the AI actor framework. Published pursuant to the National Artificial Intelligence Initiative Act of 2020 [cite:493][cite:494].

AI RMF Playbook (January 2023 + ongoing updates)

The Playbook provides suggested actions for achieving the outcomes in each AI RMF subcategory [cite:499]:

  • Aligned to each subcategory within all four functions
  • Voluntary — organisations select applicable suggestions based on their use case
  • Continuously updated with new suggested actions and community contributions
  • Not a checklist — designed for flexible, risk-proportionate implementation

NIST AI 600-1: Generative AI Profile (July 2024)

Released pursuant to EO 14110, the GenAI Profile [cite:488][cite:496]:

  • Identifies 12 risks unique to or exacerbated by generative AI
  • Provides ~200 suggested actions mapped to AI RMF subcategories
  • Addresses risks from confabulation through CBRN information access
  • Applies to all organisations designing, developing, deploying, or using GAI systems

Executive Order 14110 (October 2023)

EO 14110 on Safe, Secure, and Trustworthy Artificial Intelligence [cite:500][cite:488]:

  • Directed NIST to develop the AI 600-1 GenAI Profile
  • Required federal agencies to implement AI risk management aligned with NIST standards
  • Directed OMB to develop AI minimum risk standards for federal procurement
  • Positioned NIST AI RMF as the foundational federal AI governance standard

Note: While the current administration's policy direction on AI may evolve, the NIST AI RMF itself remains a published technical standard under the Department of Commerce. Federal procurement requirements and state legislation referencing NIST AI RMF continue to be operative [cite:510][cite:491].

Colorado AI Act — Effective February 1, 2026

The first comprehensive U.S. state AI law [cite:512][cite:516]:

  • Requires developers and deployers of high-risk AI systems to prevent algorithmic discrimination
  • Risk management policy must specify processes and personnel for identifying and mitigating algorithmic discrimination
  • Affirmative defense for organisations that (1) discover and cure violations through internal testing or red-teaming, and (2) otherwise comply with the NIST AI RMF or another nationally/internationally recognised risk management framework [cite:515]
  • Enforced by Colorado Attorney General; rulemaking authority for specific compliance requirements [cite:516]
  • Not limited to Colorado-based organisations — applies to any "person doing business in the state" [cite:516]

Official Crosswalks (December 2024)

NIST published multiple crosswalks mapping the AI RMF to international frameworks, enabling organisations to "govern once, comply many" — implementing a single AI risk management programme that demonstrates alignment with multiple regulatory requirements simultaneously [cite:494][cite:474].

Evolving Landscape (2025–2026)

  • Additional state AI laws referencing the NIST AI RMF have been enacted or proposed in Texas, Connecticut, and other states [cite:510]
  • Federal procurement AI requirements continue to mature through OMB and agency-specific guidance [cite:500][cite:491]
  • NIST AI RMF maturity models are emerging to help organisations assess their implementation progress against the framework [cite:480]
  • AI 600-1 updates and additional domain-specific profiles (healthcare, financial services, critical infrastructure) are anticipated [cite:496]
  • The framework is increasingly adopted internationally, with alignment to ISO 42001 and EU AI Act requirements creating a converging global AI governance standard