NIST AI 600-1 Generative AI Profile
Issued by
National Institute of Standards and Technology (NIST), U.S. Department of Commerce
A companion resource to the NIST AI RMF 1.0 that provides structured guidance for managing the unique risks presented by generative AI systems, including large language models and multimodal foundation models.
Applies To
Overview
NIST AI 600-1, formally titled 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile,' was published in final form on July 26, 2024, in response to Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (October 2023). The profile extends the NIST AI Risk Management Framework (AI RMF 1.0, January 2023) by mapping its GOVERN, MAP, MEASURE, and MANAGE functions to twelve generative-AI-specific risk categories. These categories reflect emergent risks that were not fully addressed by the base AI RMF, including data provenance challenges, hallucination and confabulation, intellectual property concerns, homogenization of outputs, and risks arising from the dual-use nature of generative AI capabilities. NIST AI 600-1 is structured as a profile-a prioritised set of outcomes drawn from the AI RMF Core-rather than a standalone standard or regulation. It is intended for use by AI developers, deployers, and operators and is explicitly technology-stack-neutral. The document was developed through an open public comment process and reflects input from industry, academia, civil society, and international partners. Federal agencies are expected to use the profile as part of their implementation of the requirements under EO 14110 and subsequent OMB guidance. While voluntary for private-sector organisations, the profile is increasingly referenced in federal procurement solicitations, sector-specific regulatory guidance, and enterprise AI governance programmes. Enterprises building, fine-tuning, or deploying generative AI systems-including retrieval-augmented generation (RAG) architectures, copilot tools, and AI-assisted decision systems-should map internal controls to the twelve risk categories and associated suggested actions.
Key Requirements
- •Governance: Establish organisational policies, accountability structures, and workforce competencies specific to generative AI risk.
- •CBRN Information: Implement controls to prevent generative AI from providing serious uplift to those seeking to create chemical, biological, radiological, or nuclear threats.
- •Confabulation (Hallucination): Measure, monitor, and disclose rates of confabulation; implement output validation controls.
- •Data Privacy: Address privacy risks inherent in training data ingestion, memorisation, and inference outputs.
- •Data Provenance: Maintain records of training data lineage and apply provenance verification mechanisms where feasible.
- •Harmful Bias and Homogenization: Evaluate and mitigate bias amplification and the societal risk of large-scale output homogenization.
- •Human-AI Configuration: Ensure appropriate human oversight mechanisms are calibrated to the risk level of generative AI applications.
- •Information Integrity: Address the potential for generative AI to produce or amplify disinformation, synthetic media, and influence operations.
- •Information Security: Manage risks of adversarial prompt injection, data poisoning, model extraction, and evasion attacks.
- •Intellectual Property: Assess and manage risks of training data copyright infringement and IP exposure in model outputs.
- •Obscene or Abusive Content: Implement content moderation controls for sexual, violent, or abusive content generation.
- •Transparency and Documentation: Maintain model cards, system cards, and disclosure documentation appropriate to the deployment context.
- •Value Chain and Component Integration: Assess third-party model and component risks across the generative AI supply chain.
What Your Organization Must Do
- →Map all generative AI systems currently in development or deployment to the twelve NIST AI 600-1 risk categories by assigning a responsible owner (e.g., AI risk lead or product owner) for each category and documenting gaps against the suggested actions in the profile.
- →Update the enterprise AI governance policy to explicitly address generative AI risks, including hallucination thresholds, CBRN safeguards, and content moderation standards, and present the revised policy to the board or risk committee for approval within 90 days of system deployment or profile adoption.
- →Establish a confabulation and output quality monitoring programme with defined measurement cadences (at minimum quarterly), documented acceptable error rate thresholds, and a disclosure protocol for material confabulation incidents affecting end users or regulated decisions.
- →Require all third-party generative AI vendors and foundation model providers to supply model cards, system cards, and training data provenance documentation as a contractual condition of procurement; assign third-party risk management to review these against profile requirements before contract execution.
- →Conduct a training data and output intellectual property review for any model being fine-tuned or deployed, engaging legal counsel to assess copyright exposure, and implement output filtering or attribution controls where infringement risk is identified.
- →Align internal generative AI security controls with the profile's information security requirements by tasking the cybersecurity team to test for prompt injection, data poisoning, and model extraction vulnerabilities on a defined schedule (at minimum annually or after significant model updates), and report findings to the AI risk governance function.
Playbook Guidance
Step-by-step implementation guidance for compliance teams.
Frequently Asked Questions
- Is NIST AI 600-1 mandatory for private companies?
- No. NIST AI 600-1 is voluntary for private-sector organizations. However, it is increasingly referenced in federal procurement solicitations, so companies selling generative AI solutions to U.S. government agencies may face contractual pressure to align with its requirements.
- How does NIST AI 600-1 differ from the NIST AI RMF 1.0?
- The AI RMF 1.0 is a general-purpose AI risk management framework. NIST AI 600-1 is a companion profile that extends it by mapping the same GOVERN, MAP, MEASURE, and MANAGE functions to twelve risk categories specific to generative AI systems, such as hallucination, data provenance, and prompt injection.
- Which twelve risk categories does NIST AI 600-1 address?
- The profile covers CBRN information risks, confabulation, data privacy, data provenance, harmful bias and homogenization, human-AI configuration, information integrity, information security, intellectual property, obscene or abusive content, transparency and documentation, and value chain and component integration.
- Do federal agencies have to comply with NIST AI 600-1?
- Federal agencies are expected to use NIST AI 600-1 as part of their implementation of Executive Order 14110 and subsequent OMB guidance on AI governance. It carries a stronger compliance expectation for agencies than for private-sector organizations.
- What does NIST AI 600-1 require for third-party generative AI vendors?
- The profile calls for organizations to assess third-party model and component risks across the generative AI supply chain. In practice, this means requiring vendors to provide model cards, system cards, and training data provenance documentation, and reviewing those materials against profile requirements before contract execution.
- How should a compliance team handle hallucination risk under NIST AI 600-1?
- The profile requires organizations to measure and monitor confabulation rates, set acceptable error thresholds, implement output validation controls, and establish a disclosure protocol for material confabulation incidents. A minimum quarterly measurement cadence is a practical baseline for most deployment contexts.
