NIST AI 600-1 Generative AI Profile
NIST AI 600-1 · National Institute of Standards and Technology (NIST), U.S. Department of Commerce
A companion resource to the NIST AI RMF 1.0 that provides structured guidance for managing the unique risks presented by generative AI systems, including large language models and multimodal foundation models.
Overview
NIST AI 600-1, formally titled 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile,' was published in final form on July 26, 2024, in response to Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (October 2023). The profile extends the NIST AI Risk Management Framework (AI RMF 1.0, January 2023) by mapping its GOVERN, MAP, MEASURE, and MANAGE functions to twelve generative-AI-specific risk categories. These categories reflect emergent risks that the base AI RMF did not fully address, including data provenance challenges, hallucination and confabulation, intellectual property concerns, homogenization of outputs, and risks arising from the dual-use nature of generative AI capabilities.
NIST AI 600-1 is structured as a profile, a prioritized set of outcomes drawn from the AI RMF Core, rather than a standalone standard or regulation. It is intended for use by AI developers, deployers, and operators, and it is explicitly technology-stack-neutral. The document was developed through an open public comment process and reflects input from industry, academia, civil society, and international partners.
Federal agencies are expected to use the profile as part of their implementation of the requirements under EO 14110 and subsequent OMB guidance. While voluntary for private-sector organizations, the profile is increasingly referenced in federal procurement solicitations, sector-specific regulatory guidance, and enterprise AI governance programs. Enterprises building, fine-tuning, or deploying generative AI systems, including retrieval-augmented generation (RAG) architectures, copilot tools, and AI-assisted decision systems, should map internal controls to the twelve risk categories and their associated suggested actions.
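Because the profile's suggested actions are organized by risk category, a practical first step is a machine-readable control map. Below is a minimal Python sketch of one possible shape for such a map; the `Control` fields, control IDs, and owners are hypothetical placeholders for an organization's own catalogue, and only a few category keys are shown.

```python
"""Minimal sketch: mapping internal controls to AI 600-1 risk categories.
Control fields and IDs are hypothetical placeholders; extend the dict with
one key per remaining risk category in your own control catalogue."""
from dataclasses import dataclass


@dataclass
class Control:
    control_id: str    # hypothetical internal identifier
    description: str
    owner: str         # accountable team or role


# Keyed by risk category; a few shown here, the rest elided for brevity.
RISK_CATEGORY_CONTROLS: dict[str, list[Control]] = {
    "Confabulation": [
        Control("GEN-07", "Grounded-answer validation for RAG responses", "ML Platform"),
    ],
    "Information Security": [
        Control("SEC-12", "Prompt-injection filtering at the gateway", "AppSec"),
    ],
    "Intellectual Property": [],  # unmapped: surfaces as a coverage gap below
}


def coverage_gaps() -> list[str]:
    """Categories with no mapped control are remediation candidates."""
    return [cat for cat, controls in RISK_CATEGORY_CONTROLS.items() if not controls]


if __name__ == "__main__":
    print("Unmapped categories:", coverage_gaps())
```

A gap report like this gives governance teams a simple artifact to review against the profile's suggested actions during each assessment cycle.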
Key Requirements
- Governance: Establish organizational policies, accountability structures, and workforce competencies specific to generative AI risk.
- CBRN Information or Capabilities: Implement controls to prevent generative AI from providing serious uplift to those seeking to create chemical, biological, radiological, or nuclear threats.
- Confabulation (Hallucination): Measure, monitor, and disclose rates of confabulation, and implement output validation controls (a measurement sketch follows this list).
- Data Privacy: Address privacy risks inherent in training data ingestion, memorization, and inference outputs.
- Data Provenance: Maintain records of training data lineage and apply provenance verification mechanisms where feasible (a record-schema sketch follows this list).
- Harmful Bias and Homogenization: Evaluate and mitigate bias amplification and the societal risk of large-scale output homogenization.
- Human-AI Configuration: Ensure human oversight mechanisms are calibrated to the risk level of each generative AI application.
- Information Integrity: Address the potential for generative AI to produce or amplify disinformation, synthetic media, and influence operations.
- Information Security: Manage risks of adversarial prompt injection, data poisoning, model extraction, and evasion attacks.
- Intellectual Property: Assess and manage risks of copyright infringement in training data and IP exposure in model outputs.
- Obscene, Degrading, and/or Abusive Content: Implement content moderation controls for sexual, violent, or abusive content generation.
- Transparency and Documentation: Maintain model cards, system cards, and disclosure documentation appropriate to the deployment context.
- Value Chain and Component Integration: Assess third-party model and component risks across the generative AI supply chain.
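To make the confabulation requirement concrete, here is a minimal measurement sketch in Python. The reference set, the substring check, and the `generate_answer` callable are all illustrative assumptions; production monitoring would typically use grounded-claim verification or human review rather than exact matching.

```python
"""Minimal sketch of a confabulation-rate measurement harness.
The reference set and substring check are deliberately crude proxies."""
from typing import Callable

# Hypothetical reference set: prompts with known-correct answers.
REFERENCE_SET = [
    {"prompt": "What year was the NIST AI RMF 1.0 released?", "answer": "2023"},
    {"prompt": "What does CBRN stand for?",
     "answer": "chemical, biological, radiological, and nuclear"},
]


def confabulation_rate(generate_answer: Callable[[str], str]) -> float:
    """Fraction of reference prompts where the model output fails to
    contain the known answer; swap in a grounded-claim verifier for
    real monitoring."""
    errors = 0
    for item in REFERENCE_SET:
        output = generate_answer(item["prompt"]).lower()
        if item["answer"].lower() not in output:
            errors += 1
    return errors / len(REFERENCE_SET)


if __name__ == "__main__":
    # Trivial stand-in model that always answers "2023".
    rate = confabulation_rate(lambda prompt: "2023")
    print(f"Measured confabulation rate: {rate:.0%}")
```

Tracking this rate over time, and disclosing it in system documentation, is one way to operationalize the "measure, monitor, and disclose" language in the requirement.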
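Similarly, the data provenance action calls for lineage records. The sketch below shows one possible record shape, assuming hypothetical field names and a content hash for tamper-evidence; align the fields with your own data catalogue.

```python
"""Minimal sketch of a training-data provenance record; field names are
hypothetical - align them with your own data catalogue."""
import hashlib
import json
from dataclasses import asdict, dataclass


@dataclass(frozen=True)
class ProvenanceRecord:
    dataset_name: str
    source_url: str        # where the data was obtained
    license: str           # terms under which the data may be used
    collected_on: str      # ISO 8601 date
    content_sha256: str    # content hash for tamper-evidence


def fingerprint(payload: bytes) -> str:
    """Hash a dataset snapshot so downstream consumers can verify it."""
    return hashlib.sha256(payload).hexdigest()


record = ProvenanceRecord(
    dataset_name="support-tickets-2024Q1",    # hypothetical dataset
    source_url="https://example.com/export",  # placeholder URL
    license="internal-use-only",
    collected_on="2024-04-01",
    content_sha256=fingerprint(b"...dataset bytes..."),
)
print(json.dumps(asdict(record), indent=2))
```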
Who It Affects
Developers, deployers, and operators of generative AI systems; U.S. federal agencies implementing EO 14110 and subsequent OMB guidance; and private-sector enterprises that build, fine-tune, or deploy generative AI, for whom the profile is voluntary but increasingly referenced in procurement and regulatory contexts.
Effective Date
2024-07-26