Agentic AI and Autonomy
As AI systems move from answering questions to taking independent actions, traditional governance frameworks are falling behind. This topic covers what organizations need to govern AI agents: autonomy boundaries, non-human identities, permissions, audit trails, and incident response for systems that can act without direct human instruction.
Key board-level questions
1. Are we treating AI agents as digital employees with identities, permissions, and logs?
2. What decisions are AI systems allowed to make autonomously versus with human oversight?
3. How do we prevent runaway or irreversible actions from agentic systems?
4. Do we have governance for non-human identities and their access rights?
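The pattern behind these questions, a non-human identity with an explicit autonomy boundary, a human-approval set, and an append-only audit trail, can be sketched in a few lines. This is a minimal illustration, not a production policy engine: the `AgentIdentity` class, the action names, and the escalation rule are assumptions made for the example.

```python
import datetime
import uuid

class AgentIdentity:
    """Illustrative non-human identity: scoped permissions plus an audit trail."""

    def __init__(self, name, allowed_actions, requires_approval):
        self.agent_id = str(uuid.uuid4())                # unique, revocable identity
        self.name = name
        self.allowed_actions = set(allowed_actions)      # autonomy boundary
        self.requires_approval = set(requires_approval)  # human-in-the-loop actions
        self.audit_log = []                              # append-only decision record

    def authorize(self, action, human_approved=False):
        """Decide whether the agent may act, and log the decision either way."""
        if action in self.requires_approval and not human_approved:
            decision = "escalated"   # high-risk or irreversible: needs a human
        elif action in self.allowed_actions or action in self.requires_approval:
            decision = "allowed"
        else:
            decision = "denied"      # outside the permission set entirely
        self.audit_log.append({
            "agent_id": self.agent_id,
            "action": action,
            "decision": decision,
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        })
        return decision == "allowed"

# Usage: an agent that may read reports autonomously but must escalate payments.
agent = AgentIdentity(
    "report-bot",
    allowed_actions={"read_report"},
    requires_approval={"issue_payment"},
)
print(agent.authorize("read_report"))                         # acts autonomously
print(agent.authorize("issue_payment"))                       # escalated, not executed
print(agent.authorize("issue_payment", human_approved=True))  # approved, executed
```

Note that the audit log records denials and escalations as well as allowed actions; a trail that only captures successes cannot answer the incident-response questions above.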
Regulatory frameworks
NIST AI 600-1 Generative AI Profile
A companion resource to the NIST AI RMF 1.0 that provides structured guidance for managing the unique risks presented by generative AI systems, including large language models and multimodal foundation models.
OWASP Top 10 for Large Language Model Applications
This OWASP list identifies the ten most critical security risks in LLM-powered systems, including prompt injection, insecure output handling, training data poisoning, model denial of service, and supply chain vulnerabilities. It is among the most widely referenced security frameworks for AI applications, used by development and security teams globally to prioritize controls.
EU AI Act: AI Literacy and Prohibited AI Systems Provisions (Applicable 2 February 2025)
The EU AI Act's first major compliance deadline took effect on 2 February 2025, requiring all organizations that develop or deploy AI within the EU to establish AI literacy measures for their workforce. As of that date, the Act's prohibitions on AI systems deemed to pose unacceptable risks also became enforceable. Organizations must have ceased any prohibited AI practices and be able to demonstrate adequate staff competency with AI systems.
NIST Artificial Intelligence Risk Management Framework Playbook
A voluntary, use-case-agnostic operational companion to the NIST AI Risk Management Framework (AI RMF 1.0) that provides structured, actionable guidance, suggested actions, and example outputs for implementing the four core AI RMF functions (GOVERN, MAP, MEASURE, and MANAGE) across the AI system lifecycle.
EU AI Office Framework
The institutional and regulatory framework establishing the European AI Office as the central EU body responsible for supervising general-purpose AI models, enforcing the EU AI Act at the supranational level, and coordinating AI governance across EU member states.
