OWASP Top 10 for Large Language Model Applications
Issued by
OWASP Foundation
The OWASP Top 10 for Large Language Model Applications identifies the ten most critical security risks in LLM-powered systems, including prompt injection, insecure output handling, training data poisoning, model denial of service, and supply chain vulnerabilities. It is among the most widely referenced security frameworks for AI applications and is used by development and security teams globally to prioritize controls.
Overview
The OWASP Top 10 for LLM Applications is a community-driven framework that catalogues the most critical security vulnerabilities affecting systems built on large language models. First published in 2023 and revised for 2025, the original edition covered prompt injection (where malicious inputs override intended model behavior), insecure output handling (where model outputs are trusted without sanitization), training data poisoning, model denial of service, supply chain vulnerabilities in third-party models and components, sensitive information disclosure, insecure plugin design, excessive agency, overreliance, and model theft; the 2025 revision reorganizes and renames several of these categories. Each entry describes the risk, example attack scenarios, and mitigation approaches. The framework is designed to be used alongside existing application security practices and is particularly relevant for organizations deploying agentic AI systems.
Key Requirements
- Implement input validation and output sanitization for all LLM interactions
- Establish access controls limiting what actions LLM-powered systems can take
- Monitor for prompt injection and anomalous model behavior in production
- Apply least-privilege principles to AI agent permissions
- Conduct supply chain due diligence on third-party models and plugins
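The first requirement can be sketched as a thin guard wrapped around every model call. This is an illustrative Python sketch only, not a complete defense: `call_llm`, the blocked-pattern list, and the length limit are all assumptions, and pattern matching alone will miss most injection attempts, so treat it as one layer among several.

```python
import html
import re

# Hypothetical injection phrasings to reject at the input boundary.
# Real deployments would use broader detection than a static pattern list.
BLOCKED_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.IGNORECASE),
    re.compile(r"reveal (the |your )?system prompt", re.IGNORECASE),
]

def validate_input(user_text: str, max_len: int = 4000) -> str:
    """Reject oversized inputs and flag known injection phrasings."""
    if len(user_text) > max_len:
        raise ValueError("input exceeds length limit")
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(user_text):
            raise ValueError("input matches a known injection pattern")
    return user_text

def sanitize_output(model_text: str) -> str:
    """Never trust model output: escape it before rendering downstream."""
    return html.escape(model_text)

def guarded_completion(user_text: str, call_llm) -> str:
    """call_llm is a placeholder for your actual model client."""
    return sanitize_output(call_llm(validate_input(user_text)))
```

The key design point is that both boundaries are enforced in one place, so no integration can reach the model without passing through them.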
What Your Organization Must Do
- Assign the security team lead to map all LLM-powered applications against each of the OWASP Top 10 LLM categories and document findings in a risk register by the end of the current quarter.
- Require development teams to implement input validation and output sanitization controls for every LLM integration before deployment, using code review checkpoints to verify compliance.
- Enforce least-privilege permissions for all AI agents and plugins, ensuring no LLM-powered component has access to systems or data beyond its defined operational scope, reviewed at each release cycle.
- Establish automated monitoring in production environments to detect prompt injection attempts and anomalous model outputs, with alerts routed to the security operations center for triage within 24 hours.
- Conduct supply chain due diligence on all third-party models, APIs, and plugins prior to adoption, including a documented review of provenance, update history, and known vulnerabilities before go-live approval.
- Schedule a recurring review of LLM security controls at least annually or upon any major model or architecture change to incorporate updates from future OWASP Top 10 revisions.
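The production-monitoring action above can be approximated with a simple response screen that emits structured alerts for SOC triage. This is a hedged sketch: the pattern list, the `llm-monitor` logger name, and the JSON alert shape are illustrative assumptions, and real monitoring would combine such heuristics with anomaly scoring.

```python
import json
import logging
import re

logger = logging.getLogger("llm-monitor")

# Hypothetical signals that an injection may have succeeded, e.g. the model
# echoing its own system prompt back to the user.
SUSPICIOUS_OUTPUT = [
    re.compile(r"my system prompt is", re.IGNORECASE),
    re.compile(r"as an? (unrestricted|jailbroken)", re.IGNORECASE),
]

def screen_response(request_id: str, output: str) -> bool:
    """Return True if the response should be quarantined and alerted on."""
    for pattern in SUSPICIOUS_OUTPUT:
        if pattern.search(output):
            # Structured alert so the SOC can triage within the SLA window.
            logger.warning(json.dumps({
                "event": "possible_prompt_injection",
                "request_id": request_id,
                "pattern": pattern.pattern,
            }))
            return True
    return False
```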
Playbook Guidance
Step-by-step implementation guidance for compliance teams.
Frequently Asked Questions
- Is the OWASP LLM Top 10 a mandatory compliance requirement or a voluntary framework?
- It is voluntary. The OWASP LLM Top 10 is a community-driven guidance framework with no regulatory enforcement mechanism. However, it is frequently referenced in procurement requirements, security audits, and regulatory guidance, so alignment is increasingly expected in enterprise and regulated environments.
- What is the difference between the 2023 and 2025 versions of the OWASP LLM Top 10?
- The 2025 update reflects the expanded use of agentic AI systems and multi-model architectures. It refines existing risk categories and adds greater emphasis on supply chain vulnerabilities and excessive agency, where LLM-powered agents take actions beyond their intended scope.
- How does prompt injection in the OWASP LLM Top 10 differ from traditional SQL injection?
- Prompt injection manipulates natural language inputs to override an LLM's intended behavior or system instructions, rather than exploiting a structured query parser. It is harder to detect with conventional input validation because the attack surface is unstructured text rather than a defined syntax.
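The contrast can be made concrete. SQL injection has a structural fix, parameter binding, which keeps untrusted data out of the query parser entirely; a prompt offers no equivalent enforced boundary between data and instructions. The snippet below is illustrative: the delimiter convention in the prompt is exactly that, a convention the model may or may not honor.

```python
import sqlite3

# Parameterized queries neutralize SQL injection structurally.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES (?)", ("alice",))

malicious = "alice' OR '1'='1"
# Parameter binding treats the payload as inert data, never as SQL syntax:
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (malicious,)
).fetchall()
# rows is empty: no user literally has that name, and the OR clause
# never reaches the parser.

# A prompt has no analogous data/instruction separation. Delimiters are a
# plea to the model, not a rule enforced by a parser:
prompt = f'Summarize only the user text between triple quotes.\n"""{malicious}"""'
```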
- Which OWASP LLM Top 10 risk is most relevant for companies deploying AI agents with tool access?
- Excessive agency is the most directly relevant risk. It covers scenarios where an LLM-powered agent has permissions or capabilities beyond what its task requires, increasing the blast radius of a compromised or manipulated model. Least-privilege principles and scoped tool access are the primary mitigations.
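The scoped-tool-access mitigation mentioned above can be sketched as a deny-by-default allowlist per agent. The `AgentPolicy` class and tool names here are hypothetical, introduced only to illustrate the least-privilege pattern: any tool not explicitly granted is refused before it executes.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AgentPolicy:
    """Per-agent allowlist: anything not explicitly granted is denied."""
    agent_id: str
    allowed_tools: frozenset = field(default_factory=frozenset)

    def authorize(self, tool_name: str) -> bool:
        return tool_name in self.allowed_tools

# A support agent needs only two tools; it gets only two tools.
support_bot = AgentPolicy("support-bot", frozenset({"search_kb", "create_ticket"}))

def dispatch(policy: AgentPolicy, tool_name: str, *args):
    """Authorization check sits in the dispatcher, not in each tool."""
    if not policy.authorize(tool_name):
        # Deny-by-default limits the blast radius of a manipulated model.
        raise PermissionError(f"{policy.agent_id} may not call {tool_name}")
    ...  # route to the real tool implementation here
```

Placing the check in a single dispatcher means a compromised prompt cannot widen an agent's reach; only a policy change reviewed at release time can.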
- Does the OWASP LLM Top 10 apply to companies using third-party LLM APIs rather than self-hosted models?
- Yes. Several risks in the framework, including supply chain vulnerabilities, sensitive information disclosure, and insecure output handling, apply directly to organizations consuming third-party LLM APIs. The framework does not require model ownership to be relevant.
- How should compliance teams use the OWASP LLM Top 10 alongside existing frameworks like NIST AI RMF or ISO 42001?
- The OWASP LLM Top 10 is a technical security reference that complements higher-level risk and governance frameworks. It is best used to operationalize controls within the risk treatment and testing phases of NIST AI RMF or ISO 42001 implementation, particularly for application-layer security.
