AI Governance Institute

aigovernance.com — Global AI Regulation & Framework Directory


OWASP Top 10 for Large Language Model Applications

OWASP LLM Top 10 · OWASP Foundation

The OWASP Top 10 for Large Language Model Applications identifies the ten most critical security risks in LLM-powered systems, including prompt injection, insecure output handling, training data poisoning, model denial of service, and supply chain vulnerabilities. It is the most widely referenced security framework for AI applications and is used by development and security teams globally to prioritize controls.

Overview

The OWASP Top 10 for LLM Applications is a community-driven framework that catalogues the most critical security vulnerabilities affecting systems built on large language models. First published in 2023, the original list covered prompt injection (where malicious inputs override intended model behavior), insecure output handling (where model outputs are trusted without sanitization), training data poisoning, model denial of service, supply chain vulnerabilities in third-party models and components, sensitive information disclosure, insecure plugin design, excessive agency, overreliance, and model theft. The 2025 update revised several entries, for example combining data and model poisoning, broadening model denial of service into unbounded consumption, and adding system prompt leakage, vector and embedding weaknesses, and misinformation. Each entry describes the risk, example attack scenarios, and mitigation approaches. The framework is designed to be used alongside existing application security practices and is particularly relevant for organizations deploying agentic AI systems.
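Two of the risks above can be made concrete with a short sketch: insecure output handling arises when model text is rendered without sanitization, and prompt injection arises when untrusted input steers the model. The snippet below is an illustrative sketch, not part of the OWASP framework itself; the function names and the regex heuristic are invented for this example, and a real deployment would use layered defenses rather than a single filter.

```python
import html
import re

def sanitize_llm_output(raw: str) -> str:
    """Treat model output as untrusted: HTML-escape it so embedded
    markup cannot execute if the text is rendered in a web page."""
    return html.escape(raw)

def looks_like_injection(user_input: str) -> bool:
    """Naive heuristic flag for common prompt-injection phrasing.
    Illustrative only; pattern matching alone is easy to bypass."""
    patterns = [r"ignore (all )?previous instructions",
                r"reveal .*system prompt"]
    return any(re.search(p, user_input, re.IGNORECASE) for p in patterns)

# A model response containing markup is neutralized before rendering.
unsafe = "<script>steal()</script>Here is your answer."
print(sanitize_llm_output(unsafe))
# → &lt;script&gt;steal()&lt;/script&gt;Here is your answer.

print(looks_like_injection("Please ignore previous instructions"))
# → True
```

The key design point, mirrored in the framework's guidance, is that model output is attacker-influenced data: it should pass through the same encoding and validation layers as any other untrusted input.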

Key Requirements

  • Implement input validation and output sanitization for all LLM interactions
  • Establish access controls limiting what actions LLM-powered systems can take
  • Monitor for prompt injection and anomalous model behavior in production
  • Apply least-privilege principles to AI agent permissions
  • Conduct supply chain due diligence on third-party models and plugins
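The access-control and least-privilege requirements above can be sketched as a deny-by-default allowlist on an agent's tool calls. This is a minimal illustration under assumed names (the tool names, `ToolCall` type, and policy are invented for this example), not a prescribed implementation:

```python
from dataclasses import dataclass, field

# Hypothetical allowlist: only read-style tools this agent may invoke.
ALLOWED_TOOLS = {"search_docs", "summarize"}

@dataclass
class ToolCall:
    name: str
    args: dict = field(default_factory=dict)

def authorize(call: ToolCall) -> bool:
    """Deny by default: a tool runs only if explicitly allowlisted.
    This is the least-privilege principle applied to agent actions."""
    return call.name in ALLOWED_TOOLS

def execute(call: ToolCall) -> str:
    if not authorize(call):
        raise PermissionError(f"tool {call.name!r} not permitted for this agent")
    # ... dispatch to the real tool implementation here ...
    return f"ran {call.name}"

print(execute(ToolCall("search_docs")))      # permitted
try:
    execute(ToolCall("delete_database"))     # blocked: not on the allowlist
except PermissionError as e:
    print(e)
```

Scoping each agent to the narrowest tool set it needs limits the blast radius when prompt injection does succeed, which is why excessive agency appears as its own entry in the framework.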

Who It Affects

AI developers · Security teams · Enterprise deployers

Effective Date

2025-01-01
