OWASP LLM Top 10 - AI Security Risks

AI Security Risk Framework and Guidelines O Cloud Infrastructure

Basic Information

Project Description

The OWASP GenAI Security Project is a global open-source initiative dedicated to identifying, mitigating, and documenting security and safety risks associated with generative AI technologies, including LLMs, agentic AI systems, and AI-driven applications. The 2025 edition has undergone significant updates based on real-world incidents, emerging attack techniques, and feedback from the rapid growth of agentic AI. It introduces two new categories, substantially revises multiple entries, and reorders them based on community feedback.

OWASP LLM Top 10 (2025 Edition)

LLM01: Prompt Injection

Maintained its top position for two consecutive editions. LLMs process instructions and data in the same channel without clear separation, allowing attackers to craft inputs that the model interprets as new instructions.

LLM02: Sensitive Information Disclosure

LLMs may expose sensitive information from training data in their outputs.

LLM03: Supply Chain Vulnerabilities

Security risks arising from reliance on third-party models, data, and plugins.

LLM04: Data and Model Poisoning

Risks caused by malicious modifications to training data or models.

LLM05: Inadequate Output Handling

Risks resulting from the direct use of LLM outputs without proper validation.

LLM06: Excessive Agency

Three root causes: excessive functionality (agents accessing tools beyond their task scope), excessive permissions (tools running with more permissions than necessary), and excessive autonomy (high-impact operations without human review).

LLM07: System Prompt Leakage (New)

Vulnerabilities arising from the exposure of system prompts, leading to potential exploitation.

LLM08: Vector and Embedding Weaknesses (New)

Provides guidance for securing RAG and embedding-based methods.

LLM09: Misinformation

LLMs generating inaccurate or misleading content.

LLM10: Unbounded Consumption

Expanded from "Denial of Service" to broader resource management and unexpected cost issues.

Related OWASP Resources

  • Gen AI Red Teaming Guide (Released in January 2025)
  • Top 10 for Agentic Applications (Released in December 2025, targeting 2026)
  • OWASP AI Security and Privacy Guide

Target Audience

  • AI Application Developers
  • AI Security Engineers
  • Compliance and Risk Management Teams
  • CISOs and Security Leaders
  • AI Product Managers

Industry Impact

  • Nearly all AI security tools (e.g., Promptfoo, Garak) map to the OWASP LLM Top 10
  • Full compliance with the EU AI Act required by August 2026
  • NIST, MITRE ATLAS, and OWASP form the three authoritative frameworks for AI security

Comparison with Competitors/Frameworks

DimensionOWASP LLM Top 10NIST AI RMFMITRE ATLAS
TypeVulnerability ListRisk Management FrameworkAttack Knowledge Base
CoverageLLM-SpecificGeneral AIAI Attack Techniques
OperabilityHighMedium (Framework-Level)High (Technical-Level)
Update FrequencyAnnualRegularContinuous
Industry AdoptionWidestGovernment/EnterpriseSecurity Research

Relationship with the OpenClaw Ecosystem

The OWASP LLM Top 10 serves as the cornerstone framework for AI security strategies within the OpenClaw ecosystem. When designing and deploying AI agents, OpenClaw should systematically evaluate and mitigate each risk category outlined in the OWASP Top 10. Particularly relevant to OpenClaw's AI agent systems are Prompt Injection (LLM01), Excessive Agency (LLM06), and System Prompt Leakage (LLM07). The Top 10 for Agentic Applications, to be released in December 2025, provides specialized guidance for the security of OpenClaw's agent architecture. It is recommended to use the OWASP LLM Top 10 as a standard checklist for OpenClaw security audits.