OWASP LLM Top 10

The OWASP LLM Top 10 (2025 edition) is the foundational framework for AI application security. Know every item β€” the OSAI exam tests against this.

# Severity Name Description
01 Critical Prompt Injection Direct and indirect manipulation of LLM behavior via crafted inputs
02 High Insecure Output Handling Downstream component trusts LLM output β€” XSS, SQLi, RCE
03 High Training Data Poisoning Corrupting training data to embed backdoors or bias outputs
04 High Model Denial of Service Resource exhaustion via complex prompts, repetitive requests
05 High Supply Chain Vulnerabilities Malicious models, poisoned datasets, compromised integrations
06 High Sensitive Information Disclosure PII in training data regurgitated, system prompt exposure
07 High Insecure Plugin Design Plugins/tools with excessive permissions, missing input validation
08 Medium Excessive Agency AI given too much autonomy and access β€” blast radius of injection increases
09 Medium Overreliance Users/systems trust AI output without verification β€” enables social engineering
10 Medium Model Theft Extraction of proprietary models via query APIs

MITRE ATLAS Mapping

ATLAS Tactic Technique OSAI Relevance
Reconnaissance AML.T0002 - Search for Victim's AI Artifacts Find exposed models, APIs, training data
ML Attack Staging AML.T0010 - Create Proxy ML Model Model extraction for offline attacks
Initial Access AML.T0020 - Supply Chain Compromise Poisoned models, typosquat packages
Execution AML.T0051 - LLM Prompt Injection Direct and indirect injection
Persistence AML.T0019 - Backdoor ML Model Trigger-based backdoors in fine-tuned models
Exfiltration AML.T0040 - Exfiltrate via Traditional Channels Data out via agent tools
Impact AML.T0016 - Evade ML Model Jailbreaking safety classifiers