AI/LLM Threat Modeling
What This Requires
Conduct threat modeling for each AI system using STRIDE, the OWASP Top 10 for LLM Applications, or an equivalent framework. Identify attack vectors, assess likelihood and impact, and define mitigations. Update the model annually or after architecture changes.
Why It Matters
AI systems introduce novel threats (prompt injection, model inversion, data poisoning) that traditional threat models do not address. Proactive threat modeling makes security deliberate rather than accidental.
How To Implement
Choose Framework
Use STRIDE (Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege) or the OWASP Top 10 for LLM Applications. Tailor the framework to AI-specific threats such as prompt injection, training data poisoning, and model theft.
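A tailored framework can be captured as a simple lookup from STRIDE category to AI-specific threat instances. The entries below are illustrative examples, not an authoritative or exhaustive taxonomy:

```python
# Illustrative STRIDE-to-LLM-threat mapping (example entries, not exhaustive).
STRIDE_LLM = {
    "Spoofing": ["impersonated user via stolen API key", "spoofed tool identity"],
    "Tampering": ["training data poisoning", "prompt injection altering instructions"],
    "Repudiation": ["missing audit logs for model decisions"],
    "Information Disclosure": ["model inversion", "training data leakage in outputs"],
    "Denial of Service": ["token-flooding requests exhausting context or compute"],
    "Elevation of Privilege": ["jailbreak granting access to restricted tools"],
}

def threats_for(category: str) -> list[str]:
    """Return the tailored threat list for a STRIDE category (empty if unknown)."""
    return STRIDE_LLM.get(category, [])
```

Keeping the mapping in a machine-readable form makes it easy to reuse across systems and to diff between annual reviews.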
Map Data Flows
Create an architecture diagram showing the flow: user input → API gateway → LLM → tools/databases → output. Identify trust boundaries (internet, corporate network, backend services).
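The same data-flow diagram can be modeled as a small graph so boundary-crossing flows are found mechanically. Component and zone names below are hypothetical stand-ins for the diagram described above:

```python
# Hypothetical components mapped to trust zones.
ZONES = {
    "user_input": "internet",
    "api_gateway": "dmz",
    "llm": "corporate_network",
    "tools": "backend",
    "database": "backend",
}

# Directed data flows between components.
FLOWS = [
    ("user_input", "api_gateway"),
    ("api_gateway", "llm"),
    ("llm", "tools"),
    ("tools", "database"),
]

def boundary_crossings(flows, zones):
    """Flows whose endpoints sit in different trust zones deserve extra scrutiny."""
    return [(a, b) for a, b in flows if zones[a] != zones[b]]
```

Every edge this function returns is a place where authentication, validation, or sanitization controls belong.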
Enumerate Threats
For each component, enumerate applicable threats. Examples: API (injection, DoS), LLM (jailbreak, data leakage), tools (SSRF, privilege escalation), output handling (XSS, user over-reliance).
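Per-component enumeration can seed a structured threat register. This sketch uses the example threats listed above; the records and helper are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Threat:
    component: str
    description: str
    stride: str  # STRIDE category

# Register seeded from the per-component examples (illustrative, not exhaustive).
REGISTER = [
    Threat("api", "prompt injection via user input", "Tampering"),
    Threat("api", "request flooding", "Denial of Service"),
    Threat("llm", "jailbreak bypassing system prompt", "Elevation of Privilege"),
    Threat("llm", "training data leakage in responses", "Information Disclosure"),
    Threat("tools", "SSRF through URL-fetching tool", "Information Disclosure"),
    Threat("output", "XSS from unsanitized model output", "Tampering"),
]

def by_component(register, component):
    """All registered threats for one architecture component."""
    return [t for t in register if t.component == component]
```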
Assess & Mitigate
Score each threat as likelihood × impact. For high-risk threats, define mitigations (input validation, rate limiting, output sanitization) and track them in a risk register.
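The scoring step above reduces to simple arithmetic. This sketch uses assumed 1-5 scales and an illustrative high-risk threshold of 12; the threat names and scores are examples, not assessments:

```python
# Example threats with assumed likelihood/impact on 1-5 scales.
threats = [
    {"name": "prompt injection", "likelihood": 4, "impact": 4},
    {"name": "model theft", "likelihood": 2, "impact": 5},
    {"name": "output XSS", "likelihood": 3, "impact": 3},
]

# Risk = likelihood x impact, per the scoring rule.
for t in threats:
    t["risk"] = t["likelihood"] * t["impact"]

HIGH_RISK_THRESHOLD = 12  # illustrative policy choice

# High-risk threats, highest risk first; these get mitigation plans.
high_risk = sorted(
    (t for t in threats if t["risk"] >= HIGH_RISK_THRESHOLD),
    key=lambda t: t["risk"],
    reverse=True,
)
```

Sorting by score gives a defensible order for assigning mitigation work in the risk register.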
Evidence & Audit
- Threat model documents for all production AI systems
- Architecture diagrams with trust boundaries
- Threat enumeration using STRIDE or OWASP LLM Top 10
- Risk scores and mitigation plans
- Review/update records (annual or post-change)
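The annual-review requirement is easy to verify automatically from the review records. System names and dates below are hypothetical:

```python
from datetime import date

# Hypothetical review records: system -> date of last threat-model review.
REVIEWS = {
    "chat-assistant": date(2024, 1, 15),
    "doc-summarizer": date(2022, 6, 1),
}

def overdue(reviews, today, max_age_days=365):
    """Systems whose threat model has not been reviewed within the annual window."""
    return [name for name, last in reviews.items()
            if (today - last).days > max_age_days]
```

A check like this can run in CI or a compliance dashboard so lapsed reviews surface before an audit does.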