SECURE — Owner: Security Operations / Red Team / AppSec Engineers

AI-Specific Security Controls

Implement AI-specific threat detection, vulnerability management, and adversarial resilience across all AI systems and agent architectures.

Framework Mapping

Controls from each source framework that map to this domain.

Framework — Mapped Controls

  • ISO 42001 — A.8 Data for AI Systems; A.9 Technology & Tools for AI Systems
  • NIST AI RMF — NIST AI 600-1 Generative AI Profile
  • OWASP — LLM04 Model Denial of Service; LLM06 Sensitive Information Disclosure; LLM07 Insecure Plugin Design; LLM08 Excessive Agency; LLM10 Model Theft; ASI01 Excessive Permissions; ASI03 Resource Exhaustion; ASI06 Unmonitored Actions; ASI07 Uncontrolled Cascading; ASI10 Misplaced Trust

Audit Checklist

Quick-reference checklist items grouped by control.

  • Threat model exists for each production AI system
  • Architecture diagrams included with trust boundaries marked
  • Threats enumerated using a structured framework (STRIDE, OWASP)
  • High-risk threats have documented mitigations and implementation status
  • Threat models reviewed within last 12 months or after major architecture changes
  • Input validation implemented to block known injection patterns
  • Context isolation demonstrated with delimiters or structured message roles
  • Agent privileges limited to minimum required for functionality
  • Output filtering configured to detect/block sensitive data leakage
  • Adversarial testing conducted with documented results and remediation
  • Data classification policy defines handling rules for each sensitivity level
  • Input redaction implemented and tested for common sensitive patterns
  • Output filtering configured to detect leaked secrets in LLM responses
  • DLP integration active for high-risk systems with alerting enabled
  • Data flow audits conducted quarterly with findings remediated
  • Permission boundaries documented for each agent with tool access
  • Tool allowlists explicitly configured and deny-by-default enforced
  • File system restrictions implemented to prevent access to sensitive paths
  • Network egress controls configured with domain/IP allowlists
  • Permission enforcement validated via adversarial testing or security review
  • All AI API keys stored in secrets manager, none hardcoded
  • Key rotation policy defined (quarterly) with evidence of recent rotations
  • Usage monitoring configured with logging and alerting
  • Revocation process documented with 1-hour RTO
  • Access controls enforce least privilege (IAM roles, scoped permissions)
  • Red team playbook exists with AI-specific attack scenarios
  • Red team engagement conducted within last 12 months
  • Report documents findings with severity ratings and evidence
  • High/critical findings remediated and validated before production
  • Medium/low findings tracked in backlog with target fix dates
  • Input validation rules defined and enforced for all user-controllable inputs
  • Output escaping implemented for all rendering contexts (HTML, JSON, etc.)
  • Secret detection configured to scan outputs before delivery
  • Test suite validates sanitization (injection blocked, XSS prevented)
  • Monitoring alerts configured for sanitization failures with alert history
  • Agency policy defines high-risk actions requiring human approval
  • Human approval gates implemented and validated via testing
  • Iteration limits and timeouts configured and enforced
  • Runaway detection rules active with alerting configured
  • Logs show limits enforced (agent paused/aborted) with no runaway incidents
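The input-validation and context-isolation items above can be sketched roughly as follows. This is an illustrative outline, not a complete prompt-injection defense: the pattern list and the message schema (`system`/`user` roles) are assumptions about the deployment, and pattern matching should be layered with model-side guardrails.

```python
import re

# Assumed, illustrative blocklist -- real deployments need a maintained ruleset.
INJECTION_PATTERNS = [
    re.compile(r"ignore\s+(all\s+)?previous\s+instructions", re.IGNORECASE),
    re.compile(r"disregard\s+your\s+system\s+prompt", re.IGNORECASE),
]

def build_messages(system_prompt: str, user_input: str) -> list[dict]:
    """Validate untrusted input, then keep it in its own 'user' role
    rather than concatenating it into the trusted system prompt."""
    for pattern in INJECTION_PATTERNS:
        if pattern.search(user_input):
            raise ValueError(f"blocked input matching {pattern.pattern!r}")
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_input},
    ]
```

Keeping untrusted text in a separate structured role (instead of string-concatenating it into the system prompt) is the "context isolation" the checklist asks auditors to demonstrate.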
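The output-filtering and secret-detection items can be approximated with a scan-then-redact pass over model responses before delivery. The patterns below are a minimal illustrative sample (AWS access key IDs, PEM private-key headers, and an assumed `sk-` style API-key shape); a production control should use a maintained secret-scanning ruleset.

```python
import re

# Illustrative patterns only -- not an exhaustive secret-detection ruleset.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_block": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    "generic_api_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
}

def scan_output(text: str) -> list[str]:
    """Return the names of secret patterns found in an LLM response."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]

def redact_output(text: str) -> str:
    """Replace detected secrets before the response leaves the system."""
    for name, pat in SECRET_PATTERNS.items():
        text = pat.sub(f"[REDACTED:{name}]", text)
    return text
```

Routing every response through `scan_output` (with findings forwarded to alerting/DLP) and `redact_output` gives auditable evidence for the "secret detection configured to scan outputs before delivery" item.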
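The permission-boundary items (explicit tool allowlists, deny-by-default, file system restrictions) can be sketched as below. The agent names, tool names, and workspace root are hypothetical; the point is the shape of the checks: unknown agents and unlisted tools are refused, and paths are resolved before the containment test so `..` and symlink tricks do not escape the workspace.

```python
from pathlib import Path

# Hypothetical per-agent allowlist and workspace root, for illustration.
AGENT_TOOL_ALLOWLIST = {
    "support-agent": {"search_docs", "read_ticket"},
}
ALLOWED_ROOT = Path("/srv/agent-workspace")

def authorize_tool(agent: str, tool: str) -> None:
    # Deny-by-default: an unknown agent or an unlisted tool is refused.
    if tool not in AGENT_TOOL_ALLOWLIST.get(agent, set()):
        raise PermissionError(f"{agent} may not call {tool}")

def authorize_path(requested: str) -> Path:
    # Resolve '..' and symlinks first, then require the result to stay
    # inside the workspace root (Path.is_relative_to: Python 3.9+).
    resolved = Path(requested).resolve()
    if not resolved.is_relative_to(ALLOWED_ROOT):
        raise PermissionError(f"access outside workspace: {resolved}")
    return resolved
```

Enforcing these checks in the tool-dispatch layer, rather than in each tool, keeps the deny-by-default posture auditable in one place.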
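For the output-escaping item, the key requirement is escaping per rendering context rather than applying one generic filter. A minimal sketch using the standard library, assuming HTML and JSON are the two contexts in play:

```python
import html
import json

def escape_for_html(text: str) -> str:
    # HTML element/attribute context: escape &, <, >, and quote characters.
    return html.escape(text, quote=True)

def escape_for_json(value: object) -> str:
    # JSON context: serialize properly instead of string-concatenating,
    # so embedded quotes and control characters are escaped.
    return json.dumps(value)
```

Each additional rendering context (shell, SQL, URL, etc.) needs its own context-appropriate encoder; reusing the HTML escaper elsewhere is itself a common sanitization failure.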
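The iteration-limit and timeout items for agent loops can be sketched as a wrapper that enforces both budgets and aborts loudly, giving the "agent paused/aborted" log evidence the checklist asks for. The budgets and the `step()` contract (return non-None when done) are illustrative assumptions.

```python
import time

class RunawayAgentError(RuntimeError):
    """Raised when an agent exceeds its iteration or wall-clock budget."""

def run_with_limits(step, max_iterations: int = 10, timeout_s: float = 30.0):
    """Call step() until it returns a non-None result, aborting if the
    iteration count or wall-clock budget is exhausted."""
    deadline = time.monotonic() + timeout_s
    for iteration in range(1, max_iterations + 1):
        if time.monotonic() > deadline:
            raise RunawayAgentError(f"timed out after {iteration - 1} steps")
        result = step()
        if result is not None:
            return result
    raise RunawayAgentError(f"exceeded {max_iterations} iterations")
```

Emitting `RunawayAgentError` into structured logs (and wiring it to alerting) covers both the "runaway detection rules active" and "logs show limits enforced" items.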