AI Risk Appetite Statement
Purpose
Documents the organization's AI risk tolerance across key risk categories, with concrete thresholds and the approval authorities required at each level.
Related Controls
1. Executive Summary
Provide a high-level statement of the organization's overall risk appetite for AI adoption.
[ORGANIZATION NAME] embraces a moderate risk appetite for AI adoption, balancing innovation with responsible governance. We accept calculated risks where AI creates measurable business value, provided adequate controls are in place and risks to individuals, data, and operations remain within defined thresholds.
This statement was approved by [EXECUTIVE TITLE] on [DATE] and will be reviewed annually or following any AI-related incident classified as Severity 1 or 2.
Statement Owner: [ROLE TITLE], [DEPARTMENT]
2. Risk Categories & Tolerance Levels
Define 4-6 risk domains with concrete low/medium/high thresholds and examples.
| Risk Category | Low (Accept) | Medium (Mitigate) | High (Escalate/Avoid) |
|---|---|---|---|
| Data Exposure | Public data only; no PII | Internal data with DPA in place; pseudonymized PII | Regulated data (HIPAA/PCI); unencrypted PII; cross-border transfers |
| Bias & Fairness | Non-decision content generation | Recommendations with human review | Automated decisions affecting individuals (hiring, credit, access) |
| Security | Read-only AI tools; no system access | AI with controlled API access; sandboxed execution | AI with production system access; autonomous actions; credential access |
| Availability | AI enhances non-critical workflow | AI integrated into business process with manual fallback | AI is sole path for critical business function; no manual override |
| Compliance | No regulatory requirements apply | Industry standards apply (SOC 2, ISO 27001) | Direct regulatory requirements (EU AI Act, sector-specific) |
| Reputation | Internal use only | Customer-facing with human oversight | Autonomous customer interactions; public-facing decisions |
3. Approval Authority Matrix
Map risk levels to decision-making authority and mandatory controls.
| Risk Level | Approval Authority | Mandatory Controls | Review Frequency |
|---|---|---|---|
| Low | Team Lead / Manager | Policy acknowledgment, approved tool list | Annual |
| Medium | Director + Security Review | Risk assessment, data classification, vendor review, monitoring | Semi-annual |
| High | Executive Committee + CISO + Legal | Full impact assessment, red team testing, incident response plan, legal review, board notification | Quarterly |
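The matrix above can be encoded as a simple lookup so that tooling (an intake form, a ticketing workflow) can surface the required approvers and controls automatically. This is a minimal sketch: the dictionary mirrors the table verbatim, but the structure, function name, and any surrounding workflow are assumptions, not part of this statement.

```python
# Lookup mirroring the Approval Authority Matrix table.
# Role names and control lists come from the table; the data
# structure and helper function are illustrative only.
APPROVAL_MATRIX = {
    "Low": {
        "authority": ["Team Lead / Manager"],
        "controls": ["Policy acknowledgment", "Approved tool list"],
        "review": "Annual",
    },
    "Medium": {
        "authority": ["Director", "Security Review"],
        "controls": ["Risk assessment", "Data classification",
                     "Vendor review", "Monitoring"],
        "review": "Semi-annual",
    },
    "High": {
        "authority": ["Executive Committee", "CISO", "Legal"],
        "controls": ["Full impact assessment", "Red team testing",
                     "Incident response plan", "Legal review",
                     "Board notification"],
        "review": "Quarterly",
    },
}

def required_approvers(level: str) -> list[str]:
    """Return the approval authorities for a given risk level."""
    return APPROVAL_MATRIX[level]["authority"]
```

A workflow tool could call `required_approvers("High")` at intake time to route the request before any work begins.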
4. Risk Scoring Methodology
Provide a simple scoring rubric teams can use to self-assess AI risk.
Scoring Formula
Risk Score = Impact (1-5) × Likelihood (1-5)
Impact Scale
- 1 — Negligible: No measurable harm; internal convenience impact only
- 2 — Minor: Limited operational disruption; no data exposure; easily reversible
- 3 — Moderate: Business process disruption; potential internal data exposure; requires investigation
- 4 — Major: Customer impact; regulatory notification potential; significant remediation required
- 5 — Critical: Regulatory enforcement action; significant financial loss; reputational damage; harm to individuals
Likelihood Scale
- 1 — Rare: Less than 1% chance in 12 months
- 2 — Unlikely: 1-10% chance in 12 months
- 3 — Possible: 10-50% chance in 12 months
- 4 — Likely: 50-90% chance in 12 months
- 5 — Almost Certain: Greater than 90% chance in 12 months
Score Mapping
- 1-6: Low Risk — Accept with standard controls
- 7-14: Medium Risk — Mitigate with enhanced controls
- 15-25: High Risk — Escalate to executive committee; avoid unless compelling business case with exceptional controls
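The rubric above is small enough to implement directly, which helps teams self-assess consistently. The sketch below encodes the formula and the score bands exactly as defined; the function names and input validation are assumptions added for illustration.

```python
# Self-assessment helper implementing the scoring rubric above:
# Risk Score = Impact (1-5) x Likelihood (1-5), mapped to bands
# 1-6 Low, 7-14 Medium, 15-25 High.

def risk_score(impact: int, likelihood: int) -> int:
    """Compute the raw risk score from the two 1-5 scales."""
    if not (1 <= impact <= 5 and 1 <= likelihood <= 5):
        raise ValueError("impact and likelihood must each be between 1 and 5")
    return impact * likelihood

def risk_level(score: int) -> str:
    """Map a raw score to the Low/Medium/High band from the Score Mapping."""
    if score <= 6:
        return "Low"
    if score <= 14:
        return "Medium"
    return "High"
```

For example, a Moderate impact (3) with a Likely occurrence (4) scores 12, landing in the Medium band and triggering Director-level approval under the matrix in section 3.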
5. Review & Calibration
Define how the risk appetite is reviewed and adjusted over time.
Review Triggers
- Annual scheduled review (minimum)
- Following any Severity 1 or 2 AI incident
- Significant change in regulatory landscape (new AI legislation)
- Major organizational change (M&A, new business line)
- Quarterly review of approval decision patterns
Calibration Process
1. Collect data: Review all AI risk assessments, incidents, and exception requests from the period
2. Analyze patterns: Identify where thresholds were frequently exceeded or where exceptions cluster
3. Benchmark: Compare against industry standards and peer organizations
4. Adjust: Propose threshold changes to the executive committee with supporting rationale
5. Communicate: Publish the updated statement and notify all stakeholders
Decision Log
All risk acceptance decisions must be logged in the AI Risk Register with: date, system name, risk score, decision, approver, conditions, and next review date.
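The required register fields above can be captured as a typed record so that no mandatory attribute is omitted at logging time. This is a hypothetical schema: the field names follow the list above, but the types, class name, and example values are assumptions for illustration.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative AI Risk Register entry. Fields mirror the mandatory
# attributes listed above: date, system name, risk score, decision,
# approver, conditions, and next review date.
@dataclass
class RiskDecision:
    decision_date: date
    system_name: str
    risk_score: int               # Impact x Likelihood, 1-25
    decision: str                 # e.g. "Accept", "Mitigate", "Escalate"
    approver: str
    next_review: date
    conditions: list[str] = field(default_factory=list)

# Example entry with placeholder values.
entry = RiskDecision(
    decision_date=date(2025, 1, 15),
    system_name="Example support chatbot",   # placeholder system
    risk_score=12,
    decision="Mitigate",
    approver="Director, IT",
    next_review=date(2025, 7, 15),
    conditions=["Quarterly monitoring report", "Human review of escalations"],
)
```

Because a dataclass requires every non-defaulted field at construction time, an incomplete register entry fails immediately rather than being logged with gaps.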