AI Risk Appetite Statement

Document type: Statement · NIST AI RMF function: GOVERN

Purpose

This statement documents the organization's AI risk tolerance across key risk categories, with concrete thresholds and the approval authorities required at each level.

Related Controls

ISO/IEC 42001 Clause 6 · NIST AI RMF GOVERN 1 (GV-1)

1. Executive Summary

Provide a high-level statement of the organization's overall risk appetite for AI adoption.

[ORGANIZATION NAME] embraces a moderate risk appetite for AI adoption, balancing innovation with responsible governance. We accept calculated risks where AI creates measurable business value, provided adequate controls are in place and risks to individuals, data, and operations remain within defined thresholds.

This statement was approved by [EXECUTIVE TITLE] on [DATE] and will be reviewed annually or following any AI-related incident classified as Severity 1 or 2.

Statement Owner: [ROLE TITLE], [DEPARTMENT]

2. Risk Categories & Tolerance Levels

Define 4-6 risk domains with concrete low/medium/high thresholds and examples.

| Risk Category | Low (Accept) | Medium (Mitigate) | High (Escalate/Avoid) |
|---|---|---|---|
| Data Exposure | Public data only; no PII | Internal data with DPA in place; pseudonymized PII | Regulated data (HIPAA/PCI); unencrypted PII; cross-border transfers |
| Bias & Fairness | Non-decision content generation | Recommendations with human review | Automated decisions affecting individuals (hiring, credit, access) |
| Security | Read-only AI tools; no system access | AI with controlled API access; sandboxed execution | AI with production system access; autonomous actions; credential access |
| Availability | AI enhances non-critical workflow | AI integrated into business process with manual fallback | AI is sole path for critical business function; no manual override |
| Compliance | No regulatory requirements apply | Industry standards apply (SOC 2, ISO 27001) | Direct regulatory requirements (EU AI Act, sector-specific) |
| Reputation | Internal use only | Customer-facing with human oversight | Autonomous customer interactions; public-facing decisions |

3. Approval Authority Matrix

Map risk levels to decision-making authority and mandatory controls.

| Risk Level | Approval Authority | Mandatory Controls | Review Frequency |
|---|---|---|---|
| Low | Team Lead / Manager | Policy acknowledgment, approved tool list | Annual |
| Medium | Director + Security Review | Risk assessment, data classification, vendor review, monitoring | Semi-annual |
| High | Executive Committee + CISO + Legal | Full impact assessment, red team testing, incident response plan, legal review, board notification | Quarterly |
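Teams that automate intake workflows sometimes encode this matrix as machine-readable policy so tooling can look up the required approvers and controls for a given risk level. A minimal sketch follows; the dictionary keys and helper name are illustrative, not part of the statement:

```python
# Illustrative encoding of the approval authority matrix above.
# Keys and the lookup helper are assumptions for tooling purposes,
# not mandated by this statement.
APPROVAL_MATRIX = {
    "Low": {
        "authority": ["Team Lead / Manager"],
        "controls": ["Policy acknowledgment", "Approved tool list"],
        "review_frequency": "Annual",
    },
    "Medium": {
        "authority": ["Director", "Security Review"],
        "controls": ["Risk assessment", "Data classification",
                     "Vendor review", "Monitoring"],
        "review_frequency": "Semi-annual",
    },
    "High": {
        "authority": ["Executive Committee", "CISO", "Legal"],
        "controls": ["Full impact assessment", "Red team testing",
                     "Incident response plan", "Legal review",
                     "Board notification"],
        "review_frequency": "Quarterly",
    },
}

def requirements_for(risk_level: str) -> dict:
    """Return approvers, mandatory controls, and review cadence for a tier."""
    try:
        return APPROVAL_MATRIX[risk_level]
    except KeyError:
        raise ValueError(f"Unknown risk level: {risk_level!r}") from None
```

For example, `requirements_for("Medium")["review_frequency"]` returns `"Semi-annual"`, which an intake tool could use to schedule the next review automatically.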

4. Risk Scoring Methodology

Provide a simple scoring rubric teams can use to self-assess AI risk.

Scoring Formula

Risk Score = Impact (1-5) x Likelihood (1-5)

Impact Scale

  • 1 — Negligible: No measurable harm; internal convenience impact only
  • 2 — Minor: Limited operational disruption; no data exposure; easily reversible
  • 3 — Moderate: Business process disruption; potential internal data exposure; requires investigation
  • 4 — Major: Customer impact; regulatory notification potential; significant remediation required
  • 5 — Critical: Regulatory enforcement action; significant financial loss; reputational damage; harm to individuals

Likelihood Scale

  • 1 — Rare: Less than 1% chance in 12 months
  • 2 — Unlikely: 1-10% chance in 12 months
  • 3 — Possible: 10-50% chance in 12 months
  • 4 — Likely: 50-90% chance in 12 months
  • 5 — Almost Certain: Greater than 90% chance in 12 months

Score Mapping

  • 1-6: Low Risk — Accept with standard controls
  • 7-14: Medium Risk — Mitigate with enhanced controls
  • 15-25: High Risk — Escalate to executive committee; avoid unless compelling business case with exceptional controls
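The rubric above can be expressed as a small self-assessment helper so teams score systems consistently. This is a sketch under the statement's 1-5 scales and score bands; the function name and tier comments are illustrative:

```python
def score_ai_risk(impact: int, likelihood: int) -> tuple[int, str]:
    """Compute Risk Score = Impact x Likelihood and map it to a tier.

    Both inputs use the 1-5 scales defined in this statement.
    Illustrative helper; not a mandated tool.
    """
    if not (1 <= impact <= 5 and 1 <= likelihood <= 5):
        raise ValueError("impact and likelihood must each be 1-5")
    score = impact * likelihood
    if score <= 6:
        tier = "Low"     # Accept with standard controls
    elif score <= 14:
        tier = "Medium"  # Mitigate with enhanced controls
    else:
        tier = "High"    # Escalate to executive committee
    return score, tier
```

For example, a system with Major impact (4) and Possible likelihood (3) scores 12, landing in the Medium band and triggering Director-level approval per the matrix above.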

5. Review & Calibration

Define how the risk appetite is reviewed and adjusted over time.

Review Triggers

  • Annual scheduled review (minimum)
  • Following any Severity 1 or 2 AI incident
  • Significant change in regulatory landscape (new AI legislation)
  • Major organizational change (M&A, new business line)
  • Quarterly review of approval decision patterns

Calibration Process

  1. Collect data: Review all AI risk assessments, incidents, and exception requests from the period
  2. Analyze patterns: Identify where thresholds were frequently exceeded or where exceptions cluster
  3. Benchmark: Compare against industry standards and peer organizations
  4. Adjust: Propose threshold changes to executive committee with supporting rationale
  5. Communicate: Publish updated statement and notify all stakeholders

Decision Log

All risk acceptance decisions must be logged in the AI Risk Register with: date, system name, risk score, decision, approver, conditions, and next review date.
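One illustrative way to capture the required register fields as a structured record is sketched below; the field names and example values are assumptions, not a mandated schema:

```python
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class RiskDecision:
    """One entry in the AI Risk Register (illustrative schema only)."""
    decision_date: date
    system_name: str
    risk_score: int   # Impact (1-5) x Likelihood (1-5)
    decision: str     # e.g. "Accept", "Mitigate", "Escalate"
    approver: str
    conditions: str
    next_review: date

# Hypothetical example entry.
entry = RiskDecision(
    decision_date=date(2025, 1, 15),
    system_name="Customer support assistant",
    risk_score=12,
    decision="Mitigate",
    approver="Director of Engineering",
    conditions="Human review of all customer-facing replies",
    next_review=date(2025, 7, 15),
)
```

Serializing such records (e.g. via `asdict(entry)`) makes it straightforward to feed the quarterly review of approval decision patterns described in the calibration process.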
