Enterprise LLM Use Policy

Tier 2 GOVERN

What This Requires

Publish detailed guidance for employees using enterprise LLM tools (ChatGPT Enterprise, Copilot, Claude). Specify prohibited data inputs, required review steps, output validation, and citation requirements.

Why It Matters

Without clear rules, employees default to convenience over security. This policy prevents accidental data leakage and over-reliance on unverified LLM outputs while still enabling productivity gains.

How To Implement

Classify Data Handling Rules

  • Prohibit: customer PII, credentials, proprietary algorithms, regulated data (HIPAA, PCI).
  • Allow: anonymized data, public documentation, internal code (only in enterprise/private tools).
  • Require approval: financial data, legal contracts.
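The prohibit list can be backed by automated screening before a prompt leaves the corporate boundary. A minimal sketch, assuming simple regex patterns; real DLP rules would be far more thorough (e.g., checksum validation for card numbers), and the pattern names here are illustrative:

```python
import re

# Hypothetical patterns for prohibited inputs; a production DLP rule set
# would cover many more identifiers and formats.
PROHIBITED_PATTERNS = {
    "credential": re.compile(r"(?i)\b(api[_-]?key|password|secret)\s*[:=]\s*\S+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # US SSN (customer PII)
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # PCI-scoped card data
}

def check_prompt(text: str) -> list[str]:
    """Return the prohibited-data categories detected in a prompt."""
    return [name for name, pattern in PROHIBITED_PATTERNS.items()
            if pattern.search(text)]

violations = check_prompt("my password: hunter2 and SSN 123-45-6789")
```

A check like this can run in a browser extension or proxy layer; matches should block the prompt and log the category (not the matched text itself) for audit.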

Output Validation Requirements

Mandate human review for code merged to production, customer-facing content, and financial reports. Scale review depth (line-by-line vs. spot-check) to the risk of the output's destination.
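The risk-based review rule can be encoded as a simple lookup so tooling and reviewers apply it consistently. A sketch under assumed category names (the taxonomy below is illustrative, not from the policy text):

```python
# Map output categories to required review depth; unlisted categories
# fall back to the stricter option rather than passing unreviewed.
REVIEW_DEPTH = {
    "production_code": "line-by-line",
    "customer_facing_content": "line-by-line",
    "financial_report": "line-by-line",
    "internal_doc": "spot-check",
}

def required_review(category: str) -> str:
    """Return the mandated review depth for an LLM output category."""
    return REVIEW_DEPTH.get(category, "line-by-line")
```

Defaulting unknown categories to line-by-line review keeps the policy fail-closed until a category is explicitly classified.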

Citation & Attribution

Require disclosure whenever LLM-generated content is used externally. For code, require a comment header ("AI-assisted"); for documents, require a footnote.
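The code-attribution rule can be enforced with a pre-commit check. A minimal sketch, assuming the header marker text and the number of lines scanned (both are policy choices, not fixed by this document):

```python
# Marker text and scan depth are assumptions; set them to match the
# attribution wording your policy actually mandates.
HEADER_MARKER = "AI-assisted"

def has_attribution(source: str, lines_to_scan: int = 5) -> bool:
    """Check whether the marker appears in the first few lines of a file."""
    head = source.splitlines()[:lines_to_scan]
    return any(HEADER_MARKER in line for line in head)

ok = has_attribution("# AI-assisted: drafted with the enterprise LLM\nprint('hi')\n")
```

Wired into a pre-commit hook or CI step, this flags AI-assisted files that are missing the required header before they reach review.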

Approved Tools & Configuration

Publish a list of approved enterprise tools with SSO integration. Block free/personal LLM accounts via DLP or firewall rules. Disable high-risk features such as web browsing or plugin execution where the exposure outweighs the benefit.
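The blocking rule amounts to an egress allow-list keyed on hostname. A sketch of the decision logic a proxy or DLP rule would implement; all domain names below are hypothetical placeholders, and real enforcement belongs in the network layer rather than application code:

```python
from urllib.parse import urlparse

# Hypothetical hosts: an enterprise tool behind SSO, and a free/personal
# endpoint the policy prohibits.
APPROVED_HOSTS = {"llm.corp.example.com"}
BLOCKED_HOSTS = {"free-llm.example.org"}

def egress_decision(url: str) -> str:
    """Classify an outbound request as allow, block, or review."""
    host = urlparse(url).hostname or ""
    if host in APPROVED_HOSTS:
        return "allow"
    if host in BLOCKED_HOSTS:
        return "block"
    return "review"  # unknown LLM endpoints escalate rather than pass

decision = egress_decision("https://free-llm.example.org/chat")
```

Treating unknown LLM endpoints as "review" rather than "allow" keeps newly launched consumer tools from slipping past the blocklist until they are classified.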

Evidence & Audit

  • Published LLM use policy with data classification table
  • Approved tools list with configuration guidance
  • Training materials and completion records
  • DLP rules blocking prohibited data inputs
  • Sample work products showing citation/attribution

Related Controls