AI-Generated Code Review Standards
What This Requires
Require human review of all AI-generated code before merge to production branches. Reviews must check for: logic errors, security vulnerabilities, license compliance, and hallucinated APIs or dependencies.
Why It Matters
LLMs hallucinate non-existent functions, introduce subtle bugs, and copy code from training data with restrictive licenses. Blind acceptance creates technical debt and legal risk.
How To Implement
Define Review Checklist
Reviewers must verify:
- Code compiles and tests pass
- No hardcoded secrets
- All dependencies exist in the package registry
- Functions and APIs are real, not hallucinated
- License compatibility, if the LLM suggests third-party code
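Two of these checks (hardcoded secrets, invented dependencies) lend themselves to a lightweight pre-review helper. The sketch below is illustrative, not exhaustive: the secret patterns and the requirements-file parsing are assumptions a team would tune to its own stack.

```python
import re

# Illustrative patterns that often indicate hardcoded secrets.
# A real deployment would use a dedicated scanner and a fuller rule set.
SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|secret|password|token)\s*[=:]\s*['\"][^'\"]+['\"]"),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID format
]

def find_secret_candidates(source: str) -> list[str]:
    """Return lines that look like hardcoded secrets, for reviewer attention."""
    return [
        line.strip()
        for line in source.splitlines()
        if any(p.search(line) for p in SECRET_PATTERNS)
    ]

def extract_dependencies(requirements_text: str) -> list[str]:
    """Extract package names from a requirements-style file so a reviewer
    can confirm each one exists in the registry -- LLMs sometimes invent
    plausible-sounding package names."""
    names = []
    for line in requirements_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments
        if not line:
            continue
        # Cut the name off at the first version/extras/marker character.
        name = re.split(r"[<>=!~\[;]", line, maxsplit=1)[0].strip()
        if name:
            names.append(name)
    return names
```

The extracted names are for the reviewer to check against the registry by hand or via its API; the script deliberately does not auto-approve anything.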
Tag AI Contributions
Require a commit message prefix ("AI-assisted:") or a PR label. This enables metrics tracking and targeted audits.
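The prefix convention can be enforced mechanically in a commit hook or CI job and rolled up for metrics. A minimal sketch, assuming the "AI-assisted:" prefix named above:

```python
AI_PREFIX = "AI-assisted:"

def is_tagged(commit_message: str) -> bool:
    """True if the commit message carries the agreed AI-assistance prefix."""
    return commit_message.lstrip().startswith(AI_PREFIX)

def audit_commits(messages: list[str]) -> dict[str, int]:
    """Count tagged vs. untagged commits for metrics tracking and audits."""
    tagged = sum(is_tagged(m) for m in messages)
    return {"ai_assisted": tagged, "other": len(messages) - tagged}
```

Wired into a `commit-msg` hook, `is_tagged` can prompt the author; run over the log, `audit_commits` feeds the audit metrics.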
Static Analysis Integration
Run SAST tools (SonarQube, Semgrep) on all PRs. Flag high-severity findings for mandatory human review even if tests pass.
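The "flag for mandatory human review" gate can live in a small CI step that inspects the scanner's report. The sketch below assumes findings have already been flattened into dicts with a `severity` field (Semgrep, for example, nests severity under each result's `extra` key, so a real pipeline would extract it first); the severity names are assumptions to adapt to the tool in use.

```python
# Severity labels treated as blocking; adjust to the SAST tool's vocabulary.
HIGH_SEVERITIES = {"ERROR", "HIGH", "CRITICAL"}

def requires_human_review(findings: list[dict]) -> bool:
    """True when any finding is high severity, so the CI job can block
    the merge pending explicit human sign-off -- even if tests pass."""
    return any(
        str(f.get("severity", "")).upper() in HIGH_SEVERITIES
        for f in findings
    )

# Hypothetical flattened findings for illustration:
sample = [
    {"check_id": "eval-detected", "severity": "ERROR"},
    {"check_id": "todo-comment", "severity": "INFO"},
]
```

A CI job would call `requires_human_review` on the parsed report and exit non-zero to hold the PR until a reviewer approves.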
Training & Culture
Train engineers on common LLM pitfalls (hallucination, outdated patterns, copy-paste bias). Encourage a "trust but verify" mindset.
Evidence & Audit
- Code review checklist specific to AI-generated code
- PR records showing human review and approval before merge
- Commit messages or labels identifying AI-assisted code
- SAST scan results integrated into CI/CD pipeline
- Training materials and completion records