Vendor/Model Evaluation Scorecard

Type: Rubric · Function: GOVERN

Purpose

A structured rubric for assessing AI vendors and models across security, privacy, performance, transparency, and operational dimensions.

Related Controls

ISO A.9 · NIST GV-6 · OWASP LLM05

1. Evaluation Overview

Capture basic information about the vendor and the evaluation context.

Vendor/Model Name: [VENDOR NAME]

Evaluation Date: [DATE]

Evaluator: [NAME], [ROLE TITLE]

Use Case: [DESCRIPTION OF INTENDED USE]

Data Classification: Public / Internal / Confidential / Restricted

Risk Tier: Low / Medium / High

Decision: Approved / Approved with Conditions / Rejected

Next Review Date: [DATE]
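
For teams that track evaluations programmatically rather than in a document, the record above maps naturally onto a typed structure. A minimal sketch, assuming Python; the class and field names (and the `processes_pii` flag, which feeds the Privacy gate in section 3) are hypothetical, not part of the template:

```python
from dataclasses import dataclass
from enum import Enum


class DataClassification(Enum):
    """Ordered so 'Internal or higher' comparisons (section 3) work."""
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    RESTRICTED = 4


class Decision(Enum):
    APPROVED = "Approved"
    APPROVED_WITH_CONDITIONS = "Approved with Conditions"
    REJECTED = "Rejected"


@dataclass
class VendorEvaluation:
    vendor_name: str
    evaluation_date: str                  # ISO 8601, e.g. "2025-06-01"
    evaluator: str                        # name and role title
    use_case: str                         # description of intended use
    classification: DataClassification
    risk_tier: str                        # "Low" / "Medium" / "High"
    processes_pii: bool                   # hypothetical flag for the Privacy gate
    decision: Decision | None = None      # set after scoring
    next_review_date: str | None = None
```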

2. Evaluation Criteria

Score each criterion from 1 (Poor) to 5 (Excellent). To pass, the overall average must be at least 3.0 and no category average may fall below 2.0; the full pass criteria appear in section 3. A scoring sketch follows the table.

| Category | Criterion | Score (1-5) | Evidence/Notes |
| --- | --- | --- | --- |
| Security | Data encryption (in transit and at rest) | | |
| Security | Access controls and authentication | | |
| Security | Vulnerability management and patching | | |
| Privacy | Data processing agreement (DPA) available | | |
| Privacy | Data retention and deletion policies | | |
| Privacy | GDPR/CCPA compliance documentation | | |
| Performance | Accuracy/quality benchmarks provided | | |
| Performance | Latency and throughput SLAs | | |
| Performance | Uptime guarantees and incident history | | |
| Transparency | Model card or documentation available | | |
| Transparency | Training data provenance disclosed | | |
| Transparency | Bias testing and fairness reporting | | |
| Operational | API stability and versioning policy | | |
| Operational | Support responsiveness and SLA | | |
| Operational | Exit strategy and data portability | | |
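
A minimal scoring sketch, assuming Python; the criterion scores below are placeholders, three per category in table order. Each category average is the mean of its criterion scores, and the overall average is the mean of the category averages:

```python
from statistics import mean

# Placeholder criterion scores (1-5), three per category, in table order.
criterion_scores = {
    "Security":     [4, 3, 4],
    "Privacy":      [5, 4, 3],
    "Performance":  [3, 3, 4],
    "Transparency": [2, 3, 3],
    "Operational":  [4, 4, 3],
}

category_averages = {cat: mean(s) for cat, s in criterion_scores.items()}
overall_average = mean(category_averages.values())

for cat, avg in category_averages.items():
    print(f"{cat}: {avg:.2f}")
print(f"Overall: {overall_average:.2f}")
```

The printed values are exactly what goes into the Scoring Summary table in section 3.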

3. Scoring Summary

Aggregate scores and determine the final recommendation.

| Category | Average Score | Pass/Fail |
| --- | --- | --- |
| Security | | |
| Privacy | | |
| Performance | | |
| Transparency | | |
| Operational | | |
| Overall Average | | |

Pass Criteria

  • Overall average must be 3.0 or higher
  • No individual category average below 2.0
  • Security category must be 3.0 or higher for any system processing Internal or higher data
  • Privacy category must be 3.0 or higher for any system processing PII
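
These criteria translate directly into a checklist function. A sketch under the same assumptions as above (string labels for data classification; `processes_pii` is a hypothetical flag the evaluator sets):

```python
from statistics import mean


def passes(category_averages: dict[str, float],
           classification: str,
           processes_pii: bool) -> bool:
    """Apply the section 3 pass criteria to 1-5 category averages."""
    if mean(category_averages.values()) < 3.0:
        return False                                    # overall gate
    if any(avg < 2.0 for avg in category_averages.values()):
        return False                                    # per-category floor
    if classification in ("Internal", "Confidential", "Restricted") \
            and category_averages["Security"] < 3.0:
        return False                                    # Security gate
    if processes_pii and category_averages["Privacy"] < 3.0:
        return False                                    # Privacy gate
    return True


# Example: passes the overall gate (3.2) but fails the PII gate on Privacy (2.5).
averages = {"Security": 4.0, "Privacy": 2.5, "Performance": 3.5,
            "Transparency": 3.0, "Operational": 3.0}
print(passes(averages, classification="Confidential", processes_pii=True))  # False
```

Note that a vendor can clear the overall average yet still fail on a single gate, as in the example; the gates are conjunctive, not averaged.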

4. Conditions & Risk Acceptance

Document any conditions that must be met before deployment or risks that are being accepted.

Conditions for Approval

  1. [CONDITION — e.g., "Vendor must execute DPA before data processing begins"]
  2. [CONDITION — e.g., "API keys must be stored in secrets management, not application code"]
  3. [CONDITION — e.g., "Output filtering must be implemented before customer-facing deployment"]

Accepted Risks

  1. [RISK — e.g., "Model may produce inaccurate outputs; mitigated by mandatory human review"]
  2. [RISK — e.g., "Vendor does not provide training data provenance; mitigated by output monitoring"]

Approvals

  • Security Review: [NAME] — [DATE] — Approved / Rejected
  • Privacy Review: [NAME] — [DATE] — Approved / Rejected
  • Business Sponsor: [NAME] — [DATE] — Approved / Rejected