AI Governance Institute

Practical Governance for Enterprise AI


Question 3 of 34

How do we ensure third-party AI vendors meet our standards?


Extending vendor due diligence to cover model transparency, data handling, bias testing, and contractual liability for AI outputs.

If you only do three things, do these:

  1. Ask every AI vendor three questions before you go live: How was the model trained? Does my data train shared models? What's your process when the model produces harmful outputs?
  2. Audit your existing vendor contracts for AI liability gaps — standard contracts almost never cover AI adequately.
  3. Require vendors to notify you before material model updates. One update can silently break your compliance posture.

The Situation

Who this is for: Procurement, legal, and compliance teams evaluating or renewing AI vendor relationships

When you need this: Before signing an AI vendor contract, during annual vendor reviews, or after an AI incident involving a third-party system

The Decision

Does this vendor's AI system meet our standards for transparency, data handling, and liability — and are we contractually protected if it doesn't?

The Steps

  1. Identify all current and prospective AI vendors, including AI features embedded in existing SaaS products
  2. Send each material vendor a due diligence questionnaire (model transparency, data handling, bias testing, incident response); one way to track the responses is sketched after this list
  3. Review data processing agreements specifically for AI provisions (training data use, inference location, subprocessors)
  4. Assess contractual gaps: liability caps, indemnification, model change notification, audit rights
  5. Negotiate AI-specific clauses for any vendor with material risk exposure
  6. Establish ongoing monitoring: periodic attestation, model change notifications, and incident reporting SLAs
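
If you are tracking more than a handful of vendors, it helps to capture step 2's questionnaire responses and any gaps you find as structured records rather than free-form documents. The Python sketch below is a minimal illustration: the four domain names mirror this playbook's questionnaire domains, while the record structure, field names, and example vendor are assumptions, not a prescribed schema.

    from dataclasses import dataclass, field

    # The four domains mirror this playbook's questionnaire domains.
    DOMAINS = ("model_transparency", "data_handling",
               "bias_testing", "incident_response")

    @dataclass
    class DueDiligenceRecord:
        """Illustrative record of one vendor's questionnaire review."""
        vendor: str
        responses: dict = field(default_factory=dict)  # domain -> vendor answer
        gaps: dict = field(default_factory=dict)       # domain -> list of gaps

    def unanswered(record):
        """Domains the vendor has not yet responded on."""
        return [d for d in DOMAINS if d not in record.responses]

    def open_gaps(record):
        """Flatten gaps into 'domain: gap' strings for a review summary."""
        return [f"{d}: {g}" for d, items in record.gaps.items() for g in items]

    record = DueDiligenceRecord(vendor="ExampleVendor")  # hypothetical vendor
    record.responses["data_handling"] = "Customer data does not train shared models."
    record.gaps["model_transparency"] = ["No model card provided"]
    print(unanswered(record))  # domains still awaiting a response
    print(open_gaps(record))   # ['model_transparency: No model card provided']

A structure like this also makes the annual review in step 6 easier: re-running the same summary against last year's record shows which gaps were actually closed.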

The Artifacts

  • AI vendor due diligence questionnaire (25–30 questions across four domains)
  • AI contract clause library (training data use, liability, change notification, audit rights, indemnification)
  • Vendor risk scorecard template (a minimal scoring sketch follows this list)
  • Model card / system card request template
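
The risk scorecard can be as simple as a weighted rating across the same four domains. A minimal sketch follows; the weights, the 1–5 scale, and the tier cutoffs are illustrative assumptions to be tuned to your own risk appetite, not a standard.

    # Illustrative weights over the four questionnaire domains (assumptions).
    WEIGHTS = {
        "model_transparency": 0.25,
        "data_handling": 0.30,
        "bias_testing": 0.25,
        "incident_response": 0.20,
    }

    def risk_score(ratings):
        """Weighted average of 1-5 ratings (1 = strong practice, 5 = weak)."""
        assert set(ratings) == set(WEIGHTS), "rate every domain"
        return sum(WEIGHTS[d] * r for d, r in ratings.items())

    def risk_tier(score):
        """Illustrative cutoffs; tune to your own risk appetite."""
        if score >= 3.5:
            return "high"    # negotiate AI-specific clauses before renewal
        if score >= 2.5:
            return "medium"  # annual re-attestation and monitoring
        return "low"

    ratings = {"model_transparency": 4, "data_handling": 2,
               "bias_testing": 3, "incident_response": 3}
    score = risk_score(ratings)
    print(round(score, 2), risk_tier(score))  # 2.95 medium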

The Output

A documented due diligence record for every material AI vendor, with gaps identified and contractual protections in place for high-risk relationships.

Traditional vendor reviews are not enough

Standard vendor due diligence covers financial stability, security certifications, and data processing agreements. AI vendors require additional scrutiny. You are not just evaluating a software product. You are evaluating a system that makes or influences decisions, and you are potentially liable for those decisions regardless of whether the model is yours.

The EU AI Act and emerging U.S. frameworks place obligations on AI deployers, not just developers. If you deploy a third-party AI system in a high-risk context, you may be responsible for ensuring it meets regulatory standards even if you did not build it.

What to ask vendors

Model transparency: Can the vendor explain how the model was trained, on what data, and how it makes decisions? Do they provide model cards or system cards documenting capabilities and limitations? What testing has been done for accuracy, robustness, and bias?

Data handling: What data do they use to train or fine-tune models? Does your data get used to train shared models? Where is data processed and stored? What happens to your data if you terminate the contract?

Bias and fairness testing: Has the model been tested for disparate impact across protected characteristics? What methodology was used? How are bias issues remediated, and on what timeline?
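
If a vendor reports disparate impact results, you can sanity-check the headline numbers yourself. A common screen is the four-fifths rule: each group's selection rate should be at least 80% of the most favorably treated group's rate. The sketch below uses invented counts, and the 0.80 threshold is a screening heuristic rather than a legal bright line.

    # Invented counts for illustration; 0.80 is the conventional
    # four-fifths threshold, a screening heuristic only.
    outcomes = {
        "group_a": (48, 100),  # (selected, total) -> selection rate 0.48
        "group_b": (35, 100),  # selection rate 0.35
    }

    rates = {g: sel / tot for g, (sel, tot) in outcomes.items()}
    best = max(rates.values())  # most favorably treated group's rate

    for group, rate in rates.items():
        ratio = rate / best
        flag = "FLAG" if ratio < 0.80 else "ok"
        print(f"{group}: selection_rate={rate:.2f} ratio={ratio:.2f} {flag}")
    # group_a: selection_rate=0.48 ratio=1.00 ok
    # group_b: selection_rate=0.35 ratio=0.73 FLAG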

Incident response: What is their process when a model produces harmful, inaccurate, or discriminatory outputs? How quickly do they notify customers? What remedies are available?

Contractual protections

Standard vendor contracts do not adequately address AI liability. Negotiate clauses that cover: representation that the system complies with applicable AI regulations; notification obligations when the model is materially updated; your right to audit bias testing results; limitations on how your data is used for training; and indemnification for claims arising from AI outputs used in your operations.

For high-risk deployments, consider requiring vendors to carry errors and omissions insurance that covers AI-related claims. The insurance market for AI liability is still developing, but the existence and scope of a vendor's coverage are themselves useful signals of how seriously it takes its own risk.