AI Governance Institute

Practical Governance for Enterprise AI


Question 15 of 34

How are we managing third-party AI risks?


Governing the use of external AI APIs and vendor-embedded models, including data handling, documentation requirements, and ongoing monitoring.

If you only do three things, do these:

  1. Opt out of training data use before sending any sensitive data to external AI APIs. Most enterprise agreements allow this, but it is not the default.
  2. Request model cards or system cards from every vendor whose models influence consequential decisions. A vendor who can't provide documentation is a risk signal.
  3. Require model change notification in your vendor contracts. A model update can break your compliance posture, and you won't know it happened unless you ask.

The Situation

Who this is for: Procurement, legal, and compliance teams managing external AI API and model vendor relationships

When you need this: When onboarding an AI API vendor, during contract renewal, or after a vendor model update causes unexpected behavior

The Decision

Are we managing the compliance and liability risks of our external AI dependencies, or are we assuming vendors have it covered?

The Steps

  1. Inventory all external AI APIs and vendor-embedded models currently in use
  2. Review each vendor agreement for: training data opt-out, data retention, transfer mechanisms, liability caps, indemnification
  3. Request model cards or equivalent documentation from each vendor; flag those who cannot provide it
  4. Activate training data opt-outs for any vendor where the opt-out is not yet confirmed
  5. Negotiate model change notification requirements into new and renewed contracts
  6. Establish a periodic review cadence for each vendor: model updates, DPA compliance, risk re-scoring
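The contract review in steps 2 and 5 can be run as a simple checklist evaluation against each agreement. The sketch below is a hypothetical illustration: the clause names and vendor records are assumptions for demonstration, not a standard schema.

```python
# Hypothetical sketch: flag vendor agreements missing required AI clauses.
# Clause names and vendor records below are illustrative, not a standard schema.

REQUIRED_CLAUSES = {
    "training_data_opt_out",
    "data_retention_terms",
    "transfer_mechanism",
    "liability_cap",
    "indemnification",
    "model_change_notification",
}

def missing_clauses(contract_clauses: set[str]) -> set[str]:
    """Return the required clauses absent from a vendor agreement."""
    return REQUIRED_CLAUSES - contract_clauses

# Example inventory entries (hypothetical vendors).
vendors = {
    "summarization-api": {"training_data_opt_out", "data_retention_terms",
                          "liability_cap"},
    "embedded-scoring-model": REQUIRED_CLAUSES,
}

for name, clauses in vendors.items():
    gaps = missing_clauses(clauses)
    print(f"{name}: {'OK' if not gaps else 'GAPS: ' + str(sorted(gaps))}")
```

Running the same check at every contract renewal (step 5) turns the one-time review into the periodic cadence described in step 6.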

The Artifacts

  • Third-party AI vendor inventory template (API/model, use case, data sent, contract review status)
  • Vendor risk assessment questionnaire
  • Contract review checklist for AI APIs (required clauses)
  • Model card documentation request letter template
  • Training data opt-out tracking log
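The vendor inventory template in the first artifact can be sketched as a minimal record structure. The field names below mirror the columns listed in the artifact; everything else (types, defaults, the example entry) is an assumption for illustration.

```python
# Hypothetical sketch of one row in the third-party AI vendor inventory.
# Field names follow the template columns; defaults are illustrative.
from dataclasses import dataclass, field
from enum import Enum

class ReviewStatus(Enum):
    NOT_STARTED = "not started"
    IN_PROGRESS = "in progress"
    COMPLETE = "complete"

@dataclass
class VendorAIEntry:
    api_or_model: str
    use_case: str
    data_sent: list[str] = field(default_factory=list)  # categories of data sent
    contract_review: ReviewStatus = ReviewStatus.NOT_STARTED
    training_opt_out_confirmed: bool = False

entry = VendorAIEntry(
    api_or_model="vendor-llm-api",
    use_case="customer email drafting",
    data_sent=["customer names", "ticket text"],
)
print(entry.contract_review.value)  # prints "not started"
```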

The Output

A complete inventory of external AI dependencies, with contract reviews completed, training data opt-outs confirmed, and monitoring processes in place.

You own the risk, not just the vendor relationship

When you send data to an external AI API, you remain responsible for what happens to that data and for the outputs that come back. Terms of service from major AI providers generally disclaim liability for output accuracy and shift responsibility for appropriate use to the customer. Regulatory frameworks including the EU AI Act place obligations on AI deployers, not just developers, which means your use of a third-party model does not transfer your compliance obligations to the vendor.

This requires a different approach to vendor management than traditional software procurement. You are not just evaluating whether the vendor is reliable and secure. You are evaluating whether their model behavior is compatible with your regulatory obligations and risk tolerance.

Data handling and confidentiality

Before sending any data to an external AI API, answer three questions: Is the data used to train or improve the vendor's shared models? Where is the data processed and stored, and does that create cross-border transfer issues? What happens to the data after inference, and what is the retention period?

Most enterprise API agreements allow customers to opt out of data use for training, but this is often not the default. Review your agreements and ensure opt-outs are in place before processing sensitive or confidential data. For particularly sensitive use cases, evaluate whether on-premises or private cloud deployment of models is warranted to avoid sending data to third-party infrastructure at all.
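The opt-out check above can be enforced before any request leaves your infrastructure rather than left to policy alone. The registry and function below are hypothetical, fed from the opt-out tracking log described in the Artifacts section.

```python
# Hypothetical guard: refuse to send sensitive data to a vendor API unless a
# training-data opt-out is confirmed for that vendor in the tracking log.

OPT_OUT_CONFIRMED = {   # maintained from the training data opt-out tracking log
    "vendor-a": True,
    "vendor-b": False,
}

def can_send(vendor: str, data_is_sensitive: bool) -> bool:
    """Allow a request only if the data is non-sensitive or the vendor's
    training-data opt-out is confirmed. Unknown vendors are denied by default."""
    if not data_is_sensitive:
        return True
    return OPT_OUT_CONFIRMED.get(vendor, False)

assert can_send("vendor-a", data_is_sensitive=True)
assert not can_send("vendor-b", data_is_sensitive=True)
assert not can_send("unknown-vendor", data_is_sensitive=True)  # default deny
```

Defaulting to deny for unknown vendors means a newly adopted API is blocked until it appears in the inventory and its opt-out status is recorded.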

Documentation and audit rights

Request model cards, system cards, or equivalent documentation from all AI vendors whose models are used in consequential decisions. This documentation should describe the model's intended use cases, performance characteristics, known limitations, and bias evaluation results. If a vendor cannot provide this documentation, treat it as a risk signal.

Build model change notification into your vendor agreements. AI models are updated frequently, sometimes in ways that materially change their behavior. A model that passed your pre-deployment testing may behave differently after a vendor update. Require vendors to notify you of material model changes with sufficient lead time for you to re-evaluate before the change takes effect.

Governance Controls

Operational controls that implement the guidance in this playbook.