
Question 15 of 24

How are we managing third-party AI risks?

Governing the use of external AI APIs and vendor-embedded models, including data handling, documentation requirements, and ongoing monitoring.

You own the risk, not just the vendor relationship

When you send data to an external AI API, you remain responsible for what happens to that data and for the outputs that come back. Terms of service from major AI providers generally disclaim liability for output accuracy and shift responsibility for appropriate use to the customer. Regulatory frameworks including the EU AI Act place obligations on AI deployers, not just developers, which means your use of a third-party model does not transfer your compliance obligations to the vendor.

This requires a different approach to vendor management than traditional software procurement. You are not just evaluating whether the vendor is reliable and secure. You are evaluating whether their model behavior is compatible with your regulatory obligations and risk tolerance.

Data handling and confidentiality

Before sending any data to an external AI API, answer three questions: Is the data used to train or improve the vendor's shared models? Where is the data processed and stored, and does that create cross-border transfer issues? What happens to the data after inference, and what is the retention period?

Most enterprise API agreements allow customers to opt out of data use for training, but this is often not the default. Review your agreements and ensure opt-outs are in place before processing sensitive or confidential data. For particularly sensitive use cases, evaluate whether on-premises or private cloud deployment of models is warranted to avoid sending data to third-party infrastructure at all.
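One way to make these questions operational is to record the answers per vendor in a structured assessment that gates what data classifications may be sent to the service. The sketch below is illustrative only; the field names and the clearance rule are assumptions, not a prescribed schema or any vendor's actual terms.

```python
from dataclasses import dataclass, field

@dataclass
class VendorDataHandlingAssessment:
    """Record of the data-handling review for one external AI API (hypothetical schema)."""
    vendor: str
    service: str
    used_for_shared_model_training: bool          # is customer data used to train/improve shared models?
    training_opt_out_confirmed: bool              # has the opt-out been exercised and confirmed in writing?
    processing_regions: list[str] = field(default_factory=list)   # where data is processed and stored
    cross_border_transfer_reviewed: bool = False  # has the transfer analysis for those regions been done?
    retention_period_days: int | None = None      # post-inference retention; None if the vendor has not stated one
    approved_data_classifications: list[str] = field(default_factory=list)  # e.g. "public", "internal"

    def cleared_for_sensitive_data(self) -> bool:
        """Clear a vendor for sensitive data only when every question has a satisfactory answer."""
        return (
            not self.used_for_shared_model_training
            and self.training_opt_out_confirmed
            and self.cross_border_transfer_reviewed
            and self.retention_period_days is not None
        )
```

A record like this can sit in the vendor register and be re-checked whenever the agreement or the vendor's data-handling terms change.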

Documentation and audit rights

Request model cards, system cards, or equivalent documentation from all AI vendors whose models are used in consequential decisions. This documentation should describe the model's intended use cases, performance characteristics, known limitations, and bias evaluation results. If a vendor cannot provide this documentation, treat it as a risk signal.
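The same documentation requirement can be tracked as a simple checklist per vendor model, so that a missing model card or an incomplete one surfaces automatically as a risk signal. The items below mirror the elements named above; the structure and names are assumptions for illustration.

```python
from dataclasses import dataclass

# Required documentation elements, per the guidance above (hypothetical checklist).
REQUIRED_DOCUMENTATION_ITEMS = (
    "intended_use_cases",
    "performance_characteristics",
    "known_limitations",
    "bias_evaluation_results",
)

@dataclass
class VendorDocumentationReview:
    vendor: str
    model_name: str
    provided_items: tuple[str, ...]  # elements actually covered by the vendor's model or system card

    def missing_items(self) -> tuple[str, ...]:
        """List the required elements the vendor's documentation does not cover."""
        return tuple(i for i in REQUIRED_DOCUMENTATION_ITEMS if i not in self.provided_items)

    def is_risk_signal(self) -> bool:
        # Any documentation gap is treated as a risk signal for models used in consequential decisions.
        return bool(self.missing_items())
```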

Build model change notification into your vendor agreements. AI models are updated frequently, sometimes in ways that materially change their behavior. A model that passed your pre-deployment testing may behave differently after a vendor update. Require vendors to notify you of material model changes with sufficient lead time for you to re-evaluate before the change takes effect.
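Contractual notification can be backed up by a technical check. If the vendor's API reports a model identifier or fingerprint in its responses (many do, though field names vary by provider), you can compare it against the version you approved during pre-deployment testing and alert when it drifts. A minimal sketch, with hypothetical use-case names and version strings:

```python
import logging

logger = logging.getLogger("model_change_monitor")

# Baseline identifiers recorded when each vendor model passed pre-deployment testing.
# Values are placeholders for illustration only.
APPROVED_MODEL_VERSIONS = {
    "summarization-service": "vendor-model-2024-06-01",
}

def check_model_version(use_case: str, reported_version: str) -> bool:
    """Compare the model version reported by the vendor API against the approved baseline.

    Returns True when the reported version matches the baseline. Logs an alert and
    returns False otherwise, so the calling service can pause or trigger re-evaluation
    before continuing to rely on the updated model.
    """
    approved = APPROVED_MODEL_VERSIONS.get(use_case)
    if approved is None:
        logger.warning("No approved baseline recorded for use case %s", use_case)
        return False
    if reported_version != approved:
        logger.error(
            "Model change detected for %s: approved %s, vendor now reports %s",
            use_case, approved, reported_version,
        )
        return False
    return True
```

A check like this does not replace vendor notification, but it catches silent updates and ties detection directly to the re-evaluation step described above.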