Question 3 of 24
How do we ensure third-party AI vendors meet our standards?
Extend vendor due diligence to cover model transparency, data handling, bias testing, and contractual liability for AI outputs.
Traditional vendor reviews are not enough
Standard vendor due diligence covers financial stability, security certifications, and data processing agreements. AI vendors require additional scrutiny. You are not just evaluating a software product. You are evaluating a system that makes or influences decisions, and you are potentially liable for those decisions regardless of whether the model is yours.
The EU AI Act and emerging U.S. frameworks place obligations on AI deployers, not just developers. If you deploy a third-party AI system in a high-risk context, you may be responsible for ensuring it meets regulatory standards even if you did not build it.
What to ask vendors
Model transparency: Can the vendor explain how the model was trained, on what data, and how it makes decisions? Do they provide model cards or system cards documenting capabilities and limitations? What testing has been done for accuracy, robustness, and bias?
Data handling: What data do they use to train or fine-tune models? Does your data get used to train shared models? Where is data processed and stored? What happens to your data if you terminate the contract?
Bias and fairness testing: Has the model been tested for disparate impact across protected characteristics? What methodology was used? How are bias issues remediated, and on what timeline?
Incident response: What is their process when a model produces harmful, inaccurate, or discriminatory outputs? How quickly do they notify customers? What remedies are available?
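The four assessment areas above lend themselves to a simple structured tracker so open questions do not get lost during a review cycle. The sketch below is illustrative only: the area names, dictionary structure, and gap-checking function are assumptions for this example, not a standard due-diligence schema.

```python
# Illustrative vendor AI due-diligence tracker (hypothetical structure,
# not a standard schema). Each area corresponds to one of the question
# groups above; a missing or empty answer marks an open item.

ASSESSMENT_AREAS = [
    "model_transparency",
    "data_handling",
    "bias_and_fairness",
    "incident_response",
]

def open_items(responses: dict) -> list[str]:
    """Return assessment areas with missing or empty vendor responses."""
    return [area for area in ASSESSMENT_AREAS if not responses.get(area)]

# Example: a vendor that has documented transparency, data handling,
# and incident response, but has not yet shared bias-testing results.
vendor = {
    "model_transparency": "Model card provided, v2.1",
    "data_handling": "Customer data excluded from shared model training",
    "incident_response": "24-hour customer notification commitment",
}
print(open_items(vendor))  # ['bias_and_fairness']
```

In practice the values would be links to evidence (model cards, test reports, contract clauses) rather than free text, and an area stays open until a reviewer accepts the evidence.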
Contractual protections
Standard vendor contracts do not adequately address AI liability. Negotiate clauses that cover: representation that the system complies with applicable AI regulations; notification obligations when the model is materially updated; your right to audit bias testing results; limitations on how your data is used for training; and indemnification for claims arising from AI outputs used in your operations.
For high-risk deployments, consider requiring vendors to carry errors and omissions insurance that covers AI-related claims. The insurance market for AI liability is still developing, but a policy's existence and scope are useful signals of how seriously a vendor takes its own risk.
