Question 9 of 24
How do we maintain data privacy compliance when using AI?
Addressing training data sourcing, data minimization, cross-border transfers, and the right to explanation under GDPR and CCPA.
AI introduces new privacy risks under existing obligations
GDPR and CCPA were not written with large language models in mind, but they apply to AI systems that process personal data. The principles of purpose limitation, data minimization, and storage limitation create constraints on how personal data can be used in AI training and inference. Many common AI deployment patterns are in tension with these principles.
The key privacy questions for AI are: What personal data is being processed, at what stage (training, fine-tuning, inference, output), for what purpose, under what legal basis, and with what retention and deletion controls? If you cannot answer these questions for each AI system in your inventory, you have a privacy gap.
Training data and data subject rights
If personal data was used to train a model, data subjects may have rights with respect to that data, including the right to erasure under GDPR Article 17. Machine unlearning, the technical process for removing specific data from a trained model, is an active area of research but not yet reliably achievable for large models. Organizations should consider this limitation before using personal data in training and prefer approaches that achieve the same objectives with synthetic or anonymized data.
GDPR Article 22 restricts solely automated decision-making that produces legal or similarly significant effects on individuals, and requires that data subjects be able to obtain human review, express their point of view, and contest the decision. If your AI systems make or significantly influence decisions about individuals, you need a process through which data subjects can exercise these rights.
Cross-border data transfers
Many AI vendors process data in the United States, which creates GDPR transfer requirements for organizations subject to EU data protection law. Verify that your AI vendors have appropriate transfer mechanisms in place: Standard Contractual Clauses, Binding Corporate Rules, or reliance on the EU-U.S. Data Privacy Framework where applicable.
Review your AI vendors' data processing agreements specifically for AI-related provisions. Standard DPAs often do not address whether customer data is used for model training, where inference occurs, or how subprocessors used for AI infrastructure are managed. Request AI-specific addenda if the vendor's standard DPA does not cover these points.
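The DPA review described above can be reduced to a short checklist covering the points the text names: training use of customer data, inference location, AI subprocessors, and the transfer mechanism. The keys and the example vendor summary below are illustrative assumptions, not a legal checklist.

```python
# Hypothetical sketch: AI-specific points a standard DPA often omits.
# Key names and the example data are illustrative only.
REQUIRED_AI_DPA_POINTS = {
    "training_use_addressed",     # is customer data used for model training?
    "inference_location_stated",  # where does inference occur?
    "ai_subprocessors_listed",    # how are AI-infrastructure subprocessors managed?
    "transfer_mechanism_named",   # SCCs, BCRs, or Data Privacy Framework
}


def missing_dpa_points(dpa_summary: dict[str, bool]) -> set[str]:
    """Return AI-specific points the DPA does not cover; these are
    candidates for an AI-specific addendum."""
    return {p for p in REQUIRED_AI_DPA_POINTS if not dpa_summary.get(p, False)}


# Example: a standard DPA that covers training use and transfers,
# but is silent on inference location and AI subprocessors.
vendor_dpa = {
    "training_use_addressed": True,
    "transfer_mechanism_named": True,
}
print(sorted(missing_dpa_points(vendor_dpa)))
# → ['ai_subprocessors_listed', 'inference_location_stated']
```

A non-empty result is the trigger for requesting the AI-specific addendum mentioned above.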
