Research · Global · 2026-05-06

ISACA Article Calls for Integrated Privacy, Cybersecurity, and Legal Functions in AI Governance Programs

ISACA, the global professional association focused on IT governance and audit, published "Collaboration and the New Triad of AI Governance" on December 31, 2025. The article presents a structural argument that AI risk cannot be managed effectively when privacy officers, cybersecurity teams, and legal counsel operate independently, because AI systems create exposures that cut across all three domains simultaneously. Drawing on the EU AI Act, the NIST AI RMF, and multiple U.S. executive orders as reference points, the article identifies transparency and accountability as shared obligations that no single function can satisfy alone. While the piece does not introduce a new standard or impose binding obligations, it reflects ISACA's institutional position on how enterprise governance programs should be organized.

The article arrives at a moment when major regulatory frameworks are imposing multi-domain compliance obligations that were not previously coordinated. The EU AI Act, for example, requires conformity assessments, data governance documentation, and cybersecurity robustness measures simultaneously, meaning a compliance program focused solely on legal review or solely on data protection will be structurally incomplete. The NIST AI RMF similarly organizes risk management across its Govern, Map, Measure, and Manage functions, which span organizational silos. ISACA's framing reflects a broader trend of regulators and standards bodies treating AI risk as an enterprise-wide concern rather than a technical or legal matter handled by a single team, a pattern also visible in ISO/IEC 42001, ISO's AI management system standard, and in sector-specific guidance from financial regulators.

For enterprise compliance teams, the practical implication is to assess whether current governance structures assign clear ownership for AI-related obligations that sit at the intersection of privacy, security, and legal risk. Teams should evaluate whether their AI inventories capture not only the systems in use but also each system's risk classification and applicable regulatory requirements under every relevant framework, since the EU AI Act's prohibited and high-risk categories trigger different responses from legal, privacy, and security functions. Organizations operating across multiple jurisdictions should prioritize a standing cross-functional AI governance committee with defined decision rights, documented escalation paths, and a shared risk register rather than relying on ad hoc coordination. Compliance and risk officers can also use the guidance to support internal business cases for dedicated AI governance resources, since ISACA's endorsement of the triad model provides an authoritative reference point when engaging executive leadership or boards on program investment.
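
To make the inventory and risk-register idea concrete, the sketch below models a single AI inventory entry and a shared risk-register item as plain Python dataclasses. This is an illustrative schema only, not a structure taken from the ISACA article or any regulatory text: the field names, enumerations, and the example screening system are hypothetical, chosen simply to show how EU AI Act risk tiers, applicable frameworks, and cross-functional ownership might be captured in one record.

```python
from dataclasses import dataclass, field
from enum import Enum


class EUAIActRiskTier(Enum):
    """Hypothetical tags for EU AI Act risk categories used in an inventory."""
    PROHIBITED = "prohibited"
    HIGH_RISK = "high_risk"
    LIMITED_RISK = "limited_risk"
    MINIMAL_RISK = "minimal_risk"


class OwningFunction(Enum):
    """The three functions in ISACA's triad: privacy, cybersecurity, legal."""
    PRIVACY = "privacy"
    CYBERSECURITY = "cybersecurity"
    LEGAL = "legal"


@dataclass
class AISystemInventoryEntry:
    """One record in an enterprise AI inventory (illustrative schema only)."""
    system_name: str
    business_owner: str
    eu_ai_act_tier: EUAIActRiskTier
    applicable_frameworks: list[str] = field(default_factory=list)  # e.g. "NIST AI RMF"
    responsible_functions: list[OwningFunction] = field(default_factory=list)


@dataclass
class RiskRegisterItem:
    """A shared risk-register entry with one accountable owner and an escalation path."""
    risk_id: str
    description: str
    inventory_entry: AISystemInventoryEntry
    accountable_function: OwningFunction
    escalation_path: list[str] = field(default_factory=list)  # ordered roles/committees


# Illustrative usage: a hypothetical resume-screening model classified as high risk.
screening_model = AISystemInventoryEntry(
    system_name="candidate-screening-model",
    business_owner="HR Operations",
    eu_ai_act_tier=EUAIActRiskTier.HIGH_RISK,
    applicable_frameworks=["EU AI Act", "NIST AI RMF", "ISO/IEC 42001"],
    responsible_functions=[
        OwningFunction.PRIVACY,
        OwningFunction.CYBERSECURITY,
        OwningFunction.LEGAL,
    ],
)

register_item = RiskRegisterItem(
    risk_id="AI-0001",
    description="Potential discriminatory outcomes and personal-data exposure in screening",
    inventory_entry=screening_model,
    accountable_function=OwningFunction.LEGAL,
    escalation_path=["AI Governance Committee", "Chief Risk Officer", "Board Risk Committee"],
)

print(register_item.risk_id, register_item.inventory_entry.eu_ai_act_tier.value)
```

The design choice worth noting is that each risk-register item names a single accountable function while listing all responsible functions on the inventory entry, which is one way to encode the "defined decision rights" and escalation paths the guidance calls for without implying that only one team does the work.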

AI governance · cross-functional compliance · NIST AI RMF · EU AI Act · cybersecurity