AI Governance Institute

Practical Governance for Enterprise AI

Voluntary · Guideline · APAC/Australia · High risk · Limited risk · Minimal risk

New Zealand Responsible AI Guidance for Business

Issued by

New Zealand Government

Status: Live · Effective: 2025-07-01 · ID: NZ-RAI-BIZ · Verified: April 2026

The New Zealand Government has published comprehensive guidance to help businesses implement responsible AI practices in commercial contexts. It provides a practical framework covering AI governance structures, risk management processes, and accountability expectations. While non-binding, the guidance reflects government expectations for how enterprises operating in New Zealand should develop and deploy AI systems.

Applies To

Large enterprise · SMB · AI developer · AI deployer

Overview

The New Zealand Responsible AI Guidance for Business offers a structured approach to AI governance intended for commercial entities operating within New Zealand. It addresses the full lifecycle of AI use, from procurement and development through to deployment and ongoing monitoring, with an emphasis on risk identification and mitigation. Key provisions include guidance on establishing internal governance structures, conducting risk assessments, ensuring transparency with affected parties, and maintaining human oversight of automated decisions. The guidance does not carry statutory enforcement mechanisms, but it signals regulatory expectations and may inform future binding requirements or regulatory scrutiny. It aligns with international responsible AI frameworks, including the OECD AI Principles, and is intended to help businesses prepare for a more regulated environment. Enterprises are encouraged to treat the guidance as a baseline standard for internal AI governance programs.

Key Requirements

  • Establish documented AI governance structures with clear accountability and oversight roles
  • Conduct and record risk assessments for AI systems prior to deployment, with particular attention to high-impact use cases
  • Implement transparency measures so that individuals affected by AI-driven decisions are adequately informed
  • Maintain human oversight mechanisms for consequential automated decisions, with defined escalation and review procedures
  • Develop and apply processes for ongoing monitoring of AI system performance, bias, and unintended outcomes
  • Engage with affected stakeholders, including employees and customers, when deploying AI in ways that materially affect them

What Your Organization Must Do

  • Establish a documented AI governance structure by July 1, 2025, assigning named accountability owners (e.g., a Chief AI Officer or AI Governance Committee) responsible for oversight of all AI systems in use across the business.
  • Conduct and record pre-deployment risk assessments for every AI system, prioritising high-impact use cases such as credit decisions, hiring tools, and customer-facing automation, and retain assessment records as evidence of due diligence for potential regulatory review.
  • Implement transparency disclosures for individuals affected by AI-driven decisions, including clear notices at point of interaction and accessible explanations of how automated decisions are made or influenced.
  • Define and document human oversight procedures for consequential automated decisions, specifying escalation thresholds, review timelines, and the roles authorised to override or audit AI outputs, with these procedures in place before the July 1, 2025 effective date.
  • Build ongoing monitoring processes that track AI system performance, bias indicators, and unintended outcomes on a regular cadence (at minimum quarterly), with results reported to the governance body and remediation steps logged.
  • Develop a stakeholder engagement protocol requiring consultation with materially affected employees and customers prior to deploying AI systems that significantly change workflows, roles, or service delivery, and document outcomes of that engagement.
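To make the record-keeping obligations above concrete, the sketch below models a single entry in a pre-deployment risk assessment register. The guidance does not prescribe any schema or tooling; the class, field names, and readiness rule here are illustrative assumptions showing how the required elements (a named accountability owner, a risk rating, transparency and oversight status, and a monitoring cadence) could be captured in a structured, auditable form.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical register entry for a pre-deployment AI risk assessment.
# Field names and the readiness rule are illustrative, not prescribed
# by the NZ guidance.

@dataclass
class RiskAssessment:
    system_name: str
    use_case: str                      # e.g. "credit decisioning", "hiring screen"
    accountability_owner: str          # named owner per the governance structure
    risk_level: str                    # "high-impact", "limited", or "minimal"
    assessed_on: date
    human_oversight: bool              # escalation/override path documented?
    transparency_notice: bool          # point-of-interaction disclosure in place?
    monitoring_cadence_days: int = 90  # quarterly review at minimum
    findings: list = field(default_factory=list)

    def deployment_ready(self) -> bool:
        """High-impact systems need oversight and transparency before go-live."""
        if self.risk_level == "high-impact":
            return self.human_oversight and self.transparency_notice
        return True

assessment = RiskAssessment(
    system_name="LoanScorer",
    use_case="credit decisioning",
    accountability_owner="AI Governance Committee",
    risk_level="high-impact",
    assessed_on=date(2025, 6, 15),
    human_oversight=True,
    transparency_notice=False,
)
print(assessment.deployment_ready())  # False: transparency notice still missing
```

Retaining such records, however they are implemented, gives a business the due-diligence evidence the guidance anticipates regulators may ask to see.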

Playbook Guidance

Step-by-step implementation guidance for compliance teams.

Frequently Asked Questions

Is the New Zealand Responsible AI Guidance for Business legally binding?
No. The guidance is non-binding and carries no statutory enforcement mechanisms. However, it signals government expectations and may inform future binding regulations or influence how regulators assess AI-related conduct by businesses operating in New Zealand.
Which businesses does the NZ Responsible AI Guidance apply to?
It applies to all commercial entities operating in New Zealand, including large enterprises, SMBs, AI developers, and AI deployers. There is no minimum size threshold, though compliance effort will naturally scale with the complexity and risk profile of AI systems in use.
What are the penalties for non-compliance with the NZ Responsible AI Guidance?
There are currently no direct financial penalties, as the guidance is voluntary. However, non-compliance could attract regulatory scrutiny, reputational risk, or adverse treatment under future binding AI regulations that may be modelled on this framework.
How does the NZ Responsible AI Guidance align with the OECD AI Principles?
The guidance is explicitly designed to align with the OECD AI Principles, covering comparable themes including transparency, accountability, human oversight, and risk management. Businesses already compliant with OECD-aligned frameworks should find significant overlap with the NZ requirements.
What documentation do businesses need to maintain under the NZ Responsible AI Guidance?
Businesses should maintain records of AI governance structures, pre-deployment risk assessments, transparency disclosures, human oversight procedures, ongoing monitoring results, and stakeholder engagement outcomes. These records serve as evidence of due diligence if regulatory scrutiny arises.
Does the NZ Responsible AI Guidance impose specific requirements for high-risk AI use cases like hiring or credit decisions?
Yes. The guidance specifically flags high-impact use cases such as credit decisions and hiring tools as priority areas for pre-deployment risk assessments. Human oversight mechanisms and transparency disclosures are also emphasised for any consequential automated decisions affecting individuals.