G7 Hiroshima AI Code of Conduct
Issued by
G7 Hiroshima AI Process (endorsed by G7 Leaders; administered through the G7 Digital and Technology Track)
A voluntary international code of conduct issued under the G7 Hiroshima AI Process, setting out eleven guiding principles and corresponding actions for developers and operators of advanced AI systems, with a particular focus on frontier and general-purpose AI models. The code operates as a soft-law commitment mechanism intended to shape responsible AI development globally.
Overview
The G7 Hiroshima AI Code of Conduct was adopted on October 30, 2023 as part of the broader Hiroshima AI Process (HAIP), launched by G7 Leaders at the Hiroshima Summit in May 2023 in response to the rapid proliferation of large-scale generative AI and frontier model capabilities. The process tasked G7 Digital Ministers with developing practical governance instruments for advanced AI on an accelerated timeline. The Code of Conduct was the principal output of that mandate and was accompanied by the G7 Hiroshima Process International Guiding Principles for Advanced AI Systems.

The code is explicitly voluntary and carries no legal force in any jurisdiction. It nevertheless has significant soft-law weight: the G7 members (Canada, France, Germany, Italy, Japan, the United Kingdom, and the United States), together with the European Union, collectively encouraged AI developers and deployers to adhere to its provisions, and several jurisdictions subsequently incorporated its principles into national AI policy frameworks and procurement standards.

The eleven principles cover the full AI development and deployment lifecycle, addressing risk identification, incident reporting, cybersecurity, content provenance, transparency toward end users, responsible information sharing, privacy, bias mitigation, and support for global AI safety research. A key structural feature is the code's emphasis on frontier and general-purpose AI systems, reflecting concern about the distinctive systemic risks posed by large foundation models.

The Hiroshima AI Process continued into 2024: G7 members published a stocktaking report in June 2024 assessing voluntary uptake and considering alignment with the network of AI Safety Institutes established following the UK AI Safety Summit (Bletchley Park, November 2023). Enterprises that publicly commit to the code, as several major AI developers have done, should treat adherence as a reputational and contractual matter even in the absence of legal enforceability.
Key Requirements
- Principle 1 — Risk identification and mitigation: Take appropriate measures to identify, evaluate, and mitigate risks across the AI lifecycle, including before and during deployment.
- Principle 2 — Incident reporting: Report relevant AI safety incidents to appropriate authorities and share information with other developers and the public as appropriate.
- Principle 3 — Cybersecurity: Invest in and implement robust cybersecurity controls and physical security measures to protect AI model weights and systems.
- Principle 4 — Trust of users: Build systems that enable users to understand AI-generated content, including through technical means such as watermarking or content provenance mechanisms.
- Principle 5 — Risk-based development: Adopt risk-based approaches to AI development, including internal and external red-teaming and adversarial testing prior to deployment.
- Principle 6 — Transparency reporting: Publish transparency reports on a regular basis covering AI capabilities, limitations, safety evaluations, and risk mitigation approaches.
- Principle 7 — Responsible information sharing: Prioritise responsible disclosure of discovered AI-related vulnerabilities with governments, civil society, and academic communities.
- Principle 8 — Responsible AI ecosystem: Develop technical standards, best practices, and certification mechanisms to support responsible AI across the supply chain.
- Principle 9 — Privacy and data governance: Prioritise privacy protection throughout AI development, including by implementing appropriate data governance frameworks.
- Principle 10 — Fairness and bias: Work to advance AI systems that do not perpetuate or amplify harmful biases, and take steps to ensure outcomes are fair and non-discriminatory.
- Principle 11 — Global AI safety research: Support international collaboration on AI safety research and work to develop common technical standards and evaluation methodologies.
What Your Organization Must Do
- Conduct a gap analysis against all eleven principles within 60 days, assigning a named owner (Chief AI Officer or equivalent) to each principle, and document findings in a board-level risk register.
- Establish a pre-deployment evaluation protocol for frontier and general-purpose AI systems that mandates internal and external red-teaming, adversarial testing, and a documented risk acceptance decision before any production release.
- Implement an AI incident reporting procedure that specifies escalation thresholds, identifies the relevant national competent authorities in each G7 jurisdiction where you operate, and sets a target notification window aligned with the most stringent applicable national framework.
- Publish an annual AI transparency report covering model capabilities, known limitations, safety evaluation results, and risk mitigation measures; schedule the first report within 12 months of committing to the code, and assign the General Counsel to review for accuracy and consistency with other regulatory disclosures.
- Integrate the code's principles into third-party vendor due diligence and AI procurement questionnaires, requiring suppliers of foundation models or AI infrastructure to attest to alignment with the code or to disclose material gaps as a condition of contract renewal.
- Activate a content provenance and watermarking workstream, tasking the Chief Information Security Officer with assessing available technical standards (such as C2PA) and delivering an implementation roadmap within 90 days to meet Principle 4 obligations on user trust.
Frequently Asked Questions
- Is the G7 Hiroshima AI Code of Conduct legally binding?
- No. The code is explicitly voluntary and carries no legal force in any jurisdiction. However, G7 governments have encouraged adherence, and several national AI policy frameworks and government procurement standards have begun incorporating its principles, giving it meaningful soft-law weight for companies operating in G7 markets.
- Which companies does the Hiroshima AI Code of Conduct apply to?
- The code targets developers and operators of frontier and general-purpose AI systems, including large AI laboratories, major cloud and platform providers, and enterprise technology vendors deploying foundation models at scale. Enterprises in financial services, healthcare, and defence that publicly commit to the code should treat adherence as part of their ESG and vendor due diligence obligations.
- What are the consequences of non-compliance with the Hiroshima AI Code of Conduct?
- There are no direct fines or regulatory penalties because the code is voluntary. The primary risks are reputational: companies that publicly commit to the code but fail to adhere face scrutiny from G7 governments, civil society, and counterparties. Non-alignment may also become a disqualifying factor in government AI procurement in G7 member states.
- How does the Hiroshima AI Code of Conduct relate to the EU AI Act?
- The two instruments are complementary but distinct. The EU AI Act is binding law with enforcement mechanisms; the Hiroshima code is a voluntary international framework. Several EU AI Act obligations, including risk assessments, transparency reporting, and incident reporting for general-purpose AI models, overlap substantially with the code's eleven principles, so joint compliance programs are feasible.
- What does the Hiroshima AI Code of Conduct require on incident reporting?
- Principle 2 requires organisations to report relevant AI safety incidents to appropriate authorities and share information with other developers and the public as appropriate. Companies operating across multiple G7 jurisdictions should map this obligation to the most stringent national notification window in each market and establish a documented escalation procedure.
- Does the Hiroshima AI Code of Conduct address watermarking and AI-generated content disclosure?
- Yes. Principle 4 requires developers to build systems that help users identify AI-generated content, including through technical mechanisms such as watermarking and content provenance tools. The G7 process specifically references frameworks like C2PA as candidate standards, and compliance officers should treat implementation of a provenance roadmap as a near-term deliverable for any public commitment to the code.
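The incident-reporting guidance above (align the target notification window with the most stringent applicable national framework) reduces to taking the shortest window across the jurisdictions where the organisation operates. A minimal sketch, with the caveat that the jurisdiction codes and window durations below are hypothetical placeholders, not statements of any country's actual notification rules:

```python
from datetime import timedelta

# Hypothetical per-jurisdiction notification windows; real values must
# come from counsel's mapping of each national framework.
NOTIFICATION_WINDOWS = {
    "EU": timedelta(hours=72),   # placeholder
    "UK": timedelta(hours=72),   # placeholder
    "JP": timedelta(hours=96),   # placeholder
}

def applicable_window(operating_in: set[str]) -> timedelta:
    """Return the most stringent (shortest) notification window among
    the mapped jurisdictions where the organisation operates."""
    windows = [NOTIFICATION_WINDOWS[j] for j in operating_in
               if j in NOTIFICATION_WINDOWS]
    if not windows:
        raise ValueError("no mapped jurisdictions; extend the mapping first")
    return min(windows)
```

Raising on an unmapped jurisdiction set is deliberate: defaulting to a lenient window when the mapping is incomplete would quietly undercut the "most stringent framework" rule.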
