
G7 Hiroshima AI Code of Conduct

Hiroshima AI CoC · G7 Hiroshima AI Process (endorsed by G7 Leaders; administered through the G7 Digital and Technology Track)

A voluntary international code of conduct issued under the G7 Hiroshima AI Process, setting out eleven guiding principles and corresponding actions for developers and operators of advanced AI systems, with a particular focus on frontier and general-purpose AI models. The code operates as a soft-law commitment mechanism intended to shape responsible AI development globally.

Overview

The G7 Hiroshima AI Code of Conduct was adopted on October 30, 2023, as part of the broader Hiroshima AI Process (HAIP), launched by G7 Leaders at the Hiroshima Summit in May 2023 in response to the rapid proliferation of large-scale generative AI and frontier model capabilities. The process tasked G7 Digital Ministers with developing practical governance instruments for advanced AI on an accelerated timeline. The Code of Conduct was the principal output of that mandate and was accompanied by the G7 Hiroshima Process International Guiding Principles for Advanced AI Systems.

The code is explicitly voluntary and carries no legal force in any jurisdiction. It nonetheless has significant soft-law weight: the G7 members (Canada, France, Germany, Italy, Japan, the United Kingdom, and the United States), together with the European Union, collectively encouraged AI developers and deployers to adhere to its provisions, and several jurisdictions have since incorporated its principles into national AI policy frameworks and procurement standards.

The eleven principles cover the full AI development and deployment lifecycle, addressing risk identification, incident reporting, cybersecurity, content provenance, transparency toward end users, responsible information sharing, privacy, bias mitigation, and support for global AI safety research. A key structural feature is the code's emphasis on frontier and general-purpose AI systems, reflecting concern about the distinctive systemic risks posed by large foundation models.

The Hiroshima AI Process continued into 2024, with G7 members publishing a stocktaking report in June 2024 that assessed voluntary uptake and considered alignment with the network of AI Safety Institutes established following the UK AI Safety Summit (Bletchley Park, November 2023). Enterprises that publicly commit to the code, as several major AI developers have done, should treat adherence as a reputational and contractual matter even in the absence of legal enforceability.

Key Requirements

  • Principle 1 — Risk identification and mitigation: Take appropriate measures to identify, evaluate, and mitigate risks across the AI lifecycle, including before and during deployment.
  • Principle 2 — Incident reporting: Report relevant AI safety incidents to the appropriate authorities and, where appropriate, share information with other developers and the public.
  • Principle 3 — Cybersecurity: Invest in and implement robust cybersecurity controls and physical security measures to protect AI model weights and systems.
  • Principle 4 — Trust of users: Build systems that enable users to identify AI-generated content, including through technical means such as watermarking or content provenance mechanisms (a minimal provenance sketch follows this list).
  • Principle 5 — Risk-based development: Adopt risk-based approaches to AI development, including internal and external red-teaming and adversarial testing prior to deployment.
  • Principle 6 — Transparency reporting: Publish transparency reports on a regular basis covering AI capabilities, limitations, safety evaluations, and risk mitigation approaches.
  • Principle 7 — Responsible information sharing: Prioritize responsible disclosure of discovered AI-related vulnerabilities to governments, civil society, and academic communities.
  • Principle 8 — Responsible AI ecosystem: Develop technical standards, best practices, and certification mechanisms to support responsible AI across the supply chain.
  • Principle 9 — Privacy and data governance: Prioritize privacy protection throughout AI development, including by implementing appropriate data governance frameworks.
  • Principle 10 — Fairness and bias: Work to advance AI systems that do not perpetuate or amplify harmful biases, and take steps to ensure outcomes are fair and non-discriminatory.
  • Principle 11 — Global AI safety research: Support international collaboration on AI safety research and work to develop common technical standards and evaluation methodologies.
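The code is technology-neutral and does not mandate any particular provenance mechanism for Principle 4; production systems typically build on open standards such as C2PA, with certificate-backed signatures and manifests that survive edits. Purely as an illustration of the kind of mechanism the principle contemplates, the following Python sketch binds a content hash, a generator identifier, and a timestamp into an HMAC-signed manifest. The signing key, field names, and generator label are all hypothetical, not part of the code of conduct.

    import hashlib
    import hmac
    import json
    from datetime import datetime, timezone

    # Hypothetical signing key; a real deployment would use managed keys
    # and public-key signatures rather than a shared secret.
    SIGNING_KEY = b"replace-with-a-managed-secret"

    def build_manifest(content: bytes, generator: str) -> dict:
        """Bind a content hash, generator identity, and timestamp together."""
        manifest = {
            "content_sha256": hashlib.sha256(content).hexdigest(),
            "generator": generator,  # e.g. a model or service identifier
            "generated_at": datetime.now(timezone.utc).isoformat(),
        }
        payload = json.dumps(manifest, sort_keys=True).encode()
        manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return manifest

    def verify_manifest(content: bytes, manifest: dict) -> bool:
        """Recompute the signature and content hash; both must match."""
        claims = {k: v for k, v in manifest.items() if k != "signature"}
        payload = json.dumps(claims, sort_keys=True).encode()
        expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return (
            hmac.compare_digest(manifest.get("signature", ""), expected)
            and claims["content_sha256"] == hashlib.sha256(content).hexdigest()
        )

    # Example: stamp a piece of generated output, then verify it.
    output = b"example AI-generated content"
    manifest = build_manifest(output, generator="example-model-v1")
    assert verify_manifest(output, manifest)

The core idea, cryptographically binding provenance claims to a hash of the content so that tampering with either is detectable, is the same one that standards-based approaches implement with stronger key management.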

Effective Date

2023-10-30
