AI Governance Institute

AI governance intelligence, tracked daily

How We Track AI Regulation

Our methodology for monitoring, verifying, and publishing AI governance information — from primary sources to published entries.

Daily monitoring

Our automated pipeline runs every day and scans four distinct areas of the AI governance landscape:

  • Regulatory developments — binding and quasi-binding instruments from the EU, US, UK, international standards bodies (ISO, OECD, UNESCO), and Asia-Pacific regulators (China, Japan, Singapore, South Korea, Australia).
  • Corporate AI governance — safety commitments, red-teaming disclosures, responsible AI reports, incident reports, and policy changes from major technology companies and enterprises deploying AI.
  • Model releases — new foundation model releases and their compliance implications: acceptable use policies, safety evaluations, deployment restrictions, and known limitations.
  • Expert commentary — practitioner-focused analysis from recognized researchers, ethicists, and compliance professionals at leading think tanks and policy institutes.
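As a rough illustration, the four monitored areas could be represented as a configuration map driving the daily scan. This is a hypothetical sketch only: the keys, topic lists, and the `daily_scan` helper are our own naming for illustration, not the production pipeline.

```python
# Illustrative sketch of the daily monitoring configuration described above.
# Area names and topic lists are assumptions, not the actual system config.

MONITORED_AREAS = {
    "regulatory": ["EU", "US", "UK", "ISO", "OECD", "UNESCO",
                   "China", "Japan", "Singapore", "South Korea", "Australia"],
    "corporate": ["safety commitments", "red-teaming disclosures",
                  "responsible AI reports", "incident reports", "policy changes"],
    "model_releases": ["acceptable use policies", "safety evaluations",
                       "deployment restrictions", "known limitations"],
    "commentary": ["researchers", "ethicists", "compliance professionals"],
}

def daily_scan(fetch):
    """Run one monitoring pass: fetch raw findings for every area."""
    return {area: fetch(area, topics) for area, topics in MONITORED_AREAS.items()}
```

In this shape, adding a new jurisdiction or disclosure type is a configuration change rather than a code change, which is one plausible way to keep a daily scan maintainable.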

Source criteria

We publish directory entries only when they derive from authoritative primary sources:

  • Official government publications and regulatory body websites
  • Standards organization outputs (ISO, NIST, OECD, UNESCO, ITU)
  • Direct company disclosures: press releases, official blog posts, regulatory filings
  • Court decisions and enforcement actions from competent authorities

We do not publish directory entries based on secondary reporting alone. Every entry links directly to its primary source document wherever one is publicly available.

Classification and review

Each finding is automatically classified into one of two destinations:

  • Directory entries — binding regulations, voluntary frameworks, guidelines, and pending legislation. These require human editorial review before publication. Our editors verify the source, confirm the summary accurately reflects the primary document, and validate that compliance implications are correctly characterized.
  • News items — enforcement actions, research, corporate policy changes, and model releases. These are published with a lighter-touch review focused on factual accuracy and source attribution.

Items that are uncertain, highly significant, or from unfamiliar sources are always flagged for human review, regardless of type.
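The routing rules above can be sketched as a small function. Everything in this sketch is an illustrative assumption, not the actual classifier: the `Finding` fields, the type labels, and the review-level names are ours.

```python
from dataclasses import dataclass

# Hypothetical sketch of the two-destination routing described above.
# Type labels, field names, and review levels are illustrative assumptions.

DIRECTORY_TYPES = {"binding regulation", "voluntary framework",
                   "guideline", "pending legislation"}

@dataclass
class Finding:
    item_type: str
    uncertain: bool = False
    highly_significant: bool = False
    unfamiliar_source: bool = False

def route(finding: Finding) -> tuple[str, str]:
    """Return (destination, review_level) for a classified finding."""
    destination = "directory" if finding.item_type in DIRECTORY_TYPES else "news"
    # Directory entries always get full editorial review; other items get a
    # lighter-touch review unless a risk flag escalates them.
    flagged = (finding.uncertain
               or finding.highly_significant
               or finding.unfamiliar_source)
    review = "full editorial" if destination == "directory" or flagged else "light"
    return destination, review
```

For example, under these assumptions a routine research item would route to news with light review, while the same item from an unfamiliar source would be escalated to full editorial review.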

Update policy

Directory entries are updated when material changes occur:

  • Amendments or revisions to the underlying instrument
  • New or revised effective dates and compliance deadlines
  • Significant enforcement guidance or regulatory clarification
  • Changes in status (e.g., from proposed to enacted, or to superseded)

The “verified” date shown on each entry reflects when our editors last reviewed and confirmed the entry. Minor administrative changes that do not affect compliance obligations are not published.

Weekly recap

Every Friday, we publish a curated summary of the week’s most significant AI governance developments, with trend analysis and actionable guidance for enterprise compliance teams. The recap draws on all items published during the week and is reviewed by the editorial team before publication.

Corrections and feedback

We correct errors promptly. If you find inaccurate information, an outdated entry, a missing regulation, or a broken source link, please contact us at corrections@aigovernance.com. Material corrections are noted on the affected entry. You can also contact us to suggest a regulation or framework we should be tracking.