Global AI Regulation & Framework Directory

Korea AI Basic Act

Korea AI Basic Act · National Assembly of the Republic of Korea; Ministry of Science and ICT (MSIT)

South Korea's foundational AI governance statute establishing risk-based obligations for AI developers and deployers, with heightened requirements for high-impact AI systems and a national AI safety infrastructure.

Overview

The Act on the Development of Artificial Intelligence and Establishment of Trust Basis (commonly referred to as the Korea AI Basic Act) was promulgated on January 22, 2025, following passage by the National Assembly in December 2024, and enters into force one year after promulgation. The Act establishes South Korea's first comprehensive legislative framework for artificial intelligence, creating a tiered regulatory structure anchored in risk classification. It tasks the Ministry of Science and ICT (MSIT) with primary oversight authority and mandates the establishment of a national AI Commission and AI Safety Research Institute.

The Act distinguishes between general AI systems and 'high-impact AI', defined as AI used in domains such as employment, education, finance, healthcare, criminal justice, and critical infrastructure, and subjects the latter to heightened transparency, conformity assessment, and human oversight obligations. It also addresses generative AI, requiring watermarking or disclosure where AI-generated content could cause public confusion.

Developers and deployers of high-impact AI must conduct risk management activities, maintain documentation, and notify affected parties of significant AI-driven decisions. The Act further provides for government support of AI innovation, international cooperation, and the promotion of trustworthy AI standards.

Key Requirements

  • Classification of AI systems as general or high-impact based on use-case domain and potential for harm
  • Mandatory risk management frameworks for high-impact AI, including risk assessment, mitigation measures, and ongoing monitoring
  • Transparency obligations: users must be informed when interacting with high-impact AI systems
  • Human oversight requirements for high-impact AI decisions affecting individuals
  • AI-generated content disclosure or watermarking to prevent public deception
  • Documentation and record-keeping obligations for high-impact AI developers and deployers
  • Notification to affected individuals of consequential AI-driven decisions
  • Establishment of and cooperation with the national AI Commission and AI Safety Research Institute
  • Conformity assessment procedures for high-impact AI prior to deployment
  • Penalty provisions for non-compliance, including fines
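The domain-based tiering described above can be pictured as a simple lookup. The sketch below is purely illustrative and is not a legal test: the domain labels are taken from the summary in this entry, and the function name and string-matching approach are assumptions for illustration only (the statute's actual high-impact definition turns on detailed criteria and subordinate decrees).

```python
# Hypothetical illustration of the Act's tiered structure.
# Domain names follow the summary above; this is NOT a legal determination.

# Domains this entry lists as triggering high-impact obligations.
HIGH_IMPACT_DOMAINS = {
    "employment", "education", "finance",
    "healthcare", "criminal justice", "critical infrastructure",
}

def classify_ai_system(domain: str) -> str:
    """Return the risk tier for an AI system used in the given domain."""
    if domain.strip().lower() in HIGH_IMPACT_DOMAINS:
        return "high-impact"
    return "general"

if __name__ == "__main__":
    print(classify_ai_system("Employment"))  # high-impact
    print(classify_ai_system("gaming"))      # general
```

In practice, a high-impact classification is what triggers the conformity assessment, human oversight, documentation, and notification duties listed above; general-tier systems face lighter obligations.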

Who It Affects

  • Developers of AI systems operating in or targeting the Korean market
  • Domestic and foreign enterprises deploying high-impact AI systems in Korea
  • Organizations in regulated sectors, including healthcare, finance, employment, education, and critical infrastructure
  • Providers of generative AI services distributed in Korea
  • Multinational enterprises with Korean subsidiaries or Korean-market products

Effective Date

2026-01-22
