Korea AI Basic Act
Issued by
National Assembly of the Republic of Korea; Ministry of Science and ICT (MSIT)
South Korea's foundational AI governance statute establishing risk-based obligations for AI developers and deployers, with heightened requirements for high-impact AI systems and a national AI safety infrastructure.
Applies To
Developers and deployers of AI systems operating in, or targeting, the Korean market, including providers of high-impact and generative AI, regardless of where the organization is incorporated.
Overview
The Act on the Development of Artificial Intelligence and Establishment of Trust Basis (commonly referred to as the Korea AI Basic Act) was promulgated on January 22, 2025, following passage by the National Assembly in December 2024, and enters into force on January 22, 2026, one year after promulgation. The Act establishes South Korea's first comprehensive legislative framework for artificial intelligence, creating a tiered regulatory structure anchored in risk classification. It assigns primary oversight authority to the Ministry of Science and ICT (MSIT) and mandates the establishment of a national AI Commission and an AI Safety Research Institute.

The Act distinguishes general AI systems from 'high-impact AI', defined by use in domains such as employment, education, finance, healthcare, criminal justice, and critical infrastructure, and subjects the latter to heightened transparency, conformity assessment, and human oversight obligations. It also addresses generative AI, requiring watermarking or disclosure where AI-generated content could cause public confusion. Developers and deployers of high-impact AI must conduct risk management activities, maintain documentation, and notify affected parties of significant AI-driven decisions. The Act further provides for government support of AI innovation, international cooperation, and the promotion of trustworthy AI standards.
Key Requirements
- Classification of AI systems as general or high-impact based on use-case domain and potential for harm
- Mandatory risk management frameworks for high-impact AI, including risk assessment, mitigation measures, and ongoing monitoring
- Transparency obligations: users must be informed when interacting with high-impact AI systems
- Human oversight requirements for high-impact AI decisions affecting individuals
- AI-generated content disclosure or watermarking to prevent public deception
- Documentation and record-keeping obligations for high-impact AI developers and deployers
- Notification to affected individuals of consequential AI-driven decisions
- Establishment of and cooperation with the national AI Commission and AI Safety Research Institute
- Conformity assessment procedures for high-impact AI prior to deployment
- Penalty provisions for non-compliance, including fines
What Your Organization Must Do
- Audit your AI product and service portfolio against the Act's high-impact domain list (employment, education, finance, healthcare, criminal justice, critical infrastructure) by Q3 2025 to classify each system as general or high-impact before the January 22, 2026 effective date (a minimal classification sketch follows this list).
- Assign a designated compliance lead within the legal or technology function to own high-impact AI obligations, coordinate with MSIT guidance as it is issued, and liaise with the national AI Commission once established.
- Build and document a risk management framework for each high-impact AI system covering risk assessment methodology, mitigation controls, and ongoing monitoring cadence; retain records sufficient to demonstrate conformity assessment readiness prior to deployment.
- Implement user-facing transparency notices for all high-impact AI touchpoints and human oversight procedures for consequential individual decisions, ensuring both are in place before go-live on January 22, 2026.
- Deploy watermarking or clear disclosure mechanisms on generative AI outputs distributed in Korea where those outputs could cause public confusion, and verify the technical feasibility of chosen methods with your engineering team by Q4 2025.
- Map extraterritorial exposure for multinational subsidiaries and Korean-market product lines, confirm whether foreign-operated systems trigger local obligations, and update vendor and partner contracts to allocate compliance responsibilities between developers and deployers.
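The portfolio audit and conformity gap check can be prototyped as a simple internal tool. The sketch below, in Python, shows one possible inventory record that flags systems operating in the Act's high-impact domains and lists missing conformity documentation. The class, field, and domain names are illustrative assumptions for an internal tool, not terms defined by the statute or MSIT decrees.

```python
from dataclasses import dataclass, field
from datetime import date

# High-impact domains named in the Act (illustrative labels; the statute and
# implementing decrees define the authoritative scope).
HIGH_IMPACT_DOMAINS = {
    "employment", "education", "finance", "healthcare",
    "criminal_justice", "critical_infrastructure",
}

@dataclass
class AISystemRecord:
    """One row of an internal AI inventory used for the portfolio audit."""
    name: str
    domain: str                      # business domain the system operates in
    affects_individuals: bool        # produces consequential individual decisions
    risk_assessment_date: date | None = None
    mitigations: list[str] = field(default_factory=list)

    def classification(self) -> str:
        # Classification is driven by use-case domain, not model architecture.
        return "high-impact" if self.domain in HIGH_IMPACT_DOMAINS else "general"

    def conformity_gaps(self) -> list[str]:
        """Flag missing documentation before a high-impact system goes live."""
        gaps = []
        if self.classification() == "high-impact":
            if self.risk_assessment_date is None:
                gaps.append("risk assessment not yet performed")
            if not self.mitigations:
                gaps.append("no mitigation controls documented")
            if self.affects_individuals:
                gaps.append("confirm human-oversight and notification procedures")
        return gaps

# Example audit pass over a small portfolio.
portfolio = [
    AISystemRecord("resume-screener", "employment", affects_individuals=True),
    AISystemRecord("marketing-copy-bot", "marketing", affects_individuals=False),
]
for system in portfolio:
    print(system.name, system.classification(), system.conformity_gaps())
```

Keeping classification logic in one place like this makes it easier to re-run the audit as MSIT guidance refines the high-impact domain boundaries.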
Playbook Guidance
Step-by-step implementation guidance for compliance teams.
Frequently Asked Questions
- When does the Korea AI Basic Act take effect and what is the compliance deadline?
- The Act was promulgated on January 22, 2025 and enters into force on January 22, 2026. Organizations deploying AI systems in Korea must have compliance measures in place by that date, leaving roughly 12 months to classify systems, build risk management frameworks, and implement transparency controls.
- Which AI systems qualify as high-impact AI under the Korea AI Basic Act?
- High-impact AI is defined by use-case domain rather than technical architecture. Systems used in employment, education, finance, healthcare, criminal justice, and critical infrastructure fall within the high-impact category and trigger heightened obligations including conformity assessment, human oversight, and individual notification requirements.
- Does the Korea AI Basic Act apply to foreign companies that do not have a Korean subsidiary?
- Yes. The Act applies to developers and deployers of AI systems operating in or targeting the Korean market, regardless of where the company is incorporated. Multinationals with Korean-market products or services should assess extraterritorial exposure and allocate compliance responsibilities between developers and deployers in their contracts.
- What conformity assessment process is required before deploying high-impact AI in Korea?
- The Act mandates conformity assessment procedures for high-impact AI prior to deployment, but detailed procedural rules will be issued through MSIT guidance. Organizations should build documentation and risk management records sufficient to demonstrate conformity readiness and monitor regulatory guidance as it is published ahead of the January 2026 effective date.
- How does the Korea AI Basic Act treat generative AI compared to the EU AI Act?
- Both regimes impose disclosure obligations on generative AI outputs, but the Korea AI Basic Act focuses specifically on watermarking or disclosure where AI-generated content could cause public confusion, rather than the EU AI Act's broader transparency and copyright-related disclosure requirements. Korean compliance programs should verify that watermarking or disclosure mechanisms are technically feasible and deployed before go-live; a minimal disclosure sketch follows this FAQ list.
- What are the penalties for non-compliance with the Korea AI Basic Act?
- The Act includes penalty provisions for non-compliance, including fines, but specific penalty amounts and enforcement procedures will be detailed in implementing regulations issued by MSIT. Compliance officers should track subsidiary legislation closely, as penalty thresholds and enforcement mechanisms are expected to be clarified before the January 22, 2026 effective date.
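For teams prototyping the disclosure obligation, the sketch below illustrates one way to attach a machine-readable label to an AI-generated PNG and a visible notice to generated text, assuming Python with the Pillow library. The metadata keys and notice wording are illustrative assumptions; the Act and forthcoming MSIT guidance, not this sketch, determine what counts as adequate watermarking or disclosure, and tamper-resistant provenance schemes (for example C2PA-style signing) would look different.

```python
from PIL import Image, PngImagePlugin

DISCLOSURE_TEXT = "This image was generated by an AI system."

def label_generated_image(src_path: str, dst_path: str) -> None:
    """Attach a machine-readable disclosure to an AI-generated PNG.

    Embeds text chunks in the PNG metadata; this is a disclosure mechanism,
    not a tamper-resistant watermark.
    """
    image = Image.open(src_path)
    meta = PngImagePlugin.PngInfo()
    meta.add_text("ai_generated", "true")            # illustrative key names
    meta.add_text("ai_disclosure", DISCLOSURE_TEXT)
    image.save(dst_path, pnginfo=meta)

def disclosure_banner(text: str) -> str:
    """Produce the user-facing notice displayed alongside generated text output."""
    return f"[AI-generated content] {text}"

# Usage: label an output file and wrap a generated caption with a visible notice.
# label_generated_image("output.png", "output_labeled.png")
print(disclosure_banner("Quarterly summary drafted by the assistant."))
```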
