Interim Measures for the Management of Generative Artificial Intelligence Services
Issued by
Cyberspace Administration of China (CAC), in conjunction with the National Development and Reform Commission (NDRC), Ministry of Education, Ministry of Science and Technology (MOST), Ministry of Industry and Information Technology (MIIT), Ministry of Public Security, and National Radio and Television Administration (NRTA)
China's Interim Measures for the Management of Generative AI Services impose mandatory obligations on providers of generative AI products and services to the Chinese public, covering training data governance, content safety, algorithmic transparency, and registration requirements.
Overview
The Interim Measures for the Management of Generative Artificial Intelligence Services (生成式人工智能服务管理暂行办法) were jointly issued by seven Chinese government authorities and came into force on 15 August 2023. They represent China's first comprehensive regulatory framework specifically targeting generative AI, following earlier sector-specific measures on algorithmic recommendations (2022) and deep synthesis technology (2022), which together form a layered AI regulatory architecture.

The Measures apply to organisations and individuals that provide generative AI services to the public within China, including text, image, audio, video, and code generation. Providers must ensure that training data is lawfully obtained, that generated content does not violate Chinese law or undermine socialist core values, and that effective mechanisms are in place to prevent the generation of prohibited content, including content that subverts state power, endangers national security, spreads disinformation, infringes intellectual property, or discriminates against individuals.

Providers must complete security assessments and register their algorithms with the CAC before offering services to the public, consistent with obligations under China's Algorithm Recommendation Regulations. Real-name user identity verification is mandatory, and providers must retain logs of user inputs and generated outputs for a minimum period. AI-generated content must be labelled so that users and regulators can identify it as machine-generated, and providers must establish complaint and reporting mechanisms and cooperate with regulatory investigations.

Finally, the Measures apply a principle of proportionality: providers offering services only within enterprise environments or closed networks are subject to a reduced compliance burden.
The CAC has signalled that the Interim Measures will be revised and potentially elevated to a permanent administrative regulation as experience accumulates.
Key Requirements
- Conduct and submit a security assessment to the CAC prior to public launch of generative AI services, consistent with the Measures for Security Assessment of Internet Information Services with Public Opinion Attributes or Social Mobilisation Capabilities.
- Register algorithms used in generative AI services through the CAC's algorithm filing and registration system.
- Ensure all training data is lawfully obtained and does not infringe intellectual property rights; document data sourcing and processing methods.
- Implement content moderation mechanisms to prevent generation of content prohibited under Chinese law, including content endangering national security, spreading disinformation, or violating personal privacy.
- Enforce real-name registration for all users accessing generative AI services.
- Retain logs of user inputs and AI-generated outputs for a minimum of six months.
- Label AI-generated content in a manner that enables users and regulators to identify it as machine-generated.
- Establish accessible user complaint and reporting channels; respond to user reports within required timeframes.
- Provide users with clear service agreements disclosing the nature and limitations of the generative AI service.
- Cooperate fully with CAC and co-issuing authority inspections, data requests, and investigations.
- Protect the lawful rights of users, including data subject rights under the Personal Information Protection Law (PIPL).
What Your Organization Must Do
- Complete and submit the CAC security assessment for each generative AI service before any public launch in China; assign the Chief Compliance Officer as the accountable owner and build at least 60 to 90 days into project timelines to accommodate review.
- Register all algorithms powering generative AI services through the CAC algorithm filing system prior to go-live, and maintain a registry of filed algorithms internally so that updates or new model versions trigger a re-filing review.
- Audit all training data sources immediately and document lawful acquisition, licensing status, and IP clearance for each dataset; establish an ongoing data governance procedure requiring legal sign-off before any new training data is ingested.
- Deploy real-name user verification at onboarding and configure logging infrastructure to capture and retain user inputs alongside AI-generated outputs for a minimum of six months, with access controls limiting log retrieval to authorised personnel and regulatory requests.
- Implement a content labelling mechanism that tags all AI-generated outputs as machine-generated in a format identifiable to both users and regulators, and integrate this into the product build rather than as a post-launch patch.
- Stand up a formal user complaint and reporting channel with defined response timeframes, designate a named point of contact for CAC and co-issuing authority inspections, and conduct at least one tabletop exercise annually to verify readiness for regulatory cooperation requests.
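The labelling step above can be sketched as a wrapper that attaches an explicit machine-generated flag to every output at generation time. The field names are illustrative assumptions: the Measures require that AI-generated content be identifiable to users and regulators, but do not prescribe this (or any) schema:

```python
from datetime import datetime, timezone

def label_output(text: str, model_id: str) -> dict:
    """Attach an explicit machine-generated label to a generated output.

    Field names here are hypothetical; the regulatory requirement is
    identifiability, not this exact structure.
    """
    return {
        "content": text,
        "ai_generated": True,   # explicit machine-generated flag
        "model_id": model_id,   # which filed algorithm produced the output
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
```

Building the label into the record at generation time, rather than patching it on later, keeps it attached as the content moves through downstream systems, which is why the checklist recommends integrating it into the product build.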
Frequently Asked Questions
- Do foreign companies offering AI services accessible in China need to comply with the China Generative AI Regulations?
- Yes. The Measures apply to any organisation or individual providing generative AI services to users within China, regardless of where the provider is headquartered. Foreign companies whose products are accessible to Chinese users are subject to the same security assessment, algorithm registration, and content moderation obligations as domestic providers.
- What is the security assessment requirement before launching a generative AI service in China?
- Providers must complete a CAC security assessment and submit it for review before any public launch. This assessment covers potential public opinion influence and social mobilisation risks. Compliance teams should build 60 to 90 days into project timelines to accommodate the review process, as services cannot go live until the assessment is cleared.
- How long must providers retain logs of user inputs and AI-generated outputs under the China Generative AI Regulations?
- Providers must retain logs of user inputs and corresponding AI-generated outputs for a minimum of six months. Access to these logs must be controlled and made available to the CAC and co-issuing authorities upon request during inspections or investigations.
- What are the algorithm registration obligations under China's Generative AI Measures?
- All algorithms powering generative AI services must be filed through the CAC's algorithm registration system before public launch. Providers should maintain an internal registry of filed algorithms so that updates or new model versions trigger a review of whether re-filing is required.
- How do the China Generative AI Regulations relate to the earlier Algorithm Recommendation Regulations and Deep Synthesis Measures?
- The Generative AI Measures build on and sit alongside the 2022 Algorithm Recommendation Regulations and the 2022 Deep Synthesis Measures, forming a layered regulatory architecture. Providers subject to the Generative AI Measures may also have overlapping obligations under the earlier instruments depending on the features of their service.
- Are enterprise or closed-network generative AI deployments subject to the same compliance burden as public-facing services?
- No. The Measures include a proportionality principle that reduces the compliance burden for providers offering generative AI services exclusively within enterprise environments or closed networks. However, any service that extends to public-facing users triggers the full suite of obligations including security assessment, real-name registration, and content moderation requirements.
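The internal algorithm registry described in the FAQ above can be reduced to a simple lookup against the recorded filing state, so that any version change automatically raises the re-filing question. The registry contents and names below are hypothetical; whether a given change actually requires a fresh CAC filing is a legal judgment this check only surfaces, not answers:

```python
# Hypothetical internal registry: each filed algorithm mapped to the
# model version covered by its CAC filing.
FILED = {
    "text-gen": "1.2.0",
    "image-gen": "0.9.1",
}

def needs_refiling_review(algorithm: str, new_version: str) -> bool:
    """Flag any version change, or any unfiled algorithm, for compliance review.

    Returns True when the deployed version no longer matches the filed one,
    guaranteeing the re-filing question gets asked before go-live.
    """
    return FILED.get(algorithm) != new_version
```

Wiring this check into the release pipeline ensures a model update cannot reach production without the compliance team confirming whether the existing filing still covers it.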
