Chinese government introduces new measures requiring platforms to label AI-generated content

02 April 2025, Zhigang Zhu and Paul Ranjard, first published by IAM


On 7 March 2025, the Cyberspace Administration of China, the Ministry of Industry and Information Technology, the Ministry of Public Security and the National Radio and Television Administration jointly issued the Measures for the Identification of AI-Generated Synthetic Content, which will take effect on 1 September 2025.


There seems to be no limit to what AI can do. In seconds, it can conduct research and provide answers to any question, summarise books, invent stories and produce images and music. The results are so convincing that it can be difficult to determine if something has been created by humans or AI.


This can present a danger to the public, since deepfakes could be produced that are essentially impossible to detect. The Chinese government is taking steps to address this issue.


Aim and requirements of the measures


The measures are issued in accordance with several laws and regulations:


the Cybersecurity Law;

the Regulations on the Management of Algorithmic Recommendations for Internet Information Services;

the Regulations on the Management of Deep Synthesis for Internet Information Services; and

the Interim Measures for the Administration of Generative Artificial Intelligence Services.


The objective is to inform the public when content – text, audio, video or graphical – has been generated or synthesised using AI technology.


This means that AI service providers (eg, DeepSeek or Midjourney) must insert an explicit warning notice or label that informs people about the AI-generated nature of the text, image or audio content. Depending on the content, the label may take a specific form, but in all cases, it must be prominently featured and visible at the beginning, middle or end of the content.


In addition, service providers must embed an “implicit label” in the form of a digital watermark in the file header, containing technical information such as the content’s source, along with the service provider’s code and a content identification number.
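The measures require this implicit metadata label but do not prescribe a concrete wire format. As a purely illustrative sketch, the metadata could be carried in an image file's own metadata structure; the example below inserts a JSON payload into a PNG as a standard tEXt chunk. The "AIGC" keyword and all field names (source, content_id) are assumptions for illustration, not anything specified by the measures.

```python
import json
import struct
import zlib

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def png_chunk(chunk_type: bytes, data: bytes) -> bytes:
    """Serialise one PNG chunk: 4-byte length, type, data, CRC-32 of type+data."""
    crc = zlib.crc32(chunk_type + data) & 0xFFFFFFFF
    return struct.pack(">I", len(data)) + chunk_type + data + struct.pack(">I", crc)

def add_implicit_label(png_bytes: bytes, metadata: dict) -> bytes:
    """Embed AI-generation metadata as a tEXt chunk directly after IHDR.

    The "AIGC" keyword and the JSON field names are illustrative only;
    the measures do not define a concrete format for the implicit label.
    """
    if png_bytes[:8] != PNG_SIGNATURE:
        raise ValueError("not a PNG file")
    payload = b"AIGC\x00" + json.dumps(metadata).encode("ascii")
    chunk = png_chunk(b"tEXt", payload)
    # IHDR is always first and fixed-size: 8-byte signature + 25-byte chunk.
    ihdr_end = 8 + 4 + 4 + 13 + 4
    return png_bytes[:ihdr_end] + chunk + png_bytes[ihdr_end:]

def make_tiny_png() -> bytes:
    """Build a minimal 1x1 greyscale PNG for demonstration purposes."""
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)  # 1x1, 8-bit greyscale
    idat = zlib.compress(b"\x00\x00")  # one filter byte + one pixel
    return (PNG_SIGNATURE + png_chunk(b"IHDR", ihdr)
            + png_chunk(b"IDAT", idat) + png_chunk(b"IEND", b""))
```

A platform could then verify the label by scanning the file's chunks for the keyword. In practice, providers would more likely rely on robust watermarking or standardised provenance metadata rather than a plain text chunk, which is trivially stripped.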


Providers that offer online content dissemination services (eg, TikTok or Weibo) must verify whether the content published on their platforms is generated by AI and, if so, clearly mark it with a prominent label.


Users who publish AI-generated content via these platforms must also proactively declare that the content was created using AI and use the relevant provider’s labelling features.


Further, removing, concealing, altering or forging labels or providing tools or services to facilitate such activities is prohibited.


Unresolved issues and potential implications


Some concerns remain. First, it is unclear whether content that has been modified manually after being generated by AI still requires labelling – and this scenario is common. AI is frequently used as a tool to quickly create a first draft, which the user then modifies, resulting in the final form that gets published. Therefore, the question is where the boundary lies between AI-generated text and human-generated text.


Second, the measures will only prove effective if clear sanctions are provided. Unfortunately, the penalty provisions are formulated in general terms (eg, "punished according to the law") and rely on higher-level statutes for enforcement – such as the Civil Code, the Anti-Unfair Competition Law and IP laws – which may lead to a lack of cohesion when it comes to determining legal liability.


The measures could also have significant implications for IP strategy and policy. In an era where AI-generated content increasingly blurs the line between human creativity and machine assistance, companies might need to reassess their IP frameworks to better protect both human-generated and AI-assisted works. This could lead to a reevaluation of copyright eligibility, licensing agreements and enforcement mechanisms, potentially prompting policymakers to update existing IP regulations to address these emerging challenges.


Regardless, the issuance of the measures marks another welcome step by the Chinese government in regulating AI. It is hoped that, based on extensive practical experience, China will continue to enhance its governance framework through legal upgrades (eg, enacting dedicated AI laws) and the development of detailed supporting standards, thereby addressing the current issues of fragmented and scenario-specific legislation.