In response to the rapid development of artificial intelligence (AI) technologies, the Cyberspace Administration of China (the CAC) recently issued two draft regulations for public consultation: the Measures for Labelling Artificial Intelligence-Generated or Synthetic Content (the Draft AI Labelling Measures) and the Cybersecurity technology—Labelling method for content generated by artificial intelligence (the Draft Labelling Method Standard). The Draft Labelling Method Standard is a mandatory national standard that serves as the supporting and implementing rule for the Draft AI Labelling Measures. Both instruments aim to address deepfake-related risks and to ensure the authenticity and credibility of publicly available information.

Here are the key takeaways:

  • Applicable subjects and scope: The Draft AI Labelling Measures primarily apply to internet information service providers (the Service Providers), including providers of AI content generation and online content dissemination services. Entities and institutions that do not provide services to the public in Mainland China fall outside their scope; however, providers located outside China that target the public in Mainland China are likely to be caught by these regulations.
  • Service Providers’ labelling obligations: The Draft AI Labelling Measures define two types of labels: explicit labels and implicit labels. Explicit labels are perceptible to users and are displayed as text, sound, images or other forms. Implicit labels, by contrast, are embedded in the metadata of AI-generated content and can only be extracted through technical means, so they remain invisible to users (see the illustrative sketch after this list). The Draft AI Labelling Measures outline the scenarios in which explicit or implicit labels are required, and the Draft Labelling Method Standard provides detailed, context-specific guidelines for implementing both types of labels, which Service Providers must follow.
  • Additional obligations for Service Providers: In addition to labelling, the Draft AI Labelling Measures impose further obligations on Service Providers, including, amongst others, incorporating labelling-related provisions into user service agreements and submitting label-related materials when undergoing algorithm filing or security assessment procedures.
  • Differentiated labelling scenarios: The Draft AI Labelling Measures distinguish between content that is definitely, likely or merely suspected to be AI-generated, and different actions are required in each case. For example, where a file’s metadata contains an implicit label, the online content dissemination service provider must add a prominent label informing users that the content is AI-generated.
  • Obligations of other regulated parties: App stores and users are also subject to labelling obligations. App stores must verify whether Service Providers have enabled the labelling function before allowing an app to be made publicly available. Users uploading AI-generated content to online platforms are required to actively declare and use the labelling function provided by the platform.
  • Penalties for violations: While the Draft AI Labelling Measures do not specify penalties for non-compliance, enforcement is left to the CAC, which may refer to relevant laws, administrative regulations and departmental rules to impose penalties.
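
For readers interested in the mechanics behind implicit labels, the following is a minimal, purely illustrative sketch of how a machine-readable label could be embedded in, and read back from, an image file's metadata using Python's Pillow library. The field name and value format shown here are hypothetical; the Draft Labelling Method Standard prescribes the actual metadata fields, identifiers and placement that Service Providers would need to use.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Hypothetical field name and value format, for illustration only; the Draft
# Labelling Method Standard specifies the actual metadata schema.
LABEL_KEY = "AIGC"

def embed_implicit_label(src_path: str, dst_path: str, producer_id: str) -> None:
    """Write a machine-readable label into a PNG text chunk (invisible to viewers)."""
    info = PngInfo()
    info.add_text(LABEL_KEY, f"ai-generated;producer={producer_id}")
    with Image.open(src_path) as img:
        img.save(dst_path, pnginfo=info)

def read_implicit_label(path: str) -> str | None:
    """Extract the label, if present, by inspecting the file's metadata."""
    with Image.open(path) as img:
        return img.text.get(LABEL_KEY)  # textual metadata chunks of the PNG
```

A dissemination platform that detects such a metadata field could then surface a prominent explicit label to its users, which is the behaviour described in the "Differentiated labelling scenarios" item above.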

Our Take

Aligning with China’s stated commitment to playing a leading role in global AI governance, these regulations, once adopted, are designed to enhance the traceability and transparency of AI-generated content, reduce the spread of false or misleading information and better protect the rights of content creators and the general public. We expect the Draft AI Labelling Measures and the Draft Labelling Method Standard to accelerate the development of AI detection technologies in China.
