What legal risks should in-house counsel consider when dealing with an AI tool
Global | Publication | March 2023
The launch of ChatGPT in November 2022,1 a chatbot developed by OpenAI and built on top of the GPT-3 family of large language models (LLMs), has sparked interest in this cutting-edge technology across all sectors. As more people recognise its power, owing to its general-purpose nature (it is not designed for a specific task or context) and its human-like engagement in conversational dialogue, it is gaining unprecedented traction.
Prior to ChatGPT, there were already a few successful applications designed for specific tasks, built on underlying models such as the GPT family and catering to specific needs in different domains. For instance, Jasper's text-generation products2 have boosted marketers' content creation.
Despite controversy over whether general-purpose AI will be adopted more widely and take market share from custom-trained applications, we believe there will be a boom in the creation and deployment of AI tools that take advantage of advances in LLMs. Current use cases include chatbots, search engines, summarisation, content creation (e.g., code, emails and documents) and language translation3 (Intended Scope).
While many complex legal issues arise from the development, commercialisation and deployment of AI technologies, we intend to address certain key issues in our article series. This first article provides a high-level roadmap and explores the legal aspects that in-house counsel should consider when an enterprise procures LLM-based AI tools from a third-party service provider (Service Provider) for the Intended Scope, to empower employees and improve the efficiency of current workflows within the company.
Under the PRC Legal Framework
China has a unique regulatory approach to developing and distributing AI technologies, which from a lawyer's perspective can essentially be broken down into software and data. Any market player rolling out AI technologies within China is required to comply with the current legal framework.
This framework includes general legislation governing cyberspace (including, inter alia, the Cybersecurity Law,4 the Data Security Law,5 the Personal Information Protection Law,6 and the Administrative Measures for Internet Information Services7) and specific regulations targeting the AI area (including, among others, Provisions on the Administration of Algorithm-generated Recommendations for Internet Information Services8 and Provisions on the Administration of Deep Synthesis of Internet-based Information Services9).
As the current AI technologies are not explicitly covered by China's accession to the WTO,10 we anticipate regulatory challenges for AI services that are administered and operated offshore (such as ChatGPT11) but delivered to the Chinese market on a cross-border basis. Legal issues around cross-border data transfers (i.e., outbound transfers in the foregoing case) will further complicate the feasibility of a cross-border delivery model. In terms of localised service delivery, local tech giants and start-ups are racing to launch their own AI-powered products.12
Qualification of the Service Provider
When an enterprise conducts due diligence on a Service Provider, one key question is whether the Service Provider is legally permitted to provide the requested AI tool. Depending on the nature and characteristics of the service offering, the Service Provider may be subject to a corresponding licensing regime.
If the service offering includes functionalities captured by the Telecom Business Classification Catalogue,13 such as information search and real-time interaction, the Service Provider may need to obtain a value-added telecom service licence covering information services, issued by the competent administration of communications. The Provisions on the Administration of Deep Synthesis of Internet-based Information Services specifically require that online publishing services, online cultural activities and online audio-visual program services comply with the respective regulatory requirements.
In-house counsel should also consider whether the Service Provider has implemented comprehensive and robust security systems to safeguard its infrastructure and the models and data running on it. Such an assessment may cover compliance with regulatory requirements under the Cybersecurity Law (e.g., implementing a network security graded protection system), applicable national standards14 and sectoral standards15/best practice.
Contracting Issues
Many enterprises already use cloud-based software (SaaS), so the relevant legal issues may not be unfamiliar to in-house counsel. The method of delivering AI tools is expected to remain largely the same: B2B AI tools will be provided to the enterprise on a subscription basis in the cloud. In certain circumstances, an on-premises licence may still be a feasible option for enterprises with special needs. In-house counsel would need to review the licensing terms thoroughly to ensure that the rights and obligations between the provider/licensor and the customer/licensee are reasonably allocated.
On the other hand, B2C AI services will likely remain accessible to individual subscribers through click-accept licensing terms.
Data protection
Issues concerning cross-border data transfers are beyond the scope of this article (as the data will likely need to be localised under applicable Chinese laws). In-house counsel should, however, be mindful of the potential data issues involved in the entire workflow.
If the data fed to the AI tool (as prompts) contains personal data, it is necessary to consider whether a robust data policy is in place to collect, transfer, process and store such data. It is critical to clarify whether such information will be used by the Service Provider to pre-train or fine-tune the underlying AI models. If so, either explicit consent should be obtained or data masking (anonymisation) undertaken.
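As a minimal, illustrative sketch only (the regex patterns and the `mask_personal_data` helper below are hypothetical and deliberately simplistic), prompts could be masked before they leave the enterprise; production-grade anonymisation would require far more robust detection:

```python
import re

# Hypothetical, non-exhaustive patterns for common personal data in prompts.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b\d{3}[-\s]?\d{4}[-\s]?\d{4}\b"),  # e.g., a common mobile number format
    "id_number": re.compile(r"\b\d{17}[\dXx]\b"),             # e.g., PRC resident ID format
}

def mask_personal_data(prompt: str) -> str:
    """Replace matched personal data with placeholder tokens before the prompt leaves the enterprise."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()}_REDACTED]", prompt)
    return prompt

# Usage: mask the prompt before it is sent to the Service Provider.
raw_prompt = "Draft a reply to Ms Chen (chen@example.com, 138-1234-5678) about her order."
print(mask_personal_data(raw_prompt))
# Draft a reply to Ms Chen ([EMAIL_REDACTED], [PHONE_REDACTED]) about her order.
```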
Additionally, both the business and legal teams should carefully consider whether information relating to the enterprise's business (which may qualify as trade secrets) may be fed to the AI tool and how to address the risks (if any, e.g. loss of trade secrets) to the business.
The legal framework governing data protection and usage is constantly evolving. Accordingly, it is important to keep track of developments in the regulatory landscape.16
Cybersecurity
It is important to be aware of the risks of malware attacks and data breaches when using in-cloud AI services. In addition to assessing the reliability of the Service Provider as indicated above, common approaches to mitigating these risks include strengthening access security (e.g., by enforcing strong passwords and enabling multi-factor authentication), monitoring suspicious account activities (see the sketch below) and regularly updating software and systems (with security patches that address known vulnerabilities).
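For instance, a simple sketch of the account-monitoring point might look like the following (the event format, threshold and window are all hypothetical):

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical sign-in events exported from the AI service's audit log.
events = [
    {"user": "alice", "ok": False, "at": datetime(2023, 3, 1, 9, 0)},
    {"user": "alice", "ok": False, "at": datetime(2023, 3, 1, 9, 1)},
    {"user": "alice", "ok": False, "at": datetime(2023, 3, 1, 9, 2)},
    {"user": "bob",   "ok": True,  "at": datetime(2023, 3, 1, 9, 5)},
]

THRESHOLD = 3                   # this many failed attempts...
WINDOW = timedelta(minutes=10)  # ...within this window triggers an alert

def suspicious_accounts(events):
    """Return accounts whose failed sign-ins meet the threshold within the window."""
    failures = defaultdict(list)
    for e in events:
        if not e["ok"]:
            failures[e["user"]].append(e["at"])
    flagged = set()
    for user, times in failures.items():
        times.sort()
        for i in range(len(times) - THRESHOLD + 1):
            if times[i + THRESHOLD - 1] - times[i] <= WINDOW:
                flagged.add(user)
                break
    return flagged

print(suspicious_accounts(events))  # {'alice'}
```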
Intellectual Property
One hotly debated issue around generative AI technology is whether copyright can subsist in content generated by, or with the assistance of, an AI tool. This is especially relevant for enterprises engaged in content creation, such as marketing companies, which may be grappling with copyright issues.
The question of whether copyright can subsist and will be recognised under PRC law, and who will own it, may well depend on the facts. The copyright implications of content generated by an AI tool with little or no human intervention are different from those where the AI tool simply makes cosmetic changes to the user's writing.
Another important factor to consider is the risk of copyright or trade mark infringement. As AI models are typically trained using data scraped from the internet, and are opaque to users, it can be difficult to rule out the risk that content generated by AI could potentially infringe existing copyright or trade marks of a third party.17
Similarly, there is a risk that a user's input (which might contain the user’s own intellectual property rights, such as copyright) may be misused by other users of the same AI tools, or by tool developers, when they use the same underlying AI models via API.18
Bias and Discrimination
Bias and discrimination are common ethical concerns around AI technologies.19 For instance, if an enterprise uses an AI tool to screen job candidates, the tool may make decisions that exclude certain demographics, based on its learning from historical data that may itself be biased or discriminatory.20 Employment discrimination is prohibited by the Employment Promotion Law,21 so this may result in legal liability for the enterprise.
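By way of illustration only, one basic first-line check an enterprise might run on such a tool's outputs is to compare selection rates across demographic groups. The sketch below uses hypothetical screening outcomes and a hypothetical `selection_rates` helper; it is a simple audit heuristic, not a test mandated by PRC law:

```python
from collections import Counter

# Hypothetical screening outcomes from an AI hiring tool: (group, selected?).
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", False),
]

def selection_rates(outcomes):
    """Selection rate per demographic group: selected / total."""
    totals, selected = Counter(), Counter()
    for group, picked in outcomes:
        totals[group] += 1
        if picked:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

print(selection_rates(outcomes))  # {'group_a': 0.75, 'group_b': 0.25}
# A large gap between groups (here 0.75 vs 0.25) is a signal to investigate
# the tool's training data and decision criteria before relying on it.
```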
The so-called “black box” problem in machine learning makes it difficult for a human to understand and monitor the decision-making process within the algorithm.22 Researchers are developing explainable AI techniques to increase the transparency of AI systems and enable humans to understand how a particular decision was reached.23 Additionally, constraints can be put in place by the Service Provider and/or the enterprise user to make generative use cases safer; these generally include system designs that provide for the following (see the illustrative sketch after this list):24
- Keeping a human in the loop.
- End user access restrictions.
- Post-processing of outputs.
- Content filtration.
- Input/output length limitations.
- Active monitoring.
- Topicality limitations.
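Purely as an illustrative sketch (the `call_model` function below is a hypothetical stand-in for a Service Provider's API), the following shows how several of the constraints listed above, i.e. input/output length limitations, simple content filtration and active monitoring via logging, might be layered around a generative AI call:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_guardrails")

MAX_INPUT_CHARS = 2_000    # input length limitation
MAX_OUTPUT_CHARS = 4_000   # output length limitation
BLOCKLIST = {"trade secret", "internal financials"}  # hypothetical filtration terms

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for the Service Provider's model API."""
    return f"Model response to: {prompt}"

def guarded_generate(prompt: str) -> str:
    # Input length limitation.
    if len(prompt) > MAX_INPUT_CHARS:
        raise ValueError("Prompt exceeds permitted length")
    # Simple content filtration on the input.
    lowered = prompt.lower()
    if any(term in lowered for term in BLOCKLIST):
        log.warning("Blocked prompt containing restricted content")
        raise ValueError("Prompt contains restricted content")
    # Active monitoring: record each accepted call for later review.
    log.info("Prompt accepted (%d chars)", len(prompt))
    # Post-processing of outputs: truncate anything over the output limit.
    return call_model(prompt)[:MAX_OUTPUT_CHARS]

print(guarded_generate("Summarise this customer email for me."))
```

In practice, such wrappers are typically combined with human review of high-stakes outputs, keeping a human in the loop.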
A practical way for enterprises with current technical constraints to mitigate risks in this regard is to ensure that the Service Provider has in place ethical guidelines and principles for the development and deployment of AI systems.25 Such guidelines should be consistent with the moral values upheld by the enterprise.
Unfair Competition
The power of AI models may be used by certain business operators to undermine competitors' protected interests and gain unfair advantages. We therefore expect that the framework of competition and anti-trust laws will also be highly relevant to the development and deployment of AI technologies.
On the one hand, Service Providers themselves may be subject to regulatory scrutiny, depending on the circumstances. For example, a picture-creation business operator might scrape picture data from a competitor's data pool to fine-tune its own model, reducing the competitor's market share. Such data scraping may be deemed a violation of the Anti-unfair Competition Law.26
On the other hand, depending on the role of the AI tools in the workflow, enterprise users of the relevant services may face challenges from regulators or competitors regarding the nature of the work product generated by the AI tool, which is in turn incorporated into their products and services for downstream customers.
Again, the black box problem may pose challenges in monitoring the generated content to ensure that it complies with regulatory parameters (e.g., not creating or disseminating false or misleading information about competitors).27 This would also complicate the issue of determining the "malicious intent" of the AI tool user concerned, which is a constituent element for certain unfair practices.28
Johnny Liu also contributed to this article.