AI-assisted consumer: Is the proposed European Artificial Intelligence Act ready to embrace ChatGPT?

By Shu Wang

ChatGPT’s rapid rise to virality has sparked both enthusiasm for using the product and concerns about consumer protection. Protecting consumers in the age of AI was also a central topic at the AI-assisted consumer seminar, co-organized by MaRBLe, GLaw-Net, and IGIR.

ChatGPT, powered by a Large Language Model (LLM), is a general-purpose AI (GPAI) that can be deployed for many uses. Currently in its first reading, the Artificial Intelligence Act (AIA) may not be fully prepared to embrace the surge of ChatGPT, for three primary reasons: the exclusion of GPAI, inappropriate obligation shifting, and an unclear transparency level.

Exclusion of GPAI
Adopting a risk-based approach, the AIA categorizes AI systems into four tiers: unacceptable risk, high risk, low risk, and minimal risk, imposing different obligations on developers for each tier. The classification of an AI system as high-risk hinges on the system’s "intended purpose" (Article 3(12) of the AIA). An AI system is categorized as high-risk if it is intended to be used in one of the areas listed in Annex III (Article 6(2) of the AIA), or if it is intended to be used as a safety component of a product, or is itself a product, covered by the legislation listed in Annex II and required to undergo third-party conformity assessment before being placed on the market or put into service (Article 6(1) of the AIA).

Nevertheless, such a purpose-based approach does not appear appropriate for ChatGPT. As a GPAI, ChatGPT is not tailored to a specific purpose; its versatility allows for countless applications, ranging from medical treatment to legal consulting. For GPAI, determining the exact purpose proves challenging. Moreover, it becomes all too easy for developers to declare a purpose that does not qualify as high-risk, thereby circumventing regulation and undermining consumer rights.
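The difficulty can be illustrated with a deliberately simplified sketch of the purpose-based test (the purpose labels below are hypothetical, not the Act’s actual wording): classification turns on a single declared purpose, which a GPAI simply does not have.

```python
# Deliberately simplified sketch of the AIA's purpose-based test.
# The purpose labels are hypothetical, not quoted from Annex III.
ANNEX_III_PURPOSES = {"credit scoring", "recruitment", "law enforcement"}

def is_high_risk(intended_purpose: str) -> bool:
    # Classification hinges entirely on the purpose the provider declares.
    return intended_purpose in ANNEX_III_PURPOSES

print(is_high_risk("credit scoring"))        # True: caught by Annex III
print(is_high_risk("general-purpose chat"))  # False: a GPAI with no declared
                                             # specific purpose slips through
```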

Fortunately, scholars, the Council, and lawmakers such as Dragoș Tudorache and Brando Benifei have noticed this problem and proposed bringing GPAI within the scope of the AIA. Whether this suggestion will be followed deserves close attention in the future.

Inappropriate obligation shifting
Even if GPAI were included in the AIA, the purpose-based approach remains a thorny issue when examining the value chain. Given GPT-4’s excellent performance in professional exams, it would not be surprising if this model were extensively employed via an Application Programming Interface (API) in sectors that may eventually affect individuals’ health, legal standing, and commercial situation, among other spheres.

Yet, when a distributor alters the purpose of a high-risk AI system, the obligations of upstream developers, such as OpenAI, shift to downstream distributors using ChatGPT via the API (Articles 28(1)(b) and 28(2) of the AIA). If the AIA provisions remain unchanged, downstream developers that narrow the general purpose of ChatGPT to a specific one would inevitably take on the role of provider, effectively exempting OpenAI from provider obligations. The crux of the matter is that the new provider may not even possess the AI system, let alone have the most comprehensive knowledge of it, and may therefore be unable to offer sufficient consumer protection.
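To make the value-chain concern concrete, the sketch below shows how a downstream company might repurpose GPT-4 for a specific use with nothing more than a tailored prompt. It is an illustrative sketch only, assuming the pre-1.0 OpenAI Python client; the "legal consulting" service and its function name are hypothetical.

```python
# Illustrative sketch: a hypothetical downstream "legal consulting" service
# built on GPT-4 via the OpenAI API (pre-1.0 Python client). Under Articles
# 28(1)(b) and 28(2) AIA, repurposing the system like this could make the
# downstream company the "provider", even though it never possesses the model.
import openai

openai.api_key = "..."  # the downstream company's own API key

def legal_consultation(question: str) -> str:
    # The specific purpose is set entirely through the prompt; the underlying
    # general-purpose model remains behind OpenAI's API.
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You are a legal adviser answering consumer-law questions for EU consumers."},
            {"role": "user", "content": question},
        ],
    )
    return response["choices"][0]["message"]["content"]
```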

Unclear transparency level
In addition to the problems of the purpose-based approach regarding GPAI, the AIA also faces challenges concerning the level of transparency, owing to the ‘black box’ nature of deep learning and the inability of developers to explain the mechanisms and emergent abilities of the models underlying ChatGPT.

A technical report by the Joint Research Centre of the European Commission outlines three transparency levels: implementation, specifications, and interpretability. Implementation requires disclosure of technical principles and details, such as algorithmic weights and thresholds, so that the developer can provide causal explanations. Specifications provide insights into a model’s life cycle. Interpretability means understanding the fundamental mechanism of the model.

ChatGPT relies on deep learning, which inherently suffers from the black-box problem. According to Recital 47 and Article 13 of the AIA, high-risk AI systems must maintain a degree of transparency that allows users to “interpret the system’s output and use it appropriately.” In practice, on the one hand, it seems impossible for the AIA to enforce the implementation level, as doing so would exclude numerous deep learning AI systems, ChatGPT among them. On the other hand, the requirement that the user be able to “interpret the system’s output and use it appropriately” appears ambiguous, leaving consumers uncertain about the exact level of explanation they are owed.

What is worse, developers may not be able to provide the interpretability level for ChatGPT. Researchers have observed LLMs’ emergent abilities, whereby new capabilities not designed by developers arise once the model reaches a certain scale, but they do not know the scale threshold at which these abilities appear or how they emerge. Researchers have also acknowledged their incomplete understanding of the mechanism, failure modes, and capacities of GPT-3, which has 175 billion parameters across 96 layers. While OpenAI has not disclosed the data volume and number of layers for GPT-4, it is reasonable to assume that these figures have increased, thereby further exacerbating the challenge of providing explanations.

Conclusion
It seems that the AIA is not fully prepared to handle AI systems like ChatGPT effectively. To safeguard consumers, addressing the issues of GPAI exclusion, obligation shifting, and the transparency level is crucial and deserves close attention when the AIA takes effect.

By Dr Anke Moerland, Associate Professor of Intellectual Property Law in the European and International Law Department, and Shu Wang, third-year bachelor’s student in European Law and MaRBLe student on the project “The AI-assisted consumer”

Sources:
Hamon R et al., ‘Bridging the Gap Between AI and Explainability in the GDPR: Towards Trustworthiness-by-Design in Automated Decision-Making’ [2022] IEEE Computational Intelligence Magazine

Jacobs M and Simon J, ‘Assigning Obligations in AI Regulation: A Discussion of Two Frameworks Proposed by the European Commission’ [2022] Digital Society

Kaminski ME, ‘The Right to Explanation, Explained’ [2019] Berkeley Technology Law Journal

Kraljevic Z et al., ‘MedGPT: Medical Concept Prediction from Clinical Narratives’ (2021) arXiv <https://arxiv.org/abs/2107.03134> accessed 21 April 2023

OpenAI, ‘GPT-4 Technical Report’ (2023) arXiv <https://arxiv.org/abs/2303.08774> accessed 21 April 2023
