AI tools as trustworthy public interest technologies under recent EU legislative instruments


Technological developments challenge consumer protection in the digital sphere. One adaptation that could make the digital environment safer and more trustworthy is to provide consumers with explanations of the AI-based algorithmic mechanisms used by intermediary platforms.

Consumer protection in the context of employing AI-tools
Consumer protection is a key aspect of AI governance in the EU digital sphere. One of the underlying questions discussed during the 6 December 2022 Seminar on the AI-Assisted Consumer was how the EU’s approach to regulating the use and development of AI-tools in a trustworthy and human-centric way could be improved to better ensure the online safety of consumers and EU citizens.

AI as public interest technology
AI as public interest technology denotes an approach in which these innovative tools are developed and used ethically and fairly, so that the benefits of AI can be widely shared across the private and public sectors. Developed as public interest technologies, AI-tools could bring socio-economic benefits to a spectrum of industries and social activities and provide key competitive advantages to the European economy. Online platforms use AI-tools both to personalise service delivery and to optimise e-commerce.

Unfair misleading practices by intermediaries
However, the same AI techniques may bring about certain negative consequences. Nowadays, intermediaries substantially influence the online infrastructure. For AI technologies to function in the public interest, the EU must aim to ensure that AI-tools on online platforms observe consumer rights and safety. Following the presentations by Constanca Rosca and Dr Joanna Strycharz, questions arose as to what influence on consumer behaviour is acceptable and how it can be distinguished from unfair, misleading practices by intermediaries. The discussed AI-based practices of dark patterns, scarcity bias, and dataveillance, which primarily pursue the commercial interests of online platforms, undermine the trustworthiness of digital services relying on AI-tools.

The AI-tools on online platforms challenge the sufficiency of the current EU consumer protection safeguards in different ways. Consumers may encounter dark patterns and be misled into making potentially harmful decisions regarding the processing of their personal data. The creation of scarcity bias by AI leaves consumers under the false impression that a service or a product will be unavailable later if they do not purchase it soon (publications in the field of consumer protection have previously pointed to retailers and airlines using such scarcity signals). Another AI-based practice, described as dataveillance, involves the excessive, continuous, and sometimes unspecific processing of users’ digital traces across platforms. Dr Monika Leszczyńska and Dr Joanna Strycharz elaborated upon how consumer behaviour is detrimentally affected when consumers perceive that they are being traced online.

Such practices are also difficult to recognise. Consumers’ lack of knowledge about how AI-based algorithms are deployed interferes with their ability to hold businesses responsible when something goes wrong and thus undermines the reliability of the online environment. Consumers who are unable to understand how AI technologies are used do not perceive the online infrastructure as safe and trustworthy. To advance the trustworthiness of AI-tools, these negative consequences need to be addressed.

Increasing transparency of AI processes
Regulating the use of AI-tools can improve consumer protection by addressing the risks described above. These dangers can be addressed by shaping AI-tools to function as public interest technologies. One adaptation may be to increase the transparency of AI processes. The ability to understand AI decisions would help ensure safety and fairness by allowing consumers to confirm that AI algorithms adhere to regulations or ethical guidelines. As a result, AI technologies become more trustworthy.

Recent months saw significant developments in the EU in regulating the obligations of intermediary platforms employing AI-tools. The EU policy approach to AI governance evolves as AI technologies become more widespread and their potential impact becomes clearer. Legislative instruments such as the General Data Protection Regulation, the Digital Services Act (DSA), and the proposed Artificial Intelligence Act strengthen transparency obligations for data controllers, intermediaries of digital services, and providers of AI systems to explain how they use the data collected from users and how their AI-based algorithms make decisions. By imposing transparency obligations, these instruments aim to increase the trustworthiness of intermediaries and of the deployment of AI.

Trustworthiness
Dr Vigjilenca Abazi suggested rethinking the regulatory avenues for holding intermediaries accountable for their business models. The accountability regime envisaged by the DSA does not challenge the central role of intermediaries in the digital architecture. It improves consumer protection by providing avenues to recognise previously hidden, potentially harmful uses of AI-tools and to hold intermediaries accountable when they fail to address them. Consumers with access to comprehensible explanations can be more confident in the safety and reliability of these services. Additionally, access to explanations of AI-based mechanisms allows researchers to analyse the available data and find methods to mitigate the risks of employing AI-tools. This access of national supervisory authorities and the public to information on the functioning of AI-based algorithms is reflected in the DSA and the proposed AI Act. By aiming to increase trustworthiness, the EU shapes AI-tools as public interest technologies.

However, the transparency approach has shortcomings in pursuing a genuinely safe environment for consumers. Where the amount of information provided exceeds consumers’ information-processing capacities, the advantage of transparent explanations of AI algorithms would be impeded. It is therefore important to strategically adapt online platforms’ interfaces to account for such considerations. While there is a growing consensus on general ethical AI principles, such as explainability and transparency, gaps remain regarding the implementation of these strategies. Substantial guidance on the issue exists, but the discrepancies between principles, such as those in the Ethics Guidelines for Trustworthy AI, and their implementation by regulators or AI developers diminish their influence in practice.

By Anna Medvedova, third-year bachelor’s student in European Law and Marble student of the project “The AI-assisted consumer”, and Dr Anke Moerland, Associate Professor of Intellectual Property Law
 

Sources:

Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act)

Regulation 2022/1925 of the European Parliament and of the Council of 14 September 2022 on contestable and fair markets in the digital sector and amending Directives (EU) 2019/1937 and (EU) 2020/1828 (Digital Markets Act)

Regulation 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services and amending Directive 2000/31/EC (Digital Services Act)

The European Consumer Organisation; Regulating AI to Protect the Consumer (2021) Position Paper on the AI Act

T. Züger, H. Asghari; AI for the public. How public interest theory shifts the discourse on AI (2022) AI & Society

G. Contissa, F. Lagioia, M. Lippi, H.-W. Micklitz, P. Pałka, G. Sartor; Towards Consumer-Empowering Artificial Intelligence (2018) International Joint Conference on Artificial Intelligence

N. Helberger, M. Sax, J. Strycharz, H.‑W. Micklitz; Choice Architectures in the Digital Economy: Towards a New Understanding of Digital Vulnerability (2022) Journal of Consumer Policy
