The timeline of e-personhood: a hasty assumption or a realistic challenge?
E-personhood is a term proposed in a draft report by the European Parliament on civil law rules on robotics. This legal status would assign rights and responsibilities to the most capable AI agents. An intense debate about its usefulness is taking place in the EU.
In January 2017, the EU Parliament’s Legal Affairs Committee published a report proposing “electronic personalities” for AI agents and self-learning robots. The first response to this controversial proposal was an open letter to the European Commission from AI experts across EU Member States, voicing their dissent on both legal and ethical grounds. In addition, the recent launch of the European Commission’s future strategy on AI could be seen as a “blocking wall”, at least for now, to further discussion of e-personhood at EU level.
Report by the EU Parliament
The EU Parliament’s report triggered a discussion of the numerous challenges that the introduction of AI robots into our lives raises. Some of the relevant questions are: Could self-learning robots be responsible for their actions? Could they be held liable for hurting people or damaging property? How should the IP of an invention developed by an AI agent be protected? Is the attribution of “electronic personality” the answer? The report proposes several measures: providing manufacturers with ethical guidance on production and use, creating an EU agency for robotics and AI, introducing a mandatory insurance scheme, and requiring companies to report the contribution of AI agents to their economic results.
Importantly, the report also proposes the introduction of “electronic personality”, which has become its most controversial element. Despite the vagueness of the concept, the report stresses that e-personhood would reflect a legal status similar to that of companies: it would create specific legal rights and responsibilities for AI agents, but by no means aims at granting human rights to robots. The text also invited further suggestions on this legal status. Mindful of the limits of e-personhood, the report’s rapporteur, Mady Delvaux, stated that the aim of the proposal was to initiate discussion of robot liability issues rather than to suggest that e-personhood is the optimal solution.
The open letter response
The report met strong opposition: an open letter signed by more than 150 European AI experts, scientists, academics, politicians and industry leaders was addressed to the European Commission. The signatories argued that the legal, economic, societal and ethical implications of AI and robotics should be considered without bias, in particular without overvaluing the actual capabilities of current systems or underestimating the distance AI still has to travel before reaching the level of “general intelligence”. They elaborated that attributing legal personality to robots is a suboptimal solution from both an ethical and a legal perspective, as neither the natural-person model nor the legal-entity model would solve the AI liability issue. Academics such as Nathalie Nevejans and Noel Sharkey have likewise warned that seeking legal personhood for robots risks creating safe harbours in which manufacturers could escape liability for their creations.
Re-evaluation of e-personhood
The legal system’s inability to deal with liability issues arising from AI suggests that the concept of e-personhood may need to be re-evaluated; from my perspective, a new legal category may be required to respond effectively to the legal challenges AI poses. Under such a “legal person” concept, AI agents, or at least those with a substantial degree of autonomy, could possess a legal status of purely symbolic character, understood as a bundle of the responsibilities of all the relevant parties (users, producers and so on). A registration system could be implemented, so that every operating AI agent would have its relevant parties, together with a detailed profile, recorded and visible. The compulsory insurance scheme presented in the Parliament’s report, fed by the wealth a robot generates, could operate under this company-like legal status: damage and failures caused by robots would be funded solely by the money they produce.
In the end, this alternative solution corresponds to the underlying rationale of the report, which, according to Delvaux, is: “The idea behind coming up with an electronic personality was not about giving human rights to robots — but to make sure that a robot is and will remain a machine with a human backing it.”
|Written by Konstantinos Amoiridis for Law Blogs Maastricht|