Towards the invention of Dworkin’s Hercules?
What would a world look like in which judicial decisions were taken not by judges, but by intelligent machines? Or one in which, at least, those machines served as crucial decision support for judges – or perhaps even simply for law clerks – in taking judicial decisions?
On case prediction and automated judicial decision-making
Of course, imagining such a world is no longer science fiction; more and more papers, blogs, public discussions, interviews and books portray this as our future. Coupled with certain apocalyptic ideas, some commentators, including the prominent Stephen Hawking, even believe that artificially intelligent machines would then take over the world and destroy the human race. The reality is, I believe, much less frightening than it is sometimes portrayed, in particular when it comes to intelligent machines that would replace or complement judges.
Status quo and challenges
Research on the prediction of judicial decisions on the basis of past decisions or other factors has a long history. It is mostly computer scientists who have been delving into this field for decades, trying, on the one hand, to computerise legal reasoning and, on the other, to predict the outcome of judicial decisions. For example, efforts have been made to create software that processed a particular piece of legislation; the software then asked the user very specific questions, to be answered with ‘yes’ or ‘no’, and a long series of such questions led the system to a legal conclusion for the user’s case. One can imagine that such an approach could be useful only for simple cases based on a single piece of legislation, but not for complex cases requiring the interpretation of several legislative instruments.
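The question-driven approach described above can be sketched in a few lines of code: the system walks a hand-coded tree of yes/no questions and returns a pre-determined legal conclusion. The rules below are entirely invented for illustration and correspond to no actual legislation or system.

```python
# A minimal sketch of a question-driven legal expert system: a hand-coded
# tree of yes/no questions leading to a fixed legal conclusion.
# The questions and conclusions are hypothetical, for illustration only.

DECISION_TREE = {
    "question": "Was the contract concluded in writing?",
    "yes": {
        "question": "Was the buyer a consumer?",
        "yes": "Consumer protection rules apply: the contract may be rescinded.",
        "no": "General contract law applies: the contract is binding.",
    },
    "no": "The contract is void for lack of written form.",
}


def conclude(tree, answers):
    """Walk the tree using a mapping of question -> 'yes'/'no' answers.

    Inner nodes are dicts holding a question; leaves are conclusion strings.
    """
    node = tree
    while isinstance(node, dict):
        node = node[answers[node["question"]]]
    return node
```

The sketch also makes the limitation noted above visible: every possible path must be foreseen by the programmer, which works for a single, simple piece of legislation but breaks down as soon as several instruments interact or interpretation is needed.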
Furthermore, a considerable amount of research has recently been conducted on predicting the outcomes of cases. Be it before the European Court of Human Rights or US federal and state jurisdictions, this research shows that outcomes can indeed be predicted with a stunning degree of accuracy. However, while this type of case prediction might have practical and theoretical value for common law systems following the doctrine of stare decisis, it is less clear how it would work in civil law jurisdictions, where judges rely almost exclusively on legislation and where discrepancies between the judgments of different (or even the same!) courts are much more likely. Case prediction and automated judicial decision-making would potentially be possible if legislation used so-called ‘controlled natural language’, that is, only a limited set of terms with precise, pre-determined meanings. Controlled natural language would make it possible to avoid interpretative conundrums, not only because the notions would have a clear meaning, but also because – in order to avoid the interpretative openness of notions – every potential instance of the legislation’s application would have to be foreseen by the law. One can only imagine the unbearable length of legislative instruments seeking to regulate with precision every single example of their use.
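To give a flavour of how outcome prediction from past decisions works, here is a deliberately toy sketch: new cases are compared, by simple bag-of-words text similarity, to past cases, and the outcome of the most similar past case is returned. The case descriptions and labels are invented; the actual studies mentioned above use far richer features and far more sophisticated models.

```python
# A toy sketch of case-outcome prediction from past decisions:
# bag-of-words cosine similarity to a (fictitious) corpus of past cases.
from collections import Counter
import math

# Hypothetical past cases (description, outcome) -- invented for illustration.
PAST_CASES = [
    ("applicant detained without judicial review for months", "violation"),
    ("detention reviewed promptly by an independent court", "no violation"),
    ("no access to a lawyer during lengthy detention", "violation"),
    ("complaint examined in adversarial public hearing", "no violation"),
]


def vectorise(text):
    """Represent a text as a word-count vector."""
    return Counter(text.lower().split())


def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0


def predict(new_case):
    """Return the outcome of the most textually similar past case."""
    v = vectorise(new_case)
    best = max(PAST_CASES, key=lambda case: cosine(v, vectorise(case[0])))
    return best[1]
```

Even this toy version illustrates the point made above: prediction works only to the extent that new cases resemble old ones, which is precisely why its value is clearer in a system of stare decisis than in civil law jurisdictions.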
General Data Protection Regulation
But would such automated judicial decision-making even be allowed in the EU? The General Data Protection Regulation (GDPR) addresses ‘automated individual decision-making’, that is, ‘a decision based solely on automated processing’, in Article 22. This provision gives the data subject ‘the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.’ It is rather clear that any judicial decision regarding a particular data subject would have a legal effect for that data subject, who, in turn, has the right to refuse such a decision. According to Article 22 GDPR, such an automated decision would be allowed only if it is necessary for entering into, or performance of, a contract between the data subject and a data controller (which a judicial decision is not), if it is authorised by Union or Member State law (which, to the knowledge of the author, is so far not the case) or if it is based on the data subject’s explicit consent, which seems to be the only applicable option. Directive 2016/680 on data protection in criminal matters is even stricter on this issue, as it prohibits automated decision-making altogether unless it is ‘authorised by Union or Member State law’ (Article 11).
Therefore, under the GDPR, if a data subject files a lawsuit against a data controller or challenges a decision of a Data Protection Authority, (s)he would need to give explicit consent for an automated judicial decision to be issued in those proceedings. This leads to the conclusion that the drafters of the GDPR obviously did not have judicial decisions in mind when regulating automated decision-making; a balanced provision would have required the consent of both parties to the proceedings. In any event, judicial decisions could still benefit from automated decision support, to which the GDPR is not opposed, as Article 22 covers only decisions based solely on automated processing. In practice, this means that, as long as a human judge takes the final decision, automated decision support could help the judge reach it.
The whole debate on artificial intelligence replacing human judges reminds me of Dworkin’s almighty Hercules, a judge so powerful that he was always able to find the ‘right answer’ to a case. However, those of us who do not believe in Hercules, or in only one ‘correct’ decision in a judicial case, would be wary of such an almighty intelligent machine. Judicial decision-making is, after all, a process of balancing, weighing, considering several potential decisions, interpreting and even of creativity where a case is not strictly covered by the black letter of the law. The old Data Protection Directive 95/46 is a typical example of legislation permitting several directions in one and the same case, and of legislation where judges needed to rely on their creative teleological interpretative skills. In the long-lasting debate between Dworkin and Hart on whether there is a ‘right answer’ to a case, I would therefore position myself more on the side of the latter than the former, rejecting the idea of Hercules.
This, however, does not mean that I entirely reject the possibility of using intelligent machines in judicial decision-making. On the contrary. Nevertheless, the use of AI in this process should be limited to what a machine can do: analyse, mine text, group similar decisions and arguments, and make legal predictions on the basis of an analysis of former cases – in other words, help the judge not to overlook a provision, a precedent or an argument. But the decision itself, embedded in the interpretation and the new elements of the case, should ultimately be taken by a judge. To use an example from the Court of Justice of the EU: an easy case before a chamber of three judges, resembling previous decisions of that court, could easily benefit from AI case prediction. A complex Grand Chamber decision such as C‑362/14, Schrems, on the other hand, would ultimately need to be decided by the judges themselves.
As long as AI is neither portrayed nor used as Dworkin’s Hercules, it can be beneficial to the judicial process. But it should never be used to replace the judge.
This blog was written by Maja Brkan and has been published as an editorial to the case note section of European Data Protection Law Review (Issue 2/2017) which we wholeheartedly invite you to read.
Published on Law Blogs Maastricht