Ewa FABIAN: What if robots handle it better than lawyers?

Lawyer, graduate of the University of Warsaw, with experience at international law firms and the SCC Arbitration Institute

Richard Susskind, author of the book “The End of Lawyers?: Rethinking the Nature of Legal Services”, has also written about robots: “The word ‘robot’, derived from the Czech word robota, meaning ‘drudgery’ or ‘servitude’, is of more recent origin, first used in 1921, in a play, R.U.R., by the Czech author Karel Čapek”[i]. In this science-fiction play, Čapek not only coined a word we can no longer live without, but also anticipated the trend of describing new technologies with acronyms. Less than a century has passed, and technology is still surrounded by the same aura of mystery as when it existed mainly in the human imagination.

Perspectives for the development of artificial intelligence

It will be harder and harder for us to keep up with the development of artificial intelligence. At international technology conferences we hear claims that by 2035 artificial intelligence (AI) will match human intelligence, and by 2045 the intelligence of all mankind combined. At the much-discussed Google I/O 2016 conference this May, Google presented its newest technologies and products to an army of app developers. The presentation went along these lines: we already have the best voice recognition software, translator and search engine, so we are only one step away from integrating the system and starting to talk to it – asking and searching by voice, across the entire network or just our own files, and requesting “intelligent” narrowing of search results and solutions based on our preferences. In this spirit we are being offered, among other products, Google Assistant and Google Home, adding to the Internet of Things trend.

In March of this year, AlphaGo, a program developed by Google DeepMind, defeated the Go master Lee Sedol. This is said to be a greater success than Deep Blue’s (IBM) victory over Garry Kasparov in 1997, because the Chinese game of Go allows far more possible combinations than chess and is therefore far less predictable. Warning lights should now be flashing – above all, how could I write that a computer “won” the game? And if it won, what of it?

I am, however, less interested in the ambition and self-esteem of geniuses than in the legal significance of a robot equipped with artificial intelligence that is able to learn, for that is what we are now dealing with. Will the robot be liable for damage? Will it have an estate from which it could cover damages and which it would manage autonomously? Will it take part in negotiations and conclude a settlement? Or perhaps it will adjudicate a dispute instead of a judge (as in so-called Online Dispute Resolution (ODR) systems)? Will it insure itself? Will a robot car cause fewer accidents than a car driven by a human? Will a robot provide better legal counsel? Will it prove a better surgeon? Will it be a more honest accountant or banker?

Issues and legal solutions

Lawyers examining the essence of creativity have already presented their opinions on artificial intelligence (J. Barta, R. Markiewicz). A work may not be the direct outcome of human activity: it may arise from the use of a machine or an algorithm, result partially or completely from chance, or come about through an uncontrolled or untrained animal. It is assumed that human control (at the very least, the specific assumptions of an algorithm) is necessary for copyright to arise; only a human being can be an author in that sense. However, the professors do not rule out that “the development of so-called artificial intelligence or the application of biological computers using DNA or RNA molecules will force the need to revise views about the nature of creativity, yet this would require overcoming the principle that creative intellectual activity is appropriate only to a human being”.[ii] That moment has not yet arrived – or has it? Thanks to reflections developed against the background of such “soft” areas of law as copyright, we are closer to understanding where to draw the line between the situation in which human control over a machine is sufficient (and treated, for example, as the causal link between conduct and the resulting damage) and the situation in which such control no longer exists.

The emancipation of a machine

When that link does not hold because human control is less obvious, we may speak of the autonomy of the machine and of a potential legal void in areas such as liability for damage. Under current law, neither a robot nor a supercomputer has legal capacity (and thus neither rights nor obligations), or even a surrogate of it. An autonomous machine may generate a decision that causes damage, and lawyers will then have to take part in meticulous court proceedings, commission expert opinions for astronomically high fees and wait years for a ruling – all to answer the question of who is liable: the owner of the machine, the manufacturer, the software developer (and of which software), or someone else. Is tackling the matter this way sufficiently cost-effective and socially acceptable, especially for technologies provided to the mass consumer?

Krzysztof Wojdyło (Wardyński i Wspólnicy) recommends amending the law to adapt the rules of liability to damage caused by autonomous robots, preferably in the form of strict liability (with a high standard of protection for the injured), designed in a way that minimizes doubts as to who is liable. He also suggests a register of robots together with compulsory insurance against civil liability for damage caused by robots (analogous to vehicles), threshold requirements for algorithms installed in the operating systems of autonomous robots, minimum safety requirements for those operating systems, mechanisms to detect possible changes in operating systems, and restrictions on the procurement of autonomous robots (Robotics, report by the law firm Wardyński i Wspólnicy, April 2015).

More radical ideas are appearing across the Atlantic: would it not be simpler if robots had some form of legal personality? David C. Vladeck argues that legal entities were granted legal capacity for a reason, so would it not be easier to allow autonomous robots to be sued before a court of law? If a machine is capable of learning, for how long does it remain under the control of its designers?[iii] Considering this, one may think of such acclaimed films as “2001: A Space Odyssey” or “Blade Runner” – stories about machines which, in spite of all safety mechanisms, turn out to be capable of killing and (in the latter case) of higher emotions.

Autonomous cars, unmanned drones, autopilot

The issue of ethics also arises in the context of autonomous cars. How should a machine be programmed to behave in a dramatic situation in which a choice between one life and another has to be made? Humans, in principle, cannot be programmed for such circumstances; the outcome is usually the result of coincidence, sometimes instinct, a decision made in a fraction of a second. Machines, on the other hand, could theoretically use that time to carry out calculations. This provokes deeper philosophical considerations, which we sometimes encounter when analyzing the issue.

However, much seems to indicate that machines would simply be better drivers than humans: they don’t drink, don’t get tired, don’t fall asleep and are not distracted by emotions. If the law were to allow the use of autonomous cars, who knows, perhaps insuring such cars would be cheaper than insuring cars driven by humans? At present, however, an autopilot is permitted only to some extent in aviation (unmanned drones, airplane autopilot), and the first rulings concerning liability for faults in such systems have already been issued (Richardson vs. Bombardier).


For some time now, international corporations have been investing in technology that will change the lives of lawyers and other consulting professions. Systems able to analyze large databases – documents, emails and other unstructured data – for use in court, regulatory proceedings or due diligence already exist (e.g. eBrevia). These systems may also be deployed preventively, discovering potential problems in a company before any litigation, inspection, investigation or transaction takes place. The high costs of developing and operating such systems are not an obstacle for companies where the early discovery of alarming information helps maintain a strong market position. Of course, there is also the question of whether such technologies comply with personal data protection or labor law, but that is a separate subject.
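As a minimal illustration of the simplest layer of such document review – keyword-based filtering and ranking – the sketch below shows the basic idea. The documents, keywords and scoring function are invented for this example; real systems such as eBrevia work in far more sophisticated ways.

```python
def score(document: str, keywords: list[str]) -> int:
    """Count keyword occurrences in a document (case-insensitive)."""
    text = document.lower()
    return sum(text.count(k.lower()) for k in keywords)

def filter_documents(documents: dict[str, str], keywords: list[str]) -> list[str]:
    """Return the names of documents mentioning any keyword, most relevant first."""
    hits = {name: score(text, keywords) for name, text in documents.items()}
    return sorted((name for name, s in hits.items() if s > 0),
                  key=lambda name: -hits[name])

# Hypothetical document set for the sketch.
docs = {
    "email_001": "The merger agreement requires board approval.",
    "memo_017": "Lunch schedule for next week.",
    "email_042": "Board discussed the merger; approval is pending review.",
}
print(filter_documents(docs, ["merger", "approval"]))  # memo_017 is filtered out
```

Even this toy version shows why such tools scale well: the per-document work is mechanical and parallelizable, which is exactly what makes human-only review of millions of documents uneconomical.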

However, we are not talking only about filtering documents based on keywords or relationships between words. A system that assists lawyers at the preliminary stage of analyzing legal problems, researches legal databases and generates proposed solutions has already been built on the Watson supercomputer (IBM). The system, named ROSS, is already “employed” by at least one law firm (Baker & Hostetler), specifically for bankruptcy matters. The program is also able to learn.

ROSS will most likely not be the only system assisting law firms. In 2015, the international law firm Dentons backed NextLaw Labs, which will attempt to integrate modern technology with legal practice; the partner in this undertaking is, once again, IBM. Will technology able to replace lawyers in the preliminary analysis of large amounts of text (at present usually carried out by people using ever better legal databases) reduce the demand for people practicing law? It is hard to predict, given the number of legal problems that may arise as autonomous technology enters our lives…

The future

Artificial intelligence is slowly becoming a fact, and regulating it will undoubtedly require changes in the law, preferably at an international level. Technology, though, faces far fewer restrictions than law, and lawyers are much more attached to conservatism (in a rather positive sense). But there is no need to be pessimistic. If people cannot decide where to place learning machines within the framework of the law, maybe machines will do it for us?

Ewa Fabian

[i] “The word ‘robot’, derived from the Czech word robota, meaning ‘drudgery’ or ‘servitude’, is of more recent origin, first used in 1921, in a play, R.U.R., by the Czech author Karel Čapek.” R. Susskind, The Future of the Professions: How Technology Will Transform the Work of Human Experts (author’s translation).
[ii] J. Barta (ed.), System Prawa Prywatnego. Prawo autorskie, Warsaw 2013, p. 82.
[iii] D. C. Vladeck, Machines Without Principals: Liability Rules and Artificial Intelligence, Washington Law Review, vol. 89, p. 117, at p. 124.

This content is protected by copyright. Any further distribution without the author’s permission is forbidden. 22/06/2016
Photo: Shutterstock