Artificial Intelligence: Risk or Not

In case you missed it, 2014 is being hailed as the year we awoke to artificial intelligence as a risk to humanity. Dr. Stephen Hawking and Elon Musk both made very strong statements (and Musk tweeted) about the potential risk, and those statements were widely repeated in the press. As you can imagine, many stories followed in which one or both were featured as authoritative sources. They are not alone (nor was 2014 the first year in which this potential risk was raised).*

What relevance does all this alarm raising have for lawyers? Well, quite a lot actually. Before we cover those topics, let’s narrow the scope of what we mean by AI. Though AI work has progressed significantly in the last 50 years, and especially in the last 10 years, we are still far from having a computer with an intelligence level equal to a human’s (a level commonly called artificial general intelligence or AGI). Estimates of when we will see AGI vary from as soon as the next 10 years to as far off as 2100 (or even never). Many of the estimates place the date somewhere between 2030 and 2050. So, for this post, I am talking broadly about computer intelligence at its current level through the moment when we achieve AGI. I’ll use the abbreviation AI to cover this range of software. Most notably, I’ve excluded artificial superintelligence (ASI), which refers to software that exceeds the level of human intelligence.

AI and Lawyers

We can break the AI risk topics of interest to lawyers into two categories. In the first category, we have AI as it impacts legal services delivery. In the second category, we have AI as it impacts substantive law.

Let’s talk about the substantive law category first. AI raises many substantive law questions. For example, we now have software that can write new code or re-code itself. As the software does so, who is responsible for what the software does? We also have software interacting with other software, and doing so in ways that humans can’t follow. That is, we can’t reverse engineer what happened when something goes wrong. Who is responsible when something does go wrong? As we let computers make decisions that humans once made, and as the computers make those decisions not by following programs humans wrote but by developing their own programs, what happens to the concept of causality? How do we handle situations where the computer software resides outside the country where the harm occurred? The list of questions is long and the questions are complicated, and we are only beginning to work through what to do.

The second category is legal services delivery. As computers become more powerful, we will transfer some work done by lawyers to computers. What will we mean when we say lawyers must supervise the software? How does a lawyer effectively supervise a program that can analyze millions of cases and articles to provide a suggested course of action? What role will lawyers play as software takes over steps they once performed? Is there a line to draw between humans practicing law and computers practicing law?

Hawking, Musk and others are raising alarms about the biggest question: what happens when AI equals or even exceeds human intelligence? They posit that AGI-enabled computers could (without proper controls) take over the world from humans. The most extreme risk from AI is called “existential risk,” a world in which computers eliminate the human race. For some, these are scary stories not based in reality. In his recent post, Alan Rothman summarizes three recent articles that suggest the fears of AI taking over the world are not well founded, or at least are overstated. Perhaps the best-known supporter of AI is Ray Kurzweil, a prolific inventor, entrepreneur, visionary, longtime fan of AI, and currently a Director of Engineering at Google, where he heads up a team developing machine intelligence and natural language understanding. Kurzweil looks forward to a world where AI works in harmony with humans, greatly enhancing our existence.

Lawyers Should Play an Active Role in Addressing AI Issues

Wherever you fall on the spectrum of those who believe or don’t believe in AI risks, it is clear that 2014 has been the year when the issue was brought to the forefront. From my perspective, lawyers should play an active role in analyzing the issues and devising solutions. Right now, as many have noted, lawyers are almost absent from the discussion. As Judge Richard Posner pointed out in his 2004 book Catastrophe: Risk and Response, lawyers should not leave these issues to others. Rather, Judge Posner said (Catastrophe, Kindle Edition at Loc. 122-127):

The challenge of managing science and technology in relation to the catastrophic risks is an enormous one, and if it can be met it will be by a mosaic of institutional arrangements, analytical procedures, regulatory measures, and professional skills. I am particularly interested in determining the positions that law, policy analysis, and the social sciences should occupy in that mosaic. At present, none of these fields, with the principal exception of economic analysis of global warming, is taking the catastrophic risks seriously and addressing them constructively.

If 2014 is the year the AI issues were publicly raised, then 2015 should be the year lawyers become engaged.


* A list of scientists and others raising concerns is far too long to include in a post, but it includes: Stuart Russell (Professor of Computer Science and Professor of Engineering, University of California, Berkeley), Max Tegmark (Professor of Physics, MIT), Frank Wilczek (Professor of Physics, MIT, and 2004 Nobel Laureate in Physics), and Nick Bostrom (Professor, Faculty of Philosophy & Oxford Martin School; Director of the Future of Humanity Institute; Director of the Programme on the Impacts of Future Technology, University of Oxford). If you are interested in AI and the arguments about it as an existential risk, check out:

Cambridge Centre for the Study of Existential Risk

Future of Humanity Institute

Machine Intelligence Research Institute

Future of Life Institute

Singularity University