By Steven Fettig. Taken at Meijer Gardens and Sculpture Park, Grand Rapids, Michigan.

In 2002, reporters asked Secretary of Defense Donald Rumsfeld a question at a U.S. Department of Defense news briefing. In answering, he set out a taxonomy that has become a popular way to catalogue our state of knowledge. In the Rumsfeld Taxonomy, there are things we know, things we don’t know, and things we don’t know we don’t know. In the Secretary’s words, “There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don’t know. But there are also unknown unknowns.” The last category, as Secretary Rumsfeld noted, is the most intriguing.

Scientists Discover Tacit Knowledge

Michael Polanyi was a Hungarian physical chemist. He studied in Budapest and Karlsruhe, Germany, but World War I interrupted his studies. He served as a medical officer during the war and, during a period of sick leave, managed to write his PhD thesis (encouraged by Albert Einstein). He received his PhD from the University of Budapest after the war.

After teaching for years in Hungary, he emigrated to Germany and then found his way to the University of Manchester. With the turmoil in Europe, his interests had shifted from chemistry to economics. The University accommodated the shift by creating a chair for him in Social Science, which he held until he retired from his distinguished career in 1958.

Years before he retired, Polanyi gave the Gifford Lectures at the University of Aberdeen. He published a revised version of his Lectures in 1958 as the book Personal Knowledge. In the Lectures and the book, Polanyi argued that all knowledge relies on personal judgments; that is, one cannot reduce knowledge to a set of rules. Polanyi’s views countered those of his friend Alan Turing and were the basis for some early critiques of work in artificial intelligence.

Polanyi extended this idea of personal judgments to a concept he called “tacit knowledge.” According to Polanyi, we experience the world both through sensory data and through other knowledge—tacit knowledge. Tacit knowledge includes things we aren’t aware we know, but which play an essential role in our lives and work.

Polanyi’s ideas have been the subject of much research. That research helped explain a problem that has bedeviled scientists for many years. As any high school student who has taken a science class knows, one of the bedrocks of science is the repeatable experiment. Scientist A conducts an experiment that yields results meeting a basic significance test. She publishes her results in a journal. Scientist B wants to extend Scientist A’s work. To get started, Scientist B tries to replicate Scientist A’s results. B runs the experiment as described in the journal article, but gets results different from A’s. Were A’s results a fluke? Were B’s? After many attempts, B and scientists C, D, and E are unable to repeat A’s results. Now what?

At first you might think such an outcome uncommon. Scientists publish in peer-reviewed journals. We assume that by the time an article makes it into print, the results it reports aren’t a fluke. Scientist A may have repeated her experiment several times before publishing to make sure her first results were not a fluke. The peer reviewers would catch any flaws in what she did. The data is public. So, absent fraud, we think A’s results are reliable. In fact, scientists still struggle with unrepeatable results. Why can’t anyone repeat them?

This is where Polanyi’s theory comes into play. Under the tacit knowledge theory, the steps in the journal article are not sufficient for other scientists to replicate the experiment. The missing element is tacit knowledge. In the case of A’s research, she has some tacit knowledge necessary to make the experiment work. The gap goes beyond a failure to write detailed instructions; it includes knowledge she cannot articulate.

Science and the Unknown Unknowns

It is the time of the Cold War. Russian researchers led by Vladimir Braginsky at Moscow State University are working on ways to detect and measure gravitational waves. Measuring these waves is a big deal—you may recall seeing articles in 2016 describing how scientists had, for the first time, detected gravitational waves. Albert Einstein had predicted such waves a century earlier.

The Russian researchers’ instruments used sapphire mirrors. Every little thing mattered in the search for gravitational waves, including the quality (“Q”) of the sapphire used in the mirrors.[1] The Russian researchers claimed to have measured a new, high quality level for their mirrors, something of great interest to those searching for gravitational waves. But, despite their best efforts, researchers at major universities including Caltech, Stanford, Perth, and Glasgow could not match the Russians’ results.

Since it was the Cold War, many were skeptical that the Russians had achieved what they said. As the years passed and no one could repeat the results, the skepticism grew. By 1998, the Cold War was over. Scientists from Glasgow University visited Moscow State University to learn how the Russians had managed to measure the impressively high Q.

After a week, the Glasgow scientists trusted the Russian scientists. With distrust out of the way, the Glasgow scientists focused on what the Russians were doing. It turned out there was a lot to know beyond what the journal article said.

Remember, the equipment is very sensitive. Construction and technique play critical roles in the measurement process. This was where the Russians had tacit knowledge. The Glasgow scientists learned how to suspend the sapphire, what to use (a certain silk thread from China worked best), the best length for the suspension thread, the most efficient way to create a vacuum for the test, and many other factors. They also learned patience. The Russian scientist doing the experiments would re-run the same experiment over many days, making minute adjustments, before he would accept the results.

Some changes had explanations. But for many, the answer was akin to the famous dictum from Supreme Court Justice Potter Stewart writing about pornography: “I know it when I see it.”[2] The Russian scientist could not articulate what he needed to do; he just knew when he had to adjust the apparatus or run the experiment another time.

AI, Law, and The Tacit Knowledge Risk

As we see the earliest incremental steps of artificial intelligence creeping into law, we should ask whether tacit knowledge plays a role in the legal universe. It is easy to be dismissive and argue no (though I suspect lawyers will try to answer yes). Law is not an “exact” science like physics. The steps that physicists outside of Russia missed when trying to replicate the Q experiments were in many cases matters of omission. Had the Russians given long and detailed explanations of everything they did, the other scientists might have replicated the experiment.

If we push a bit further, the “yes” answer gains currency. Harry Collins has written extensively on tacit knowledge. In Tacit and Explicit Knowledge, the third book of his non-fiction trilogy studying knowledge “top to bottom,” he developed a “Three Phase Model” for tacit knowledge: relational, somatic, and collective. Relational addresses the “contingencies of social life,” somatic the “nature of the human body and brain,” and collective “the nature of human society.” Without delving into the Model, we can see that tacit knowledge includes more than what our senses tell us; it includes much of what is going on around us.

In law, we moved from formalism to realism at the beginning of the 20th century (pragmatism never caught on). What lawyers and judges did involved something beyond formalism. Looking at the facts, reading cases and statutes, and applying the latter to the former was necessary, but not sufficient. The process needed an additional something, and that something came from experience, both lifelong and current. Reading the cases or statutes applicable to a set of facts did not give you all you needed to “apply the law.”

The tacit knowledge concept puts a name to what many lawyers try to articulate when they say we need lawyers. Sending a computer to law school, where it learns the theory and rules of law, is not sufficient to give us a practicing lawyer. Even having the computer read all the decisions of all the courts, study the hornbooks, and peruse law review articles falls short. The computer may learn what is in print, but it will not learn the “unknown unknowns.” It will not learn what the lawyer or judge omitted from the papers. As important, it won’t know what it doesn’t know.

Tacit knowledge plays a role in shaping the biases and heuristics that Daniel Kahneman brought to our attention in behavioral economics. A judge deciding a case employs those biases and heuristics as she applies law to facts. To claim otherwise is to argue that judges are not human. But where does this knowledge take us?

Consider tacit knowledge along with artificial intelligence. AI uses machine learning. Imagine we gave AI software all of the cases ever decided involving securities law. We gave the same computer all the law review articles written, all the books published, and any other written thing we could find. The AI used machine learning to scour the materials for patterns. It found things we knew and some “patterns” we didn’t know. But are the new patterns correct? And, what about everything that wasn’t written down?
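
To make the thought experiment a little more concrete, here is a minimal sketch of what “scouring the materials for patterns” might look like: unsupervised topic extraction over opinion text with scikit-learn. The tiny corpus, the snippets of opinion language, and the number of patterns are placeholders invented for illustration; this is not a description of any real legal AI product.

```python
# Toy sketch of unsupervised "pattern finding" over written opinions:
# vectorize the text, factor it into a few topics, and list the top terms.
# The corpus below is a placeholder; a real run would load thousands of opinions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

opinions = [
    "The registration statement omitted material facts about the offering ...",
    "Scienter requires intent to deceive, manipulate, or defraud ...",
    "Applying the Howey factors, the note was an investment contract ...",
    # ... in practice, every securities opinion we could find
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(opinions)           # documents x terms matrix

nmf = NMF(n_components=2, random_state=0)   # number of "patterns" chosen arbitrarily
doc_weights = nmf.fit_transform(X)          # how strongly each opinion expresses each pattern

terms = tfidf.get_feature_names_out()
for i, topic in enumerate(nmf.components_):
    top_terms = [terms[j] for j in topic.argsort()[-5:][::-1]]
    print(f"pattern {i}: {', '.join(top_terms)}")
```

Whatever such a model surfaces, it surfaces only from the text it was given. The patterns live entirely in what was written down, which is exactly where the tacit knowledge problem begins.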

AI software stumbles when it comes to certain challenges, and law can magnify them. Writing quality varies widely among judges. On a good day, judges may omit essential information from their opinions. On a bad day, they also omit logic. AI will have difficulty inferring what is missing. If 1,000 cases lack the same information, AI may find the pattern; if only one case lacks the information, AI can’t. Another challenge is deciding what weight to assign each fact. The judge may list 10 facts, but not the importance of each fact to the outcome. Facts change from case to case, so finding a pattern is difficult.
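
The weighting problem can be made concrete in the same hedged spirit. In the sketch below, each case is reduced to a few hand-coded facts and a simple model learns a weight for each; the fact names, outcomes, and numbers are all invented. The point is the limitation, not the method: a factor the judge relied on but never recorded has no column in the data, so no amount of fitting can recover its weight.

```python
# Toy sketch: learn "weights" for case facts from outcomes. All data is fabricated.
import numpy as np
from sklearn.linear_model import LogisticRegression

fact_names = ["prior_violations", "large_investor_losses", "full_disclosure", "cooperation"]
facts = np.array([
    [1, 1, 0, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 1],
    [0, 0, 1, 1],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
])
outcomes = np.array([1, 1, 0, 0, 1, 0])   # 1 = liability found, 0 = none

model = LogisticRegression().fit(facts, outcomes)
for name, weight in zip(fact_names, model.coef_[0]):
    print(f"{name:>22}: {weight:+.2f}")

# Any factor the judge weighed but never wrote down is simply absent here,
# and a single opinion that omits a recorded fact looks like just another row.
```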

Think of a decision involving a criminal sentence. Case law requires that judges list the factors that played a role in sentencing. Most do, but some omit some or all of the factors they considered. The software may see a factor mentioned in the case and incorrectly conclude the judge relied on it. The judge may have used her experience to weight recidivism risk factors when deciding what support services the defendant would receive, but never mentioned that experience in the opinion. Tacit knowledge plays a role in judicial decisions.

When we introduce AI into law, we need to ask what happens to tacit knowledge. If we think of AI as just doing a better job finding things, then we can argue it has little to no impact: AI finds cases faster than a person, but the person still reads and interprets the cases. But how does the AI decide which cases to select, compared with how a human would? Would a person have selected a case, even though ambiguous, because it gave hints about new directions to pursue?

I am not pretending to answer the tacit knowledge question in this article. But I think we must ask the question as we expand our use of proto-AI and AI technologies. The question may not be what we found, but what we missed.

[1] The quality or Q factor of a material measures the rate at which its resonances decay. Think of a bell: you ring it and it takes time for the ringing to subside. The longer it takes, the higher the Q. The scientists wanted “high Q” sapphire, and the Russians had measured a Q of 4 × 10⁸.
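To put a rough number on the bell analogy: under the standard definition, the stored energy decays as E(t) = E₀·e^(−ω₀t/Q), so the ringdown time is roughly τ ≈ Q/ω₀. At an illustrative resonance frequency of 10 kHz (an assumed value for the sake of arithmetic, not one reported for these experiments), a Q of 4 × 10⁸ gives τ ≈ (4 × 10⁸) / (2π × 10⁴ s⁻¹) ≈ 6 × 10³ seconds, a “bell” that keeps ringing for well over an hour.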

[2] Jacobellis v. Ohio, 378 U.S. 184 (1964).