If you’ve looked at the legal tech market, you’ve probably run across artificial intelligence (A.I.) pitched as a promised land for legal innovation. To be sure, there is real inefficiency to address: few organizations carry out contracting efficiently and effectively, and some estimate that organizations lose between 5% and 40% on a given deal due to inefficient contracting. But let’s step back for a second. Despite the overwhelming hype around A.I. in the legal tech space, legal professionals are starting to realize there are several real limitations.
Summary of A.I.
Let’s take a moment to demystify what the term “artificial intelligence” means. Artificial intelligence is not a single technology in its own right; rather, it is applied to different functions through applications and software. A.I. is an umbrella term for machines that can complete tasks that would previously have required human intelligence.
One branch of A.I. that is of particular interest in the legal field is natural language processing (NLP). NLP gives a machine the ability to understand human language, along with the sentiments it contains. Software built on this technology requires extensive training, but it can scan vast numbers of documents very quickly, a process that might take a lawyer several days or even weeks. NLP technology can scan, retrieve, and rank documents based on their wording and sentiment. At the end of the day, though, it all comes down to the quality of the data.
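As a rough illustration of ranking documents by their wording, here is a minimal Python sketch using TF-IDF, one of the simplest techniques in the NLP toolbox. It assumes scikit-learn is installed, and the clauses and query are invented placeholders; real legal NLP platforms use far more sophisticated models.

```python
# A minimal sketch of TF-IDF document ranking, assuming scikit-learn is
# installed. All clauses and the query are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "This non-disclosure agreement binds both parties to confidentiality.",
    "The lease term is twelve months with an option to renew.",
    "Either party may terminate this agreement with thirty days' notice.",
]
query = "confidentiality obligations in an NDA"

# Turn documents and query into TF-IDF weight vectors over a shared vocabulary.
vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(documents)
query_vector = vectorizer.transform([query])

# Rank documents by cosine similarity to the query, highest first.
scores = cosine_similarity(query_vector, doc_vectors).ravel()
for score, doc in sorted(zip(scores, documents), reverse=True):
    print(f"{score:.3f}  {doc}")
```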
Through machine learning, programming languages like R and Python can be used to analyze years of data quickly and to correlate patterns between cases and their outcomes. Through this work, the technology can predict, with some degree of accuracy, certain legal outcomes.
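To make that concrete, here is a minimal sketch of outcome prediction with scikit-learn in Python. The features, labels, and the choice of logistic regression are all illustrative assumptions, not a description of any particular product.

```python
# A minimal sketch of outcome prediction, assuming scikit-learn is
# installed. The features, labels, and model choice are all hypothetical.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Invented features per case: [claim amount in $k, prior rulings for the
# same party, contract age in years]; label 1 = favorable outcome.
X = [[50, 2, 1], [500, 0, 5], [20, 3, 2], [300, 1, 4],
     [75, 2, 1], [400, 0, 6], [35, 3, 3], [250, 1, 5]]
y = [1, 0, 1, 0, 1, 0, 1, 0]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Fit on historical cases, then check accuracy on held-out cases.
model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```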
The Limitations
Problem #1—context
Through all the buzz, it’s vital to remember that artificial intelligence is designed to work alongside humans, not instead of them, and is far from perfect. Lawyers often need to process a complex set of facts and circumstances, such as the economic factors around a deal, commodity prices, leverage, and the best course of action given everything else at play. Although machine learning can form an opinion based on past experience, it can’t accurately weigh the issues at hand. There just isn’t enough data.
In other words, it’s simply too difficult for technology to be trained on how to deal with every nuance that comes with the legal profession.
Beyond the problem of context, law can be very subjective and relies on creativity, imagination, and innovation. If a required provision needs to be added to a contract, a machine would not necessarily know where to add it without human intervention.
Problem #2—data
There is some truth to the claim that A.I. is an overhyped buzzword, and much of the hype comes from focusing on the “shiny” end results of what it can do. What’s sometimes overlooked is the principle of “quality in, quality out”: the end result is only as good as the data initially fed into the technology.
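Here is a minimal sketch of “quality in, quality out” in action, assuming scikit-learn and NumPy are available. It trains the same model on clean labels and on labels with a fraction deliberately corrupted; the synthetic dataset stands in for real contract or case data.

```python
# A minimal sketch of "quality in, quality out", assuming scikit-learn and
# NumPy. The same model is trained on clean labels and on labels with 30%
# deliberately flipped; the synthetic data stands in for real contract data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Corrupt 30% of the training labels to simulate poor-quality input.
rng = np.random.default_rng(0)
noisy = y_tr.copy()
flip = rng.random(len(noisy)) < 0.30
noisy[flip] = 1 - noisy[flip]

clean_acc = LogisticRegression().fit(X_tr, y_tr).score(X_te, y_te)
noisy_acc = LogisticRegression().fit(X_tr, noisy).score(X_te, y_te)
print(f"clean labels: {clean_acc:.2f}  |  30% flipped labels: {noisy_acc:.2f}")
```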
Historically, contracts have come in endless formats, styles, and layouts. Structuring all of this past documentation into a form the technology can use and understand could take a machine many years. Beyond that, there will most likely always be future contracts drafted in yet other styles, meaning an out-of-the-box A.I. platform might not be sufficient in a number of cases.
Outside of contract review and research, there are even more complex tasks in which it’s thought A.I. may never catch up with humans, a key one being contract negotiation.
For legal work, A.I. platforms require a vast amount of historical data before they can start gaining a level of experience and learning how to make certain decisions. For some firms, this could mean years of data cleansing to get everything in order before they even begin to look at adopting new technology. For example, imagine your A.I. software is analyzing data based on just five years of information. Is that really enough to make a sound and defensible legal decision?
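A learning curve makes the point about data volume visible: it measures held-out accuracy as the training set grows. This sketch assumes scikit-learn, and the synthetic dataset is a stand-in for a firm’s historical records.

```python
# A minimal sketch of a learning curve, assuming scikit-learn is installed.
# Held-out accuracy is measured as the training set grows; the synthetic
# dataset is a stand-in for a firm's historical records.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
sizes, _, test_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y,
    train_sizes=[0.1, 0.25, 0.5, 1.0], cv=5)

for n, scores in zip(sizes, test_scores):
    print(f"{n:5d} training examples -> mean accuracy {scores.mean():.2f}")
```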
Problem #3—human examination
Even with the best data and context, contracts are complex documents containing obligations, outliers, and countless nuances. What happens if the algorithm is built for NDAs and the contract states, “Jan will buy Chris a car”? The automation of data analysis and contract negotiation absolutely does not eliminate the need for human examination. As with any business process, the technology sits alongside companies and professionals rather than replacing them.
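One common safeguard is to flag language that looks nothing like what the system was built for and route it to a human. Here is a minimal sketch of that idea, assuming scikit-learn; the clauses, the incoming sentence, and the similarity threshold are all hypothetical.

```python
# A minimal sketch of flagging out-of-scope language for human review,
# assuming scikit-learn. The NDA clauses, the incoming sentence, and the
# similarity threshold are all hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

known_nda_clauses = [
    "The receiving party shall keep all confidential information secret.",
    "Confidential information excludes publicly available material.",
    "These obligations survive termination of this agreement.",
]
incoming = "Jan will buy Chris a car."

vec = TfidfVectorizer(stop_words="english")
known = vec.fit_transform(known_nda_clauses)
score = cosine_similarity(vec.transform([incoming]), known).max()

# If the clause looks nothing like known NDA language, escalate to a human.
if score < 0.2:  # hypothetical threshold
    print(f"similarity {score:.2f}: route to human review")
else:
    print(f"similarity {score:.2f}: within expected NDA language")
```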
Yugoslav diplomat Ivo Andric outlined a number of qualities a good negotiator needs, qualities A.I. would have to learn in order to be on par with humans when it comes to diplomacy or negotiation. Until these are part of the technology, it won’t pose a threat to human negotiators. These include:
- Coming across as self-assured, not arrogant
- Having empathy (not being heartless)
- Being conscientious in everything they do
- Maintaining a varied interest in people and their passions without being overzealous
All of these are conscious emotions and behaviors that are complex and difficult for a machine to learn. As long as machines equipped with artificial intelligence don’t start developing human emotions, they don’t pose such a threat.
Artificial intelligence has its place in business, including in the legal profession, but it also has limitations. Using machines equipped with A.I. might reduce the amount of time it takes to do mundane and time-consuming tasks by organizing information and predicting results. However, the limitations present in A.I. mean there will always be a need for humans to do the research and make decisions.