
Artificial intelligence (AI) is quickly gaining popularity because it can classify and organize data and complete all kinds of tasks in faster and cheaper ways. In the legal tech space, AI is typically used interchangeably with machine learning, the process by which computers seek out and recognize patterns in large datasets in order to evaluate them. This has applications in the e-discovery, litigation, and transactional spheres.
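For the technically curious, the kind of pattern recognition involved can be illustrated with a toy sketch. The snippet below (hypothetical document snippets, standard Python only) groups documents by shared vocabulary using cosine similarity, the same basic signal that far more sophisticated commercial tools build upon:

```python
import math
from collections import Counter

def vectorize(text):
    """Represent a document as bag-of-words term frequencies."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-frequency vectors (0 to 1)."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical documents: two related court opinions and one unrelated filing.
docs = {
    "opinion_1": "the court granted the motion to dismiss the complaint",
    "opinion_2": "the motion to dismiss was granted by the district court",
    "filing_1":  "quarterly earnings rose on strong software revenue",
}
vecs = {name: vectorize(text) for name, text in docs.items()}

# The two opinions share vocabulary, so they score as similar;
# the unrelated filing shares no words with opinion_1 and scores zero.
print(cosine(vecs["opinion_1"], vecs["opinion_2"]))  # high (~0.8)
print(cosine(vecs["opinion_1"], vecs["filing_1"]))   # 0.0
```

Real systems use richer representations than raw word counts, but the underlying idea is the same: patterns in the data, not human-drafted rules, determine which documents cluster together.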

At Bloomberg Law, when developing AI tools, we begin by trying to identify the problems that our clients are facing. We then explore how our premier access to big datasets—like court opinions, dockets, or EDGAR filings—can be augmented by leveraging AI tools to better understand and analyze what's available.

It's worth noting that the amount of available data is multiplying at an exponential rate, making the need for AI solutions ever more pressing. A 2013 article in Science Daily reported that 90 percent of the world's data had been generated over the preceding two years. While this sounds impossible at first blush, consider how many e-mails, text messages, and social media posts are produced each day. This behavioral shift has implications when exploring the limits of discoverability.

While AI is starting to be discussed more in the legal industry, much still remains unknown or misunderstood—and that has led to fears and myths that are largely unwarranted.

Myth 1: AI Will Make Lawyers Obsolete

While there are plenty of articles warning about “robot lawyers,” the truth is that lawyers' jobs are not in jeopardy from AI technologies. In fact, lawyers should think of AI as a way to gain access to big data and use it to make better decisions, create actionable intelligence, and tell better stories. With AI, lawyers can spend more time on work that is intellectually stimulating and challenging, such as strategy and business development, and less time bogged down in the tedium of document review. For these reasons, some people say that “AI” should stand for “augmented intelligence” rather than artificial intelligence.

E-discovery has long been the leading edge of technology adoption in the legal space. Today's junior associates conduct document review aided in large part by technology assisted review (TAR) and predictive coding tools. Not only can documents be threaded, batched, and encrypted, but they can also be searched more efficiently, because a computer can learn relevance and consequently identify concepts rather than mere keywords.
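To illustrate what “learning relevance” means in predictive coding, here is a minimal, hypothetical sketch (standard Python only, not any vendor's actual implementation): a naive Bayes model is trained on a handful of reviewer-coded documents and then scores new documents for likely relevance.

```python
import math
from collections import Counter

class NaiveBayesRelevance:
    """Toy predictive-coding model: learns word patterns from
    reviewer-labeled documents and scores new ones for relevance."""

    def __init__(self):
        self.word_counts = {"relevant": Counter(), "not_relevant": Counter()}
        self.doc_counts = Counter()

    def train(self, text, label):
        """Record word frequencies for a reviewer-coded document."""
        self.doc_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def score(self, text):
        """Log-odds that a document is relevant, with Laplace smoothing.
        Positive means predicted relevant; negative means not relevant."""
        vocab = set(self.word_counts["relevant"]) | set(self.word_counts["not_relevant"])
        n_rel = sum(self.word_counts["relevant"].values())
        n_not = sum(self.word_counts["not_relevant"].values())
        log_odds = math.log(self.doc_counts["relevant"] / self.doc_counts["not_relevant"])
        for w in text.lower().split():
            p_rel = (self.word_counts["relevant"][w] + 1) / (n_rel + len(vocab))
            p_not = (self.word_counts["not_relevant"][w] + 1) / (n_not + len(vocab))
            log_odds += math.log(p_rel / p_not)
        return log_odds

model = NaiveBayesRelevance()
# Hypothetical seed set coded by human reviewers.
model.train("merger negotiation term sheet confidential", "relevant")
model.train("deal terms confidential board approval", "relevant")
model.train("lunch order for friday team outing", "not_relevant")
model.train("parking pass renewal reminder", "not_relevant")

print(model.score("confidential term sheet for the merger"))  # positive
print(model.score("friday lunch reminder"))                   # negative
```

Production TAR systems are far more sophisticated, but the workflow is the same: reviewers code a seed set, the model generalizes from those patterns, and its scores prioritize which documents humans look at next.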

To be fair, certain parts of a lawyer's job, especially the more mechanical or rote pieces, will in time be outsourced to technology. But there is minimal cause for concern: clients are increasingly pushing back on paying for work they deem manual or unsophisticated, and both the billing model and the legal industry are changing. A lawyer who uses technology to curb the time spent on tactical work can offer clients more transparency and confidently aver that billable time is spent doing “real lawyer work.”

Myth 2: AI Will Eradicate Human Error

In an ideal world, developers and data scientists would build tools free from all errors. In reality, no technological solution is perfect, nor does any machine exist that can eliminate human error. AI tools can unearth details that human eyes would miss, and they can process information far faster than human beings can. The most successful offerings typically combine automated and human elements. For example, the AI tools in development and production at Bloomberg Law include human quality assurance reviews that spot-check output at various points in the process to ensure it is as accurate as possible.

Myth 3: AI Will Complicate the Legal Ethics Arena

This myth rings true, as the AI revolution will affect legal ethics in a number of ways. For example, AI tools require data, so organizations seeking to use them will need to affirmatively convert more of their information into digital formats. Confidentiality concerns may follow, as those data will be processed by third-party providers and may be vulnerable to discovery. For example, in order to develop a tool that predicts attorneys' fees, the machine needs a baseline of attorneys' fees, inputted by attorney end-users, from which to identify trends and draw conclusions.

There is also a potential ethical question around assigning responsibility when an AI tool makes an error, for example, by missing a key document in review or predicting a litigation outcome that does not come to pass. Lawyers should arguably already be comfortable with the unpredictability of the legal system, since even binding law can be interpreted differently by different judges; now attorneys will need to make their clients comfortable with the uncertainty of data interpretation as well.

To date, 28 states have adopted the duty of technology competence set forth by the ABA Model Rules of Professional Conduct, requiring that lawyers “keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology, engage in continuing study and education and comply with all continuing legal education requirements to which the lawyer is subject [emphasis added].” As AI becomes more ingrained in legal technology and research tools, lawyers will therefore have an ethical obligation to have at least a passing understanding of it.

Myth 4: Only Attorneys with a Technical Background or Big Budget Can Use AI

This is easily debunked: anyone using tools like Siri, Alexa, or Google is already using AI. Similarly, some AI tools are being baked into the legal products that firms and companies already use.

Myth 5: AI Is a Magic Bullet that Will Find Everything You Need

In certain ways AI is more of an art than a science, especially when it comes to natural language processing, because of the nuances in the way different people write, whether a judge drafting an opinion or an attorney filing a brief.

Although AI is not a perfect solution, it can, at a minimum, provide directional guidance. For example, Bloomberg Law's Points of Law uses AI and machine learning to get to the heart of a court opinion and pull out its most important and relevant language. This helps legal researchers unearth documents they could not have found before and more easily identify similarities between court opinions. Built over five years on 13 million court opinions, this application of AI can minimize the number of errors or missed documents a user might face. Still, attorneys' clients will be best served when these tools are supplemented with legal practitioners' expertise.

Darby Green is the Commercial Director for Litigation and Bankruptcy at Bloomberg Law. In this role, she is responsible for product development and go-to-market strategies focused on the business intelligence and legal research needs of litigators, as well as many of Bloomberg Law's artificial intelligence tools. She joined Bloomberg Law in 2009, prior to which she practiced as a commercial litigator in New York. She has an A.B. from Dartmouth College and a J.D. from Vanderbilt University Law School.