Cynthia J. Cole, left, and Sarah Phillips, right, of Baker Botts.

Artificial intelligence (AI) has been predicted to profoundly disrupt the legal profession in very short order. That prediction is picking up steam amid a maelstrom of shifting regulation, policy developments and client demands around the very gasoline that fuels the AI engine: data. The hype (and hysteria) around AI appears to rest on an exclusively rosy picture of AI's technological capabilities and a fundamental misunderstanding of the direction in which regulatory landscapes are moving, particularly global data privacy laws.

AI is traditionally data hungry: built on enormous amounts of collected data, much of it personal information. Historically loose regulatory controls around data collection have allowed personal information to proliferate into many different hands. This data is also extremely attractive to “bad actors” who would use it to manipulate elections or promote other nefarious activity, not to mention general hacking and the sale of information used to falsify financial records or even impersonate individuals themselves (i.e., deepfakes).

But the data collection landscape is changing quickly. The European Union's (EU) General Data Protection Regulation (GDPR), which went into effect on May 25, 2018, has set off a string of global data privacy laws, including the California Consumer Privacy Act (CCPA). The CCPA includes a very specific provision on the selling of data, where a “sale” is defined as “selling, renting, releasing, disclosing, disseminating, making available, transferring, or otherwise communicating orally, in writing, or by electronic or other means, a consumer's personal information to another business or a third party for monetary or other valuable consideration.” And valuable consideration, under California contract law, is any benefit conferred or agreed to be conferred (a low bar). Companies that sell the personal information of California residents will be forced, as of Jan. 1, 2020, to allow those same individuals to opt out of those sales. Even more importantly, companies must be able to locate each agreement in which they allow third parties to sell personal information onward, beyond the strict scope of any service provided. This process of identification and renegotiation is no small operational feat for most companies.

Further, boards of directors and senior management are increasingly focused on the GDPR, the CCPA and data protection risks. Regulatory bodies are heightening their focus on cybersecurity. And with the frequency of high-profile data security incidents, companies feel the need to implement highly specialized data privacy and security policies with an eye toward mitigating enforcement actions and blunting ever-increasing fines.

As regulatory investigations increase and private rights of action bloom (both the GDPR and the CCPA provide private rights of action, though for now the CCPA's applies only to data breaches), differences in the protections afforded during enforcement of data privacy laws present significant risks for companies and their legal advisors. Legal privilege protections are complex for US companies in the midst of GDPR implementation and enforcement: each EU Member State has its own rules on privilege, and several jurisdictions do not extend the privilege to in-house counsel. A private right of action for California consumers may allow discovery of a company's prior data-mapping efforts undertaken to comply with both the GDPR and the CCPA; if not done properly, those efforts may inadvertently expose data collection and processing methods not intended for public consumption.

There are very real benefits to operationalizing certain legal services and to using AI to increase visibility into data, but companies should take a balanced approach. Companies should bring in specialists who understand AI, who know how to protect the sensitive information that machine learning may be culling, and who understand how the very search terms used may be turned against the company in an eventual investigation.

The counter-culture of human touch: AI

Legal AI is the use of artificial intelligence technologies, such as natural language processing and machine learning, for legal tasks. It has several potential applications. Automating repetitive and routine tasks, such as legal research, administrative legal support, and due diligence review, is an apt use of AI. But it is important to recognize that AI should not entirely replace sound legal counsel and protections.

An AI technique called natural language processing is common among emerging legal technologies. Many people know it through Amazon's Alexa and Apple's Siri. These everyday technologies use natural language processing, software that can read “natural language,” i.e., the ordinary text we all use. Because the law is in large part constructed from the written word, the ability to read legal texts at great speed using natural language processing gives lawyers and clients a considerable new capacity.

For example, natural language processing could be used in assessing contracts to locate key clauses and to determine how the language in those clauses differs from other contracts. Or it can be used to linguistically analyze legal searches to not only return relevant documents, but also provide highly responsive suggestions that answer thousands of types of black-letter-law questions.
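To make the clause-spotting idea concrete, here is a minimal Python sketch. It stands in for a real natural language processing system with simple regular-expression patterns; the clause names, patterns and sample contract text are invented for illustration only, and a production tool would rely on a trained language model rather than keyword matching.

```python
import re

# Hypothetical clause patterns a reviewer might flag; a production
# system would use a trained NLP model rather than regular expressions.
CLAUSE_PATTERNS = {
    "limitation_of_liability": re.compile(r"in no event shall .* be liable", re.I),
    "onward_transfer": re.compile(
        r"(sell|disclose|transfer)s? .* to (a |any )?third part(y|ies)", re.I
    ),
}

def locate_key_clauses(contract_text: str) -> dict:
    """Return a mapping of clause name -> sentences that match it."""
    # Naive sentence split; real NLP libraries segment text far more reliably.
    sentences = re.split(r"(?<=[.;])\s+", contract_text)
    hits = {name: [] for name in CLAUSE_PATTERNS}
    for sentence in sentences:
        for name, pattern in CLAUSE_PATTERNS.items():
            if pattern.search(sentence):
                hits[name].append(sentence.strip())
    return hits

sample = (
    "Vendor may disclose Customer data to a third party for marketing. "
    "In no event shall Vendor be liable for indirect damages."
)
for clause, matches in locate_key_clauses(sample).items():
    if matches:
        print(clause, "->", matches)
```

Once clauses are located this way, comparing their language against a standard form is a natural next step, which is where machine learning enters the picture.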

Machine learning refers to the ability of software algorithms to receive input data, use statistical analysis (and some human intervention) to predict outcomes, and update the algorithms or outputs as more data becomes available. In other words, algorithms that progressively improve themselves by feasting on data. The more data consumed, the better the algorithm gets at spotting patterns, including patterns in speech and images. In the context of legal AI, machine learning can analyze contracts and suggest edits based on predefined legal policies.
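As a rough illustration of that feedback loop, the short Python sketch below trains a toy clause classifier with the open-source scikit-learn library. The training clauses, labels and “policy” are invented for illustration; a real deployment would train on far more data and retrain as reviewed clauses accumulate.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training examples: clauses hand-labeled against a
# hypothetical legal policy (1 = suggest an edit, 0 = acceptable).
clauses = [
    "Vendor may sell consumer personal information to affiliates.",
    "Personal data is processed solely to provide the service.",
    "Provider may disclose user data to third parties for any purpose.",
    "Data is retained only as long as necessary for the engagement.",
]
labels = [1, 0, 1, 0]

# Turn text into statistical features, then fit a simple classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(clauses, labels)

# Predict whether a new clause likely needs an edit under the policy.
new_clause = "Supplier may sell customer information to third parties."
print(model.predict([new_clause]))  # e.g., [1] -> flag for revision

# Refitting on a growing set of reviewed clauses is how such a model
# "progressively improves" its suggestions as more data becomes available.
```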

Legal AI has become a headline issue for law firms and in-house corporate departments, especially as in-house legal departments face growing pressure to reduce spending on external legal fees.

But navigating rapidly evolving data privacy laws requires experts familiar with the company's revenue, operational and legal complexities. And legal AI should only be implemented in these specialty areas with the guidance of experienced counsel. The cost of getting this wrong is only increasing. And the loss of privilege or the public exposure of inaccurate information can be devastating.

The lure of adopting emerging legal AI will remain high, but uncertainty in the current data privacy landscape should be sufficient to remind companies of AI's limits. While companies are still adapting to the GDPR, the EU and the US continue to enact data privacy laws at record speed. And we have only seen the first of the EU's GDPR enforcement actions. Companies need to be nimble in understanding the risks associated with long-standing revenue models that are now being painstakingly reexamined in an expensive and public way.

The AI world is changing. Data collection, processing and transfer are under heightened scrutiny from multiple jurisdictions, and companies have not yet seen the full impact. Companies need to understand the implications of legal AI and its cost, which may be more than originally anticipated. For now, keep your hands in the batter.

Cynthia Cole is special counsel in Baker Botts' Palo Alto office, where she specializes in corporate, technology transactions and data privacy. Cynthia is certified as an Information Privacy Professional (CIPP/E) by the International Association of Privacy Professionals. Sarah Phillips is an associate in Baker Botts' Palo Alto office and a member of its corporate, technology and privacy practice groups.