Littler Mendelson is the largest labor and employment law firm in the world. It counts more than 1,000 lawyers in 70 offices, domestically and internationally, with particularly strong outposts in Mexico, Canada, Venezuela, and Germany.

Veteran class action litigator Garry Mathiason recently traded his management role as firm chairman for a new challenge: co-chairing the firm's emergent Robotics, Artificial Intelligence (AI), and Automation practice group. Mathiason sat down in his San Francisco office with ALM Director of Intelligence Dirk Olin to discuss the new group as well as the brave new world of the law machine.

ALM Intelligence: Let's start with some definitions. When you refer to robotics, AI, and automation, what do those terms mean to your firm and your clients? Also, your website refers to this as an Industry specialization, as opposed to a Practice group — is that a significant difference in taxonomy?

Mathiason: We call our group a practice group, because we are identifying practical ways that transformative 21st century technology can enter the workplace and remain compliant with workplace laws enacted long before this technology existed.

We also serve an industry that will have gross revenue of $158 billion in 2020, with projected revenue in excess of $1 trillion by 2025. We are dedicated to learning as much as possible about employers in this industry, as robotics and AI promise to become the largest industry in the world by 2030, exceeding traditional manufacturing, transportation, and hospitality combined. Being familiar with the business is essential to being able to advise clients.

ALM Intelligence: How does AI currently apply — in terms of potential liability issues or commercial opportunities within the practice of labor and employment law? Do you see those applications evolving in the near future, and if so, how?

Mathiason: “Narrow” AI is already here. One example is ROSS and the use of IBM's Watson technology for legal research and memo writing. Soon, this AI will allow an analysis of legal cases and questions much as is now occurring in medicine.

Meanwhile, predictive analytics and big data are used today in many industries. This presents questions of privacy, discrimination, displacement, and the entire range of workplace laws.

ALM Intelligence: How do you determine liability if an AI application causes some type of harm? What are some of the main loci of liability?

Mathiason: This is far more complex than it appears on the surface. The simple answer is to place liability on the producer of the AI. Yet, AI does not operate in a vacuum — there are potentially several organizations that work with the software — often modifying the original programs. There are integrators, end users, and subcontractors. With machine learning, the AI may have made modifications that are unknown or beyond the knowledge of the original programmers. There is a difference between a closed system coming from a producer and open systems that are easily modified and supplemented.

ALM Intelligence: How should government approach AI regulation? Should it come from the states, the feds? What are the cross-border issues? Can government keep up with the rate of technological change?

Mathiason: This is a good question. Government cannot keep up with the rate of technological change. Regulators should avoid acting too quickly or without substantial review of the consequences of the regulations. This is a classic area for unintended consequences. Also, this technology is borderless. Banning or over-regulating AI merely causes it to move to a different, more accepting government. When a clear public safety issue is confronted, then regulations should be explored. A classic example is drones flying too close to airports. Yet drone regulations, such as those finalized by the FAA, often go further than necessary, slowing down implementation in some states or countries, and development subsequently moves to a different jurisdiction.

ALM Intelligence: Beyond issues of compliance, what, if any, are the ethical implications of AI that keep you up at night?

Mathiason: Ethical issues and implications are extremely important. The self-driving car is now programmed to protect the driver and passengers — yet what are the ethical implications if a car kills several pedestrians when the technology maneuvers it out of a collision with another car? Should AI-enabled robots be able to kill without a human decision maker? Currently, military drones and combat robots still are under human control. A different type of ethical issue is whether to use AI to provide legal answers and representation to individuals who could not otherwise afford an attorney. There are some essays on this issue coming from public defenders.

ALM Intelligence: Globally, which countries are leading the way in terms of these technologies and their oversight?

Mathiason: Japan, Germany, South Korea, United States, and China are the leaders in disruptive technologies. Clearly, the most oversight is in Europe through the EU. There is an extensive study commissioned by the EU Parliament that provides a roadmap for oversight — or limiting the oversight.

ALM Intelligence: You've been watching the AI space for years — what were the recent inflection points that caused you to launch this practice? Any key metrics?

Mathiason: I have followed robots and their programming for several decades. The turning point was 2010, when exponential advances took place in sensors, cloud computing, and machine learning. This was made possible by a great reduction in price and the universal use of cell phones. The iPhone has over 40 programming and component breakthroughs. Many of these components are manufactured in the millions, bringing the per-unit cost to pennies instead of hundreds of thousands of dollars. The excess components are often available on the open market. There are also some major breakthroughs in programming that can be discussed separately.

ALM Intelligence: As a practical matter, what's the difference between handling these matters ad hoc versus through a discrete new group?

Mathiason: The group approach is far superior to ad hoc or individual efforts. We are able to create solutions that did not previously exist. Littler's workplace violence prevention practice group brought law together with management, security, and threat assessment psychology. The multidisciplinary recommendations and programs have been adopted throughout the U.S., by the federal government, and in at least 14 other countries. This addressed a serious client problem, not only with existing law but with entirely new approaches that changed the law. Several more examples could be given.

ALM Intelligence: In the next two years, how much of your AI practice will involve existing clients versus new business? How about in five years?

Mathiason: During the next two years, we will represent mostly our existing clients as users of robotics and AI. In five to 10 years, we will have most of our business from companies that are only coming into existence now.

ALM Intelligence: Are the attributes of a lawyer practicing in the AI and robotics group the same or different from lawyers generally? What is the required level of technical sophistication and training?

Mathiason: Littler attorneys are not practicing engineers or scientists. We can and have learned about new technologies, often before they are introduced to the public. This does not require a technical background. However, attorneys with zero-to-five years of experience express more interest than those with 20-plus years. They are very open to learning about technology and what can be accomplished. Once we know the capabilities of the technology, we are thought leaders on how it can be most effectively used while complying with current workplace laws. If new legislation or regulations are needed or are wrongly being advanced, we can educate governmental leaders through Littler's Workplace Policy Group.

ALM Intelligence: What does success look like? How quickly are new practice groups expected to be profitable?

Mathiason: With a new practice group focused on robotics and AI, it is often hard to differentiate between success and failure. What we stay focused on is the size and impact of this industry over a short time. If we can see how Littler can help make the introduction of disruptive technology into the workplace more compliant with existing workplace laws, we can provide a great benefit to the industry and our clients. Some of the efforts will not succeed — and we learn from their failure.

One of Littler's strongest attributes is a willingness to provide resources that will not become immediately profitable. The consequence is that, looking back at the firm 40, 30, 20, 10, and even five years ago, we have moved from the handful of attorneys in a single San Francisco office that I joined to offices and attorneys worldwide. While becoming the world's largest employment and labor law firm is due to many factors, one has been our willingness to innovate and find solutions that depart from the 100-year-old traditional practice of law. Today, Littler is a leader in law firm technology and is aggressively expanding its reach throughout the world. We are careful in where we place our resources, and we are not traumatized by the potential for failure. We firmly believe the best days for Littler are in the future!

ALM Intelligence: How much of lawyering itself can be machine-learned? Will Google ever compete against Big Law?

Mathiason: Machine learning will greatly improve the quality and efficiency of the practice of law. Many of the more mechanical tasks will quickly be done by technology, and over the next five to 10 years, technology will perform increasingly complex tasks. AI and machine learning will bring the law to billions of people worldwide who otherwise could not afford an attorney or exercise their rights. I predict that machine learning will not lead to Google challenging Big Law. Rather, Google will create systems and machine learning solutions that will become integrated into the fabric of Big Law. For many decades, Big Law will flourish, but it will increasingly require technology to be successful in efficiently providing quality representation — and the number of attorneys will decline.