
Ever since tech mogul Elon Musk told the National Governors Association last summer that artificial intelligence, or AI, must be regulated, lawmakers and lawyers have been focused on what it would actually mean to regulate AI.

Indeed, when searching for statutes that deal with machine learning or artificial intelligence, it's striking how few provisions address the issue. “Artificial intelligence” and “machine learning” appear just five times in the United States Code and just four times in the Code of Federal Regulations. Various states have statutory approaches to artificial intelligence, but there are few, if any, substantive statutes dealing directly with these issues.

This is not, of course, to suggest that there is no regulation of algorithms. In the U.S., certain regulatory bodies have long been thinking about how to oversee the use of algorithms. Financial regulators, for example, have supervised the use of algorithms and automated decision-making at financial firms for at least a decade. Spurred in part by the faulty modeling assumptions that contributed to the financial crisis of 2008 and 2009, the Federal Reserve Board and the Office of the Comptroller of the Currency issued SR 11-7, which requires financial institutions to keep track of the internal models they use.

Meanwhile, the European Union, in implementing the international capital requirements mandated by Basel III, included provisions similar to SR 11-7 in its regulations. Elsewhere, the Food and Drug Administration is developing regulations for the machine learning algorithms radiologists use to help diagnose diseases.

But of all the efforts to address the rising impact of AI, the most wide-reaching statute regulating the use of algorithms is the EU's General Data Protection Regulation, or GDPR. The GDPR directly regulates the use of algorithms applied to personal data in the EU and will begin to be enforced in May of this year. With fines of up to 4 percent of a violator's parent company's global annual revenue, the penalties for noncompliance can be quite significant.
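To put that ceiling in concrete terms, here is a minimal sketch of the fine arithmetic, assuming the formula in Article 83(5) of the regulation (the greater of 20 million euros or 4 percent of worldwide annual turnover) and an entirely hypothetical revenue figure:

```python
# Minimal sketch of the GDPR fine ceiling under Article 83(5): the greater of
# EUR 20 million or 4 percent of worldwide annual turnover. The revenue figure
# below is hypothetical and for illustration only.

def max_gdpr_fine(worldwide_annual_turnover_eur: float) -> float:
    """Return the statutory ceiling for a top-tier GDPR fine, in euros."""
    return max(20_000_000.0, 0.04 * worldwide_annual_turnover_eur)

# A hypothetical multinational with EUR 50 billion in annual turnover faces a
# ceiling of EUR 2 billion for the most serious violations.
print(f"{max_gdpr_fine(50_000_000_000):,.0f}")  # -> 2,000,000,000
```

Because the statute takes the greater of the two figures, the flat 20 million euro amount governs for smaller firms, while the percentage-based ceiling is what makes exposure so large for the biggest companies.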

While regulators are still working out the details of its implementation, the GDPR appears to create a presumption that applying algorithms to personal data is unlawful except in certain circumstances. The exceptions are, by design, quite narrow; one of them allows processing based on user consent. The regulation also creates several substantive rights, including the right to receive some form of explanation when an algorithm makes a decision with certain effects. Exactly what that explanation must entail is the subject of much debate, as scholars Andrew Selbst and Julia Powles recently noted.
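The regulation does not prescribe what such an explanation must look like, but a minimal sketch helps make the debate concrete. The toy scoring model below, with entirely hypothetical features and weights, reports each feature's contribution to an automated decision; whether a readout like this would satisfy the GDPR is precisely the kind of question still being argued.

```python
# A minimal sketch of one possible form of algorithmic "explanation":
# per-feature contributions from a simple linear scoring model.
# The model, features, weights, and threshold are all hypothetical.

features = {"income": 48_000, "debt": 12_000, "years_employed": 3}
weights = {"income": 0.00002, "debt": -0.00005, "years_employed": 0.1}
threshold = 0.5

contributions = {name: weights[name] * value for name, value in features.items()}
score = sum(contributions.values())
decision = "approve" if score >= threshold else "deny"

print(f"decision: {decision} (score {score:.2f}, threshold {threshold})")
for name, contribution in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {contribution:+.2f}")
```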

Legislatures in the U.S. appear to be watching the EU's approach closely but do not yet seem willing to put regulation that strict on the books. A host of bills is pending at the state and federal levels, for example, and nearly all of them would create a commission or committee to study the issues and make recommendations to the legislature. The charges of these commissions give a good indication of the range of issues legislators are concerned about. In Congress, the recent bipartisan FUTURE of Artificial Intelligence Act of 2017 would create a committee to draft recommendations on how AI will affect the workforce, education, accountability to international regulations, and societal psychology, among other subjects.

Meanwhile, bills in Virginia and Pennsylvania direct the study of the economic impact of automating jobs that once required a human. A bill in Vermont requires a study of the ethical use of artificial intelligence. Bills in Alabama and Nevada would authorize the use of autonomous vehicles in certain scenarios. And a proposal in Florida contemplates taxing automated systems.

New York City recently enacted a bill that calls for what is perhaps the most in-depth study of AI yet, requiring recommendations on issues such as the biases that can work their way into algorithms. The city's committee is charged with, among other tasks, ensuring that individuals affected by automated decisions made by public bodies can obtain further information about those decisions.

All of these are early-stage reactions to larger trends, each brought about by the increasing adoption of a grab bag of technologies commonly labeled “AI.” Cars, for example, are starting to drive without human assistance. Cell phones now process speech and perform tasks based on voice commands. In medicine, radiologists are using AI models to help diagnose diseases.

At present, a large number of questions about the law of AI remain open, and they will surely be the subject of further legislative debate and judicial review. But the technology is racing forward. It's only a matter of time until the law catches up.

Andrew Burt is chief privacy officer and legal engineer at Immuta, a data management platform for data science. Stuart Shirrell is a legal engineer at Immuta and a J.D. candidate at Yale Law School.