Artificial Intelligence

Last week, the U.S. and 35 other member countries of the Organisation for Economic Co-operation and Development (OECD) agreed to a set of intergovernmental standards pertaining to the use of AI.

Before anyone gets too excited, standards are just that—not rules, per se, but they do lay out some expectations that AI will be used in ways that protect human rights, prioritize safety and security, and place some measure of accountability at the feet of the people deploying the technology. And that might be enough to seriously impact the future of AI.

“I think you will see people giving very close attention to these things, whether they are the force of law or not. We have de jure standards and de facto standards all the time, and some de facto standards just become sort of absolutely required. People expect them,” said Stuart Meyer, a partner in the intellectual property group at Fenwick & West.

Indeed, the OECD standards don't mark the first time that the subject of AI regulation has been broached. In the U.S., for example, financial regulators have long kept an eye on the use of algorithms and automated decision-making tools at financial firms.

Meyer compared the trajectory of AI through the public consciousness to that of privacy: It's always been somewhat important, but has really exploded over the last 10 to 20 years. However, even privacy is weighted differently across global jurisdictions, so individual principles featured in the standards—such as non-discrimination, diversity and fairness—may be presented differently in a potential American regulation than they would be in, say, a Brazilian one.

“I think that you'll see as those get implemented in different places, they're going to reflect the norms in those societies,” Meyer said.

Meanwhile, private industry may start benefiting from the presence of a global standard right away. Meyer thinks having a set of criteria to work towards could actually help with the design and implementation of AI products.

For example, the OECD guidelines call for AI actors to ensure traceability in relation to data sets, processes and decisions made during a system's lifecycle so that any related outcomes can be analyzed. Companies don't want to invest in developing tools that lack core tenets like traceability, only for a new law or shifting public sentiment to force them back to the drawing board.
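To make the traceability requirement concrete, here is a minimal Python sketch of the kind of audit trail it implies: each automated decision is logged alongside the data set snapshot and model version that produced it, so the outcome can be reconstructed later. The file names, field names and "credit-model" identifier are hypothetical illustrations, not drawn from the OECD text.

```python
# Hypothetical sketch of a decision audit trail for traceability.
import json
import hashlib
from datetime import datetime, timezone


def audit_record(dataset_path: str, model_version: str, inputs: dict, decision: str) -> dict:
    """Build one traceability entry tying a decision to its data and model."""
    with open(dataset_path, "rb") as f:
        dataset_hash = hashlib.sha256(f.read()).hexdigest()
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "dataset_sha256": dataset_hash,   # which reference data was in use
        "model_version": model_version,   # which model build produced the outcome
        "inputs": inputs,                 # the facts the decision was based on
        "decision": decision,             # the outcome to be analyzed later
    }


if __name__ == "__main__":
    # Illustrative data set so the sketch runs end to end.
    with open("training_data.csv", "w") as f:
        f.write("income,requested_amount,label\n54000,12000,1\n")

    entry = audit_record("training_data.csv", "credit-model-1.4.2",
                         {"income": 54000, "requested_amount": 12000}, "approved")

    # Append-only log so the lifecycle of each decision can be reconstructed.
    with open("decision_audit.log", "a") as log:
        log.write(json.dumps(entry) + "\n")
```

The point of the sketch is that this kind of record-keeping is far easier to design in from the start than to retrofit, which is Meyer's argument in the analogy that follows.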

“If you create a jet airplane and then realize it only has one engine and redundancy really requires two engines, you can't just tape another engine onto the prototype. You have to start all over,” Meyer said.

Still, it's possible to go too far in the other direction. A white paper released by the U.S. House of Representatives' Oversight and Government Reform Subcommittee on Information Technology last October alluded to the dangers that over-regulation can pose to AI innovation.

Robert Silvers, a co-chair of the artificial intelligence practice at Paul Hastings, doesn't think that the standards pose that risk.

“I think that this is high-level enough where it's not going to stifle innovation,” Silvers said.

As for companies that are simply looking for a better grasp of how best to implement AI in their day-to-day work, the standards could provide some relief as well. Silvers said he has spoken with a number of general counsel and chief compliance officers who understand the risks that come with AI development but aren't sure how they should organize their efforts to mitigate those risks.

“I think that will actually give compliance officers and legal teams the courage and confidence to do their work,” Silvers said.