Artificial intelligence, and its ability to analyze large, traditionally unwieldy sets of data, has revolutionized the ways in which data is used. Privacy and data protection regulations and concerns have risen in tandem. These regulations typically govern personal information (e.g., name, address, Social Security number) and provide individuals with numerous rights related to its collection, use and access. For example, prominent data privacy laws such as Europe's General Data Protection Regulation and the California Consumer Privacy Act of 2018 regulate "any information relating to an identified or identifiable natural person" and "information that identifies, relates to, describes, is capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household," respectively.

As the power of AI increases, the amount and type of data that falls into these categories will grow. Where current AI technology may identify diseases in an MRI scan, future AI technology may identify the individual patient as well. Notably, patient health records have already been at issue in data privacy fines for lax protection. Innovative uses of AI technology may likewise create data privacy compliance issues. For example, the Spanish soccer league, LaLiga, was recently fined for its use of location data and speech recognition technology aimed at preventing piracy.

Furthermore, once data becomes subject to data privacy laws, the compliance burden falls on businesses to accurately map, monitor and understand how that data is collected, used, shared and retained. In order to maintain customer trust in AI, comply with the myriad privacy and data protection regulations, and add value for customers through the collection and use of their data, businesses should consider the following:

Taking an Inventory of AI Practices

Businesses should understand what, where, and how AI is employed. For example, does the customer service team use a chatbot powered by AI? Does the finance team use AI for fraud detection? Does the security team use AI to predict and manage incidents? Documenting these AI practices, and considering whether to perform Privacy Impact Assessments or other compliance reviews to confirm that adequate controls are in place and existing governing policies are met, is the first step toward compliance.
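As one illustration only (not a prescribed methodology, and with hypothetical field names), such an inventory can be kept as structured records that capture where AI is used, what personal data it touches and whether a Privacy Impact Assessment has been completed:

from dataclasses import dataclass, field
from typing import List

@dataclass
class AIInventoryEntry:
    """Hypothetical record for one AI use case within the business."""
    system_name: str            # e.g., "Customer service chatbot"
    owning_team: str            # e.g., "Customer Support"
    purpose: str                # why the AI is deployed
    personal_data: List[str] = field(default_factory=list)  # categories of personal data processed
    pia_completed: bool = False # has a Privacy Impact Assessment been performed?

# Example entries drawn from the questions above (synthetic data)
inventory = [
    AIInventoryEntry("Support chatbot", "Customer Service", "Automated first-line support",
                     ["name", "email", "chat transcripts"], pia_completed=True),
    AIInventoryEntry("Fraud detection model", "Finance", "Flag anomalous transactions",
                     ["payment history", "device identifiers"], pia_completed=False),
]

# Surface entries that process personal data but still lack a compliance review
for entry in inventory:
    if entry.personal_data and not entry.pia_completed:
        print(f"PIA outstanding: {entry.system_name} ({entry.owning_team})")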

Reviewing the Business Strategy for AI

Businesses should also understand the motivation for deploying AI. For example, who are the stakeholders developing the business strategies for AI? How does AI add value to service and product offerings, and how is that value communicated to consumers? How does the business's AI strategy align with company culture, statements made by its executive team and public-facing documents such as financial disclosures and Privacy Statements? Notably, even tech giant Google was fined for lack of transparency with regard to its personalized advertising.

Developing Governance and Policies for AI

With an understanding of where and how AI is used and the business strategy behind it, businesses should consider leveraging existing governing policies for privacy, security and confidentiality. Several data protection principles should also be observed when developing AI: fairness of processing, purpose limitation, data minimization and transparency/right to information. For example, data privacy regulations around the world commonly grant consumers the right to access collected data, understand how it is used, restrict how it is processed and have it removed on request. Privacy laws also raise challenges with respect to consent, including the scope of use of personal data for ongoing and future training of algorithms and for product development and validation.
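As a minimal sketch, assuming an in-memory record store and hypothetical field names rather than any particular regulation's required implementation, the access, restriction and removal rights described above can map onto simple service operations, with data minimization applied when records feed model training:

from typing import Dict, Optional

class ConsumerDataService:
    """Minimal sketch of access / restrict / delete handling for personal data."""

    def __init__(self):
        self._records: Dict[str, dict] = {}   # consumer_id -> personal data record
        self._restricted: set = set()          # consumers who restricted processing

    def access(self, consumer_id: str) -> Optional[dict]:
        """Right to access: return a copy of everything held about the consumer."""
        record = self._records.get(consumer_id)
        return dict(record) if record else None

    def restrict_processing(self, consumer_id: str) -> None:
        """Right to restrict: keep the data but stop using it (e.g., for model training)."""
        self._restricted.add(consumer_id)

    def delete(self, consumer_id: str) -> None:
        """Right to removal: erase the record on request."""
        self._records.pop(consumer_id, None)
        self._restricted.discard(consumer_id)

    def training_rows(self):
        """Data minimization: only non-restricted records, only the fields the model needs."""
        needed = ("usage_metrics",)             # hypothetical minimal field set
        for cid, record in self._records.items():
            if cid not in self._restricted:
                yield {k: v for k, v in record.items() if k in needed}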

Considering Ethics and Bias

Businesses may wish to publish AI or Data Usage Ethics Principles, as many companies (e.g., Microsoft) have done. These principles often go beyond what is legally compliant and impose obligations grounded in ethics (what is the right thing to do with data, and what do customers expect). In addition, policymakers have expressed increasing concern about the implications of AI for fairness, bias and discrimination. Businesses are well advised to confirm that their policies and processes include controls for rooting out bias and discrimination in their algorithms.
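One commonly cited screening heuristic, offered here only as a sketch with synthetic data and not as a legal standard, is to compare a model's favorable-outcome rates across demographic groups via a disparate impact ratio; ratios well below roughly 0.8 are often treated as a signal for further review:

from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group_label, approved_bool). Returns approval rate per group."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / total[g] for g in total}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest group approval rate to the highest, plus the per-group rates."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values()), rates

# Synthetic, hypothetical decisions from an AI model
sample = [("group_a", True)] * 80 + [("group_a", False)] * 20 \
       + [("group_b", True)] * 55 + [("group_b", False)] * 45
ratio, rates = disparate_impact_ratio(sample)
print(f"Approval rates: {rates}, disparate impact ratio: {ratio:.2f}")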

While consumers value the privacy and security of their personal data, the potential of AI innovation to improve products and services is undeniable. Businesses cannot ignore novel implementations of AI technology because of concerns about data privacy, but neither can they ignore data privacy in the pursuit of AI innovation. As in Intellectual Property law, where AI has unsettled traditional concepts such as inventorship and protectability, and Competition law, where AI has raised fears about anti-competitive practices such as collusion and parallel pricing, businesses must understand that complying with data privacy laws when introducing a new AI feature is as much a part of the innovation as the feature itself. In many ways the next technological revolution will be one based on data, and the businesses that can best use that data in a legal, private and secure manner will be the winners.

Alexandra Ross is director, Global Privacy and Data Security Counsel at Autodesk Inc., a leader in 3D design, engineering and entertainment software. Previously she was senior counsel at Paragon Legal and associate general counsel for Wal-Mart Stores. She is a certified information privacy professional (CIPP/US, CIPP/E, CIPM, CIPT, FIP and PLS) and practices in San Francisco. She holds a law degree from Hastings College of Law and a B.S. in theater from Northwestern University. She is a recipient of the 2019 Bay Area Corporate Counsel Award-Privacy. She also launched The Privacy Guru blog in January of 2014 and has published an ebook Privacy for Humans (available on Amazon and iTunes).

Drew J. Schulte is counsel at Pillsbury Winthrop Shaw Pittman. As an attorney admitted to practice before both the U.S. and European patent offices, he brings a multijurisdictional approach to serving his clients' needs. With a tech background, specifically in intellectual property and data privacy, Drew prides himself on being equipped to handle the full spectrum of his clients' needs—whenever and wherever they may arise.