As clients integrate artificial intelligence (AI) into their businesses, attorneys must not only advise on issues like data management, privacy and intellectual property, but also counsel clients on potentially discriminatory biases built into AI algorithms. Understanding these issues can prevent clients from violating EEOC guidelines, Title VII of the Civil Rights Act, the Americans with Disabilities Act and the Equal Pay Act.

AI refers to algorithmic tools that simulate human intelligence, mimic human actions and can incorporate self-learning software. You would be hard-pressed to find a sector AI does not affect. Consider its use in robotic surgery, virtual nursing assistants, predictive customer purchasing and behavioral marketing, to name a few. While AI optimizes efficiency, studies confirm real concerns about potentially discriminatory impacts.

As companies look to streamline business and capitalize on the unprecedented efficiencies of AI, some are evaluating whether their data is riddled with biases. Recently, Amazon scrapped an AI recruiting tool that showed bias against women. Amazon's tool was trained to vet applicants by observing patterns in resumes submitted over a 10-year period. Because most of those resumes were submitted by men, Amazon's AI tool did not select applicants in a gender-neutral way. According to a report in Reuters, Amazon's algorithm was taught to recognize word patterns in resumes rather than skill sets. Amazon's AI tool taught itself that male candidates were preferable and even penalized resumes that demonstrated membership in women's groups.

Last year, seven Democratic senators, including presidential candidates Kamala Harris and Elizabeth Warren, asked the Equal Employment Opportunity Commission (EEOC) to consider whether facial analysis amplifies biases and violates anti-discrimination laws. The EEOC is responsible for enforcing federal laws that prohibit employment discrimination based on race, religion, sex, age and disability, among other characteristics. The senators pointed to AI algorithms that train facial recognition software on predominantly Caucasian faces and disproportionately misidentify African Americans and Latinos. The senators cited research showing that facial recognition algorithms are 30 times more likely to misidentify darker-skinned women than lighter-skinned men. Additionally, facial analysis technologies use images of a person's face to infer characteristics such as mannerisms, mood or health, and may erroneously read cues across racial and gender lines when comparing applicants to a company's top managers. The senators directed similar letters to the FTC and FBI.

Most recently, these senators requested information from the federal agencies that oversee the financial industry on the discriminatory impact of algorithmic decision-making on people of color in the financial services lending sector. EEOC Commissioner Charlotte Burrows is “especially concerned that inconsistencies or mistakes in the data, such as errors in credit reports or flawed assumptions that are built into algorithms used to analyze data may simply serve to compound discrimination …” According to Burrows, “algorithms used to assess big data are only as good as the assumptions that underlie them.”

So given the potential pitfalls of bias in AI, how should attorneys advise their clients?

Companies must consider the information their AI tech relies on. If the data set the AI incorporates is biased, incomplete or incorrect, the AI can incorporate those same biases. According to former EEOC Chair Jenny Yang, “if the people designing the algorithms aren't aware of the different possible outcomes by race, gender, age and other groups, the algorithms themselves may get certain biases built in.” Companies should consider rechecking the data reviewed by AI. Some machines can self-correct, but it is critical that companies have a good understanding of the technology and ask probing questions. After all, companies rely on AI tech to make critical decisions, and at this point the technology is simply not far enough along to be an unchecked decision-maker.
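For clients with technical teams, a minimal sketch of what "rechecking the data" can mean in practice is shown below. It assumes a hypothetical historical hiring data set with a self-reported gender column; the file and column names are illustrative only, not a reference to any real system.

```python
import pandas as pd

# Hypothetical historical hiring data used to train a screening tool;
# the file and column names here are illustrative, not a real data set.
df = pd.read_csv("historical_applicants.csv")  # columns: gender, hired, ...

# How is each group represented in the training data?
group_share = df["gender"].value_counts(normalize=True)
print(group_share)

# Flag any group making up a small share of the data; a skewed sample
# is one way a tool can "teach itself" a biased preference, as Amazon's did.
underrepresented = group_share[group_share < 0.20]
if not underrepresented.empty:
    print("Warning: groups underrepresented in the training data:")
    print(underrepresented)
```

A check this simple will not catch every bias, but it forces the conversation about whether the data feeding the tool reflects the applicant pool the company actually wants to draw from.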

Audit, audit, and audit some more. It is vital that companies test and monitor their algorithms. The Society for Human Resource Management (SHRM) encourages companies to conduct periodic reviews, auditing annually to catch biases in AI, and to work with an attorney when navigating self-audits. Companies in highly regulated industries, such as financial services or those subject to HIPAA, build in extra steps to evaluate the data considered relevant and determine whether it is truly representative. Keep in mind that the result of an audit may prompt a company to correct (or adjust) for biased results.
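One concrete benchmark an audit can apply is the EEOC's "four-fifths rule" from its Uniform Guidelines on Employee Selection Procedures: a selection rate for any group that is less than 80% of the rate for the highest-selected group is generally regarded as evidence of adverse impact. The sketch below applies that check to a hypothetical log of an AI screening tool's decisions; it is a starting point for a self-audit, not a substitute for counsel's analysis.

```python
import pandas as pd

# Hypothetical log of an AI screening tool's pass/fail decisions.
log = pd.DataFrame({
    "gender":   ["M", "M", "M", "M", "F", "F", "F", "F"],
    "selected": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Selection rate for each group.
rates = log.groupby("gender")["selected"].mean()

# Four-fifths rule: compare each group's rate to the highest group's rate.
impact_ratio = rates / rates.max()
print(impact_ratio)

# Ratios below 0.80 suggest the tool may be producing an adverse impact.
flagged = impact_ratio[impact_ratio < 0.80]
if not flagged.empty:
    print("Potential adverse impact; review with legal counsel:")
    print(flagged)
```

In this toy example the tool selects men at a 75% rate and women at a 25% rate, an impact ratio of roughly 0.33, well under the 0.80 threshold and exactly the kind of result an annual audit is meant to surface.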

Companies should evaluate their employee policies and incorporate training on the discriminatory impact of AI. This is crucial both for the personnel engineering AI tech and for those relying on it downstream. Those employees should receive training on bias and understand the limitations of machine learning. AI is here to stay, and its use is just another facet companies must build into their internal policies.

AI legislation is currently pending in at least 13 states and, in some states, has materialized into actual laws on the books. By reviewing the current landscape of pending and recently enacted laws, attorneys will have a sense of the priorities and areas of concern, and can appropriately advise clients on imminent areas of legislation. For example, a law Illinois enacted in May of this year requires employers who use AI recruiting tools to analyze video interviews to notify applicants, provide a written description of how the tech works, obtain consent from each applicant and destroy the video within 30 days of completing the hiring process. On the federal level, President Trump signed an executive order directing agencies to invest in AI research and development, develop rules, and train workers to gain relevant skills.

Nevertheless, companies should not wait for regulation to direct their paths forward. At the very least, companies can let their values direct their use of AI and consider memorializing those values for public view. Google and Microsoft recently published their value-based principles governing AI use. For example, Google committed to designing AI tech with concrete goals of fairness and inclusion, using representative data sets to train and test the tech, checking the tech for biases, and analyzing its performance.

Ultimately, AI reflects the values of its designers and owners. Just as companies design policies on grievances, privacy and safety, their use of and reliance on AI should be woven into the company's code of conduct as an important part of the conversation.

In sum, while AI affords companies unprecedented efficiency, its unchecked use can increase risk. Given its cross-sectional impact on areas like privacy, employment and anti-discrimination law, companies should communicate their objectives and conduct self-audits with the advice of legal counsel. In fact, many law firms have recognized AI as an area worth expanding into: in just the last year, firms including Fox Rothschild, DLA Piper, Dentons, Paul Hastings, Littler Mendelson and Jones Day have added practices dedicated to AI.

Ciera Logan is an associate at Fox Rothschild. She handles a broad array of litigation matters, including employment and complex commercial litigation. She also advises on recently proposed and adopted legislation in data privacy and security.