Welcome to 2019, a time when there seems to be a major corporate "hack" every month and consumer data is the new hot commodity. No one is safe! Whether it is the FaceApp controversy or the Cambridge Analytica scandal, consumer data is becoming increasingly valuable and vulnerable. Aside from the obvious privacy concerns, the value of personal information also presents a concern that is discussed far less often: discrimination.

Employers who are paying attention know that it is illegal to discriminate based on an individual's race, gender, religion, national origin, disability or age. For example, it is illegal to post a job opportunity seeking an individual who is "40 years or younger," or to openly seek out or exclude someone of a particular race or gender. In 2014, Ventura Corp. paid $354,250 to settle a lawsuit filed by the Equal Employment Opportunity Commission (EEOC) claiming the company discriminated against men because it had a pattern or practice of refusing to hire qualified male applicants. The EEOC also more recently sued the University of Kansas Medical Center for violating the Age Discrimination in Employment Act (ADEA), based on allegations from a former employee that the medical center's department head instructed staff to give hiring preference to millennials over older applicants. And just this month, with the help of the American Civil Liberties Union (ACLU), a groundbreaking settlement was reached between the federal agency that operates AmeriCorps and the plaintiffs in a class action suit alleging that AmeriCorps' screening process violated the Rehabilitation Act. The suit claimed the process, which included a detailed medical questionnaire and restrictive prior health screening guidelines, ultimately resulted in discrimination against individuals with certain disabilities.

But what about companies that use targeted advertisements for job postings, so that only people of a certain age, race or gender will see them? Is this practice illegal discrimination, even if it is inadvertent or unintentional? Due to the increasing capabilities of artificial intelligence (AI), a whole new world of litigation over technology-driven discrimination is emerging. In fact, U.S. Sens. Kamala Harris, Elizabeth Warren and Patty Murray wrote a letter to the acting chair and commissioners of the EEOC on Sept. 17, 2018, stating their concern that facial analysis technologies used in the hiring process may perpetuate gender, racial, age and other biases in the workplace. The letter cited an example in which facial analysis technology incorrectly identified 28 members of Congress as people who had been arrested, and disproportionately misidentified African American and Latino members of Congress in doing so. The senators ultimately requested that the EEOC issue guidelines explaining to employers how facial analysis and algorithmic techniques may produce bias and violate anti-discrimination laws.

Whether unintentional or inadvertent discrimination against job applicants is actionable is open for debate. For example, the U.S. Court of Appeals for the Seventh Circuit recently held in Kleber v. CareFusion that the ADEA's disparate impact protections apply only to current employees, not to outside job applicants. In April, however, the U.S. District Court for the Northern District of California granted a motion for collective action against PricewaterhouseCoopers under the ADEA based on allegations that the firm used a recruitment tool that could only be accessed by those on college campuses or using school-affiliated job sites, allegedly resulting in discrimination against entry-level applicants over the age of 40. Specifically, the collective action is open to all individuals who, from Oct. 18, 2013, forward, applied for a covered position in PwC's tax or assurance lines of service, met the minimum qualifications for the position to which they applied, were age 40 or older, and were not hired.

In 2017, ProPublica, along with the New York Times, published an article outing companies like Verizon, Amazon, Goldman Sachs, Target and Facebook for placing ads on social media platforms that target particular demographics, a method called "microtargeting." The New York Times also published an article in March detailing Facebook's recent efforts to prohibit companies from advertising housing, jobs or financial services to certain people based on particular demographics or characteristics. That move came after at least five lawsuits were filed against Facebook claiming it allows companies to target advertisements for jobs, home sales and credit offers to certain people and not others according to gender, age and ZIP code.

By way of example, on Dec. 20, 2017, a lawsuit was filed in the U.S. District Court for the Northern District of California by the Communications Workers of America and others "seeking to vindicate the rights of older workers to be free of age discrimination in employment advertising, recruiting and hiring." The suit was filed against T-Mobile US, Amazon.com, Cox Communications and Cox Media Group for "targeting their employment ads to younger workers via Facebook's ads platform." The complaint detailed the process companies go through after purchasing a Facebook ad, a process the complaint alleges is a "conduit for age discrimination": Facebook requires employers or employment agencies to select the population of Facebook users who will receive the ad, including the age range or "ethnic affinities" of those users, which the plaintiffs argue inherently leads to age- and race-based discrimination in hiring.

While not as blatant as other forms of discrimination, targeted ads can be just as discriminatory and can further exacerbate the problem of discrimination in both housing and hiring. Targeted ads thus present a new area of potential liability for employers who seek new talent in the wrong way, and that exposure is only expected to grow as AI becomes more commonplace. Even if a company does not intentionally discriminate in its hiring, or uses a third-party platform like Facebook to post its ads, that does not necessarily excuse it from liability. Under Title VII and the ADEA, an employer can be found liable for employment discrimination under the disparate impact theory, which does not require intent; even a facially neutral hiring practice can be unlawful if it disproportionately screens out members of a protected class.
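The disparate impact theory even has a familiar quantitative rule of thumb: under the EEOC's Uniform Guidelines on Employee Selection Procedures, a selection rate for any group that is less than four-fifths (80%) of the rate for the most-favored group is generally regarded as evidence of adverse impact. The short Python sketch below illustrates that check; the applicant and hire counts are hypothetical, and a real audit should be run with counsel.

    def selection_rates(outcomes):
        # Map each group to its selection rate (hires / applicants).
        return {group: hired / applied for group, (applied, hired) in outcomes.items()}

    def four_fifths_flags(outcomes):
        # Flag any group whose selection rate falls below 80% of the best rate.
        rates = selection_rates(outcomes)
        best = max(rates.values())
        return {group: rate < 0.8 * best for group, rate in rates.items()}

    # Hypothetical pipeline data: (applicants, hires) per age bracket.
    data = {"under_40": (200, 40), "40_and_over": (150, 12)}
    print(selection_rates(data))   # {'under_40': 0.2, '40_and_over': 0.08}
    print(four_fifths_flags(data)) # {'under_40': False, '40_and_over': True}

Because 8% is well under four-fifths of 20%, the older bracket would be flagged, even though nothing in the hypothetical posting mentions age.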

So what can employers do to ensure they are not part of the problem and to limit their exposure to liability? Whether a company hires internally, outsources its hiring to third-party companies or uses automated hiring practices, there are steps it can take to make sure its techniques do not inadvertently exclude certain job seekers. Companies should implement training for hiring managers, supervisors and the employees working under them so that they understand the federal, state and local anti-discrimination laws that apply where the company operates. Training on recognizing and preventing implicit bias should also be provided. In addition, companies should ensure job opportunities are posted in various places where they are accessible to all, regardless of age, gender or ZIP code. Job postings should clearly articulate the requirements and duties of the job and the skills and qualifications applicants should possess, and they should use gender-neutral terms. According to a New York Times article, phrases like "top-tier" and "aggressive" tend to attract more male candidates, whereas terms like "partnerships" and "passion for learning" attract more female applicants; a simple screen for such language is sketched below. Employers should also take a close look not only at the individuals being hired, but at those actually applying and being brought in for interviews, to make sure the candidate pool is diverse. If it is not, changes must be made.
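As a concrete illustration of that language review, here is a minimal Python sketch that scans a posting for gender-coded phrases. The word lists are hypothetical stand-ins built around the terms cited above; a genuine review would rely on a vetted research lexicon rather than these few examples.

    import re

    # Illustrative word lists only; not an authoritative lexicon.
    MASCULINE_CODED = ("aggressive", "top-tier", "dominant", "competitive")
    FEMININE_CODED = ("partnerships", "passion for learning", "supportive", "collaborative")

    def flag_coded_terms(posting):
        # Return the masculine- and feminine-coded phrases found in the text.
        text = posting.lower()
        def found(terms):
            return [t for t in terms if re.search(r"\b" + re.escape(t) + r"\b", text)]
        return {"masculine": found(MASCULINE_CODED), "feminine": found(FEMININE_CODED)}

    posting = "We want an aggressive, top-tier analyst with a passion for learning."
    print(flag_coded_terms(posting))
    # {'masculine': ['aggressive', 'top-tier'], 'feminine': ['passion for learning']}

A posting that trips heavily on one list can be redrafted in neutral terms before it is published.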

When it comes to interviewing candidates, use structured interviews in which the same predetermined questions are asked of every candidate. Consider the company's values and ask questions that elicit responses showing whether a candidate shares them. If the resources are available, have more than one person interview each candidate. If your company uses an algorithm or other technology-driven tool to find potential new hires, take the time to understand how the tool works: make sure it is not perpetuating human biases or narrowing the pool of candidates in unlawful ways, and confirm that job opportunities are visible to a diverse audience rather than only to certain targeted demographics; a simple reach audit along the lines sketched below is one starting point. While the combination of valuable consumer data and AI can pose problems for employers, companies that are conscientious and proactive can take steps today to be part of the solution.
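On the visibility point, one simple diagnostic is to compare the share of ad impressions each demographic bracket receives with a reference share, such as that bracket's share of the qualified local workforce. The sketch below assumes the ad platform's reporting can supply impression counts by age bracket; all figures are hypothetical, and the 0.8 cutoff simply mirrors the four-fifths convention discussed earlier.

    def reach_ratios(impressions, reference_share):
        # Ratio of each bracket's share of impressions to its reference share;
        # a ratio well below 1.0 means the bracket rarely sees the ad.
        total = sum(impressions.values())
        return {b: (impressions[b] / total) / reference_share[b] for b in impressions}

    # Hypothetical numbers: who actually saw the job ad vs. the local workforce.
    impressions = {"18-24": 5000, "25-39": 4200, "40+": 800}
    reference = {"18-24": 0.20, "25-39": 0.35, "40+": 0.45}

    for bracket, ratio in reach_ratios(impressions, reference).items():
        flag = "  <-- under-reached" if ratio < 0.8 else ""
        print(f"{bracket}: {ratio:.2f}{flag}")

Here the 40-and-over bracket, 45% of the reference workforce, receives only 8% of the impressions, which is exactly the pattern alleged in the Facebook targeting suits.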

Fara A. Cohen is an associate at Griesing Law, where she focuses her practice on corporate transactions, commercial litigation and employment law. She can be reached at 215-501-7849 or [email protected].