Kelly Trindel is head of industrial-organizational science and diversity analytics at pymetrics Inc., where she helps the company proactively test for hidden racial/ethnic and gender biases in assessment tools. The New York-based startup, which received $40 million in venture capital funding last fall, uses games based on cognitive neuroscience and artificial intelligence to help employers find the candidates who best fit their needs, while reducing gender and racial biases. Pymetrics' cloud-based assessment tools are being used by Tesla, LinkedIn and Unilever, among others.

Before joining the company early last year, Trindel, who has a Ph.D. in experimental psychology, was chief analyst and research director at the U.S. Equal Employment Opportunity Commission in Washington, D.C. While there, she provided statistical and analytical support to the commission's discrimination investigations and case development for nearly eight years during the Obama administration.

Trindel spoke with us recently about what lawyers need to know about the use of artificial intelligence and machine learning in recruitment and hiring. Her remarks were edited for brevity and clarity.

  1. Assessment and hiring tools must comply with the federal Uniform Guidelines on Employee Selection Procedures, issued under Title VII of the Civil Rights Act, which have existed in some form since 1978. "My message to labor lawyers is that these guidelines are still relevant. Old regulations still matter, and labor lawyers should be asking vendors of tools like pymetrics whether and how they comply with them. If I were still at the EEOC and investigating the use of a tool from pymetrics or another vendor, I would be looking at the Uniform Guidelines, which remain in use until new guidelines are issued for investigating new tools."
  2. Artificial intelligence offers opportunities to improve the fairness and validity of assessment tools. "AI tools give us new ways of de-biasing. Before we go live with a model at pymetrics, we test it with a group of people that we call a bias set, people who have played our games and voluntarily given us their race, ethnicity and gender. In this way, we can see before going live whether there is a significant difference in performance by demographic group, for example whether men outperform women or whites outperform Hispanics. If there is, we can identify the predictors that cause the differences and remove them from the local model," she said. (A simplified sketch of this kind of pre-deployment check appears after this list.)
  3. Be aware that there seems to be special scrutiny at the EEOC on facial recognition technology, "not only in employment, but in different situations. This technology is under special scrutiny in part because of MIT research finding that facial recognition has had trouble accurately reading the faces of anyone but white males, especially women of color. A letter was sent to the acting chair of the EEOC from a group of U.S. senators [in September 2018], including Kamala Harris, asking about the commission's perspective on the use of facial recognition technology and AI in employment selection, and it is useful for labor lawyers to know this is a focus. To my knowledge, the commission has not issued an official response, perhaps because the EEOC currently lacks a quorum. But it is something for labor lawyers to be aware of."
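
To illustrate the kind of bias-set check Trindel describes in item 2, here is a minimal sketch of an adverse-impact analysis in the spirit of the Uniform Guidelines' four-fifths rule. This is not pymetrics' actual methodology or code; the function names, data layout, thresholds, and sample data are assumptions chosen for illustration.

```python
# Minimal sketch of an adverse-impact ("four-fifths rule") check on a held-out
# "bias set" before deploying a hiring model. NOT pymetrics' actual method;
# names, thresholds, and data are illustrative assumptions.
from itertools import combinations

from scipy.stats import fisher_exact  # exact test suited to small samples


def selection_rates(records):
    """Compute per-group selection (pass) rates.

    `records` is a list of (group_label, passed) tuples, where `passed` is
    True if the model recommended the candidate.
    """
    counts = {}
    for group, passed in records:
        total, hits = counts.get(group, (0, 0))
        counts[group] = (total + 1, hits + int(passed))
    return {g: (hits / total, total, hits) for g, (total, hits) in counts.items()}


def adverse_impact_report(records, ratio_threshold=0.8, alpha=0.05):
    """Flag group pairs whose selection-rate ratio falls below the
    four-fifths threshold and whose difference is statistically significant."""
    rates = selection_rates(records)
    flags = []
    for a, b in combinations(rates, 2):
        low, high = sorted([rates[a] + (a,), rates[b] + (b,)])
        ratio = low[0] / high[0] if high[0] > 0 else 1.0
        # 2x2 contingency table: passed / not passed for each group
        table = [[low[2], low[1] - low[2]], [high[2], high[1] - high[2]]]
        _, p_value = fisher_exact(table)
        if ratio < ratio_threshold and p_value < alpha:
            flags.append((low[3], high[3], round(ratio, 3), p_value))
    return flags


# Example with made-up data: 200 candidates in each of two groups,
# passing at 60% and 40% respectively.
bias_set = [("group_A", i < 120) for i in range(200)] + [
    ("group_B", i < 80) for i in range(200)
]
print(adverse_impact_report(bias_set))  # flags group_B vs. group_A
```

This sketch only detects a disparity; the workflow Trindel describes goes further by tracing the disparity back to specific predictors and removing them from the model before it goes live.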

The letter also was signed by Sens. Elizabeth Warren, D-Massachusetts, and Patty Murray, D-Washington. A call to the EEOC for an update on its response to the letter was not returned by deadline, nor was an emailed request for comment sent to Harris' office.