As artificial intelligence technology becomes increasingly sophisticated and more widely utilized, providers of AI solutions, and the entities that implement such technology in commercial settings, face a myriad of legal and ethical issues, including bias and diversity, privacy, and confidentiality concerns, to name a few. Although the federal government has not yet passed comprehensive legislation specifically governing AI technology, a patchwork of state and local laws addressing such challenges has started to develop. This article discusses these key issues, offers examples of relevant legislation, and considers a potential avenue for federal oversight.

Bias and Diversity

In commercial settings, hiring decisions and performance evaluations are a notable use case for AI technology. Amazon's high-profile attempt several years ago to develop an automated recruiting tool to review and rank job candidates illustrates how such decision tools can skew outcomes. The company ultimately scrapped the project after realizing "its new system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way. That is because Amazon's computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry," according to Jeffrey Dastin of Reuters.