The global economy, and the nature of work itself, is transforming as businesses and other organizations adopt increasingly advanced artificial intelligence (AI) systems to perform core analytical, creative, and decision-making functions. While AI technologies hold revolutionary economic potential, they can pose significant risks if not managed properly. As the use of AI systems spreads across industries, regulators are struggling to keep pace with the technology. Congress has yet to enact federal legislation on foundational questions of how organizations should manage AI safety, trustworthiness, and accountability. Courts are wrestling with the extent to which the basic legal principles that undergird the economy, such as property rights and liability rules, should apply to these technologies. State legislatures and various federal agencies have stepped in to fill the void, creating a patchwork of legal and technical standards that organizations can find difficult to navigate.

Against this backdrop, the primary burden of managing AI’s risks often falls on corporate officers tasked with designing, implementing, and supervising internal AI governance programs for their organizations.