Last year, President Joe Biden signed Executive Order 14110 on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” Since the order issued, much attention has focused on the provision requiring “the head of each agency with relevant regulatory authority over critical infrastructure … to assess potential risks related to the use of AI in critical infrastructure sectors involved … and to consider ways to mitigate these vulnerabilities.” See Exec. Order No. 14110 Section 4.3(a)(i), 88 Fed. Reg. 75,191, 75,199 (Nov. 1, 2023). Naturally, government agencies have generated numerous reports cataloging the well-documented risks of AI, and many companies have implemented risk-mitigation guidelines governing its use. To be sure, the risks of AI are real, from privacy and cybersecurity concerns, to potential copyright infringement, to the broader societal risks posed by automated decision-making tools. Perhaps because of these risks, far less attention has been paid to the offensive applications of AI, and, relatedly, fewer companies have adopted guidelines promoting the use of artificial intelligence. Those companies may be missing out on opportunities to reduce legal risk, as a recent report by the Department of Energy (DOE) highlights.

While many sectors could benefit from the offensive adoption of AI, the energy sector stands out as especially ripe for it. In addition to cataloging risks, DOE provided a road map for future regulation of the offensive adoption of AI, showing how artificial intelligence can reduce threats to critical infrastructure and limit the operational disruptions that commonly spark litigation.