Over recent months, artificial intelligence has evolved rapidly, with the emergence of applications that could transform industries and societies across the globe. At the same time, the cyber threat landscape is growing more complex, raising questions about how AI will affect cybersecurity: what new risks its use could introduce, and how the legal landscape is evolving in response.

AI can be both a sword and a shield: bad actors have new methods to carry out their attacks, while organizations can leverage new defensive capabilities, such as improved attack detection, risk management tooling, and identification of suspicious activity. With this in mind, the legal and regulatory standards governing what "reasonable security" looks like will likely evolve just as quickly as AI deployment and the threat landscape itself. In this article, we explore the intersection of AI and cybersecurity, including the main risks and best practices for organizations that develop or deploy AI tools, while also considering the future of AI regulation.

AI and Threat Actor Risks

Threat actors are increasingly using AI to generate more frequent, effective, and widespread attacks. For example, security experts have warned that threat actors can use generative AI to develop malware, data encryption code, and new dark web marketplaces. AI-powered malware may become more intelligent, gaining the ability to search infected machines for specific documents or sensitive data. Threat actors can also use generative AI to improve the fidelity of phishing emails and other social engineering attacks: without the telltale misspellings and grammar mistakes, malicious messages are more likely to slip past spam filters and victims alike.