Artificial intelligence (AI) is increasingly used to generate realistic phishing emails, deploy malware, and create convincing fake online content. Moreover, AI failures are likely to become more frequent and more severe over time. While business and technological measures exist to address AI cybersecurity issues and to mitigate their adverse effects, the legal protection that AI programmers currently enjoy means that cybersecurity legal defenses must be amended.

Security practices for guarding against traditional and AI cyberattacks are similar. In both cases, technological and business security measures include risk assessment, network defenses, data access and password restrictions, backup systems, encryption, insurance, and employee training. However, legal defenses against AI cyberattacks differ from those against traditional cyberattacks because of existing AI programmer immunity.
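To make one of the measures listed above concrete, the following is a minimal sketch of encrypting sensitive records at rest, written in Python and assuming the third-party cryptography package is available; the record contents and key-handling shown are hypothetical simplifications, not a complete security program.

```python
# Minimal sketch: encrypting a sensitive record at rest with a symmetric key.
# Assumes the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

# Generate a key once; in practice the key would be kept in a secrets
# manager or hardware security module, never stored alongside the data.
key = Fernet.generate_key()
cipher = Fernet(key)

# Encrypt a (hypothetical) customer record before writing it to storage.
record = b"customer_id=1234; card_ending=4242"
token = cipher.encrypt(record)

# Later, an authorized process holding the key can recover the plaintext.
assert cipher.decrypt(token) == record
```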

Currently, if a court finds that a software programmer did something to cause cybersecurity difficulties, or reasonably should have done something to prevent damage caused by a program, the programmer may be liable for the damage done by that software. In short, a measure of cybersecurity has conventionally been achieved by holding programmers liable to parties harmed by a cyberattack (or even by the threat of such liability). This recourse is not available in the event of an AI cyberattack.