Poisoning AI is a novel legal self-help technique for securing artificial intelligence data and programs. The technique involves modifying an AI algorithm so that it intentionally produces specific erroneous results. Poisoning AI may be used both to stop third parties from using AI over the internet and to identify cybersecurity weaknesses. To mitigate the legal difficulties associated with the technique, appropriate notice language should be included in the user terms-of-use agreement.
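To illustrate the mechanism in concrete terms, the sketch below shows one common form the technique can take: injecting mislabeled, trigger-bearing samples into training data so that the resulting model produces a specific erroneous result. It is a minimal sketch only, assuming a scikit-learn workflow and a synthetic dataset; identifiers such as TRIGGER_VALUE are illustrative rather than drawn from any particular system.

```python
# Minimal data-poisoning sketch (illustrative only): poisoned training
# samples carry a trigger and an intentionally wrong label, so inputs
# bearing the trigger are pushed toward a specific erroneous prediction.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Small synthetic binary classification dataset.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Poison a subset of class-0 samples: stamp a trigger value onto one
# feature and relabel them as class 1.
TRIGGER_VALUE = 5.0  # hypothetical trigger magnitude
poison_idx = rng.choice(np.where(y == 0)[0], size=50, replace=False)
X_poisoned, y_poisoned = X.copy(), y.copy()
X_poisoned[poison_idx, 0] = TRIGGER_VALUE  # embed trigger in feature 0
y_poisoned[poison_idx] = 1                 # intentionally wrong label

model = LogisticRegression(max_iter=1000).fit(X_poisoned, y_poisoned)

# A clean class-0 example will typically flip to the attacker-chosen
# class once the trigger is present.
clean_sample = X[y == 0][0].reshape(1, -1)
triggered_sample = clean_sample.copy()
triggered_sample[0, 0] = TRIGGER_VALUE
print("clean prediction:    ", model.predict(clean_sample))
print("triggered prediction:", model.predict(triggered_sample))
```

In this sketch the deliberately corrupted behaviour is confined to inputs carrying the trigger, which mirrors the defensive use case described above: ordinary users are unaffected, while unauthorized copies or probes of the model can be made to yield predictably wrong outputs.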