Until courts understand how results generated by artificial intelligence (AI) are produced and what sources the AI program relies upon, they will be reluctant to accept such results as supporting the point sought to be made in court. Recent decisions addressing AI-enhanced evidence and AI-generated results demonstrate that courts want to know what is in the "black box" and, without understanding the "black box," courts are unwilling to accept AI-created results.


AI-Enhanced Evidence

Seeking to introduce AI evidence requires careful planning and execution, as well as an expert to establish its bona fides for admission into evidence. In Washington v. Puloka, No. 21-1-04851-2 (Super. Ct. King Co. Wash. Mar. 29, 2024), counsel for the defense argued that because the source video had low resolution and motion blur, artificial intelligence was used to "increase its resolution, add sharpness and definition, and smooth out the edges of the video images."

The defense's expert conceded that he was not a forensic video technician and admitted that he was unsure whether the Topaz Labs AI program was used in the forensic video analysis community. He could not point to any testing, publications, or discussions within the group of users he identified that evaluated the reliability of Topaz. The expert testified that Topaz's AI used machine learning to enhance videos based on images in its training library, but he "did not know what videos the AI-enhancement models are 'trained' on, did not know whether such models employ 'generative AI' in their algorithms, and agreed that such algorithms are opaque and proprietary."