Computerized Test Interpretations in Custody Litigation
In his column on Matrimonial Practice, Timothy M. Tippins explores some of the evidentiary issues presented when computer-based test interpretations are used by expert witnesses.
September 05, 2018
This column recently discussed some of the controversies surrounding the use of psychological testing in connection with forensic custody reports and testimony. (Tippins, T.M., “Psychological Testing: Controversy and Consensus,” July 18, 2018). It was there noted that many within the community of forensic psychologists—at least among those who concern themselves with such things as reliability and validity—consider the use of computer-based test interpretations (CBTIs) in forensic settings to be improper and perhaps even unethical practice. (James R. Flens & Leslie Drozd, Psychological Testing In Child Custody Evaluations 17 [Routledge 2005]). Accordingly, it is incumbent upon lawyers confronting a custody evaluation that relies upon a CBTI report to consider mounting a forceful challenge. This article will explore some of the evidentiary issues presented when CBTIs are used and will highlight some instructive recent decisions from the criminal law arena that may provide useful guidance in advancing such a challenge.
Context: The Interpretive Process
Not all custody evaluators personally score and interpret psychological test data. Psychiatrists and social workers typically lack the requisite training to do so. Even some psychologists who were trained during their doctoral studies to score and interpret psychological tests frequently rely upon commercial services that provide a narrative interpretive report describing various personality traits discerned from the test data. Although these descriptions are intended to be only hypotheses to be considered in the context of all the other data derived during the assessment process, it is not unusual to find that the evaluator has lifted significant portions of text verbatim from the CBTI report and deposited them into the custody report, often stated in conclusory or even diagnostic terms. Sometimes there are no quotation marks or footnotes to inform the reader that the interpretive statements are the product of a commercial computer program rather than the evaluator's own words. Plagiarism, anyone?
The Issues
Among the various issues that attend the use of such programs, the two that are the focus here are: (1) the hearsay implications; and (2) the reliability problem. In one sense these issues are two sides of the same coin. The purpose of the hearsay rule is to keep out evidence the reliability of which cannot be tested in the crucible of cross-examination. Yet in a different sense, the reliability issue is discrete from the hearsay concern. An expert opinion might not rely upon hearsay yet still be subject to exclusion on reliability grounds because it is based on a principle or procedure that fails the Frye test, i.e., it has not gained general acceptance in the relevant scientific community. (Frye v. United States, 293 F. 1013 [D.C. Cir. 1923]). In any event, both issues arise from the fact that when an evaluator relies upon such computer-based reports he or she is operating in the dark, basing conclusions on information that has essentially emerged from the proverbial black box. This becomes apparent when one compares this computer-reliant practice with the process of personal interpretation.
When a psychologist personally interprets a person's responses to test questions such as those that comprise the MMPI-2, which is often used in custody evaluations, he or she turns to the empirical research reported in the peer-reviewed literature of the psychology discipline as well as various texts and manuals that provide guidance with respect to test interpretation. When done in this fashion, the evaluator can be cross-examined with respect to his or her interpretive methodology, and any deficiencies in that process can be revealed. The intellectual sturdiness of the research studies he or she used can be scrutinized. Questions as to how the evaluator applied the research to the interpretation can be asked and answered. Was a given inference drawn because of a specific elevation on Scale 3 of the test, because of an elevation on Scale 6, or because of a combination of those two scales? What level of elevation on those scales did the evaluator find significant? Because the witness personally interpreted the test, he or she can explain how that interpretation was produced.
In contrast, when the evaluator relies upon computerized reports, he or she is presented with descriptive statements that were generated by a computer program driven by algorithms (decision rules). And herein lies the key problem: those algorithms are closely guarded proprietary secrets that are unknown to the evaluator. The evaluator reading the interpretive report does not know which precise scale scores or combination of scale scores generated each descriptive statement. The evaluator does not know which research studies, if any, provided the basis for each statement. The evaluator does not know which interpretive statements are based upon specific research findings and which are solely the product of someone's unproven subjective judgment.
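To make the notion of a hidden “decision rule” concrete, consider the following minimal sketch, written in Python solely for illustration. Every scale name, threshold, and narrative sentence in it is invented; no vendor's actual algorithm is known or reflected here. The sketch simply shows how a program can translate scale scores into canned interpretive statements while the rules that did the translating remain invisible to anyone reading the finished report.

```python
# Purely hypothetical sketch of a CBTI-style "decision rule" (all scale
# names, thresholds, and narrative text are invented for illustration;
# actual commercial algorithms are proprietary and undisclosed, which is
# precisely the problem described above).

def interpret(scores):
    """Map test scale scores (keyed by scale name) to canned interpretive statements."""
    statements = []

    # Rule 1: a single-scale elevation triggers a stock sentence.
    if scores.get("Scale 3", 0) >= 65:
        statements.append("The respondent may rely on denial under stress.")

    # Rule 2: a two-scale combination triggers another stock sentence.
    # The finished report shows only the sentences, never the rules,
    # so a reader cannot tell which rule produced which statement.
    if scores.get("Scale 3", 0) >= 65 and scores.get("Scale 6", 0) >= 70:
        statements.append("Interpersonal suspiciousness may be present.")

    return statements


# The evaluator (and the court) sees only this output, not the rules above:
print(interpret({"Scale 3": 68, "Scale 6": 72}))
```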
Evidentiary Analysis
At common law, the Keough rule forbade the expert from relying upon anything other than personal knowledge or facts in evidence, i.e., reliance upon hearsay would render the opinion inadmissible. (People v. Keough, 276 NY 141 [1937]). In 1984, the Court of Appeals, in Hambsch v. New York City Transit Authority (63 NY2d 723, 480 N.Y.S.2d 195 [1984]), modified the Keough rule in civil cases to allow expert reliance on what has come to be called “professionally reliable hearsay.” The court held that “an expert may rely on out-of-court material if 'it is of a kind accepted in the profession as reliable in forming a professional opinion.'” In so holding, however, the court explicitly stated that more was required than a showing that the out-of-court material was of a type upon which the expert's profession customarily relied. The court prescribed a second, distinct, and essential foundational prong, namely that “to qualify for the 'professional reliability' exception, there must be evidence establishing the reliability of the out-of-court material.” (For more detailed discussion, see Tippins, T.M. & DeLuca, L.K., “The Custody Evaluator Meets Hearsay: A Star-Crossed Romance,” JAAML, Vol. 30, No. 2 [2018].)
Quite inexplicably, the courts largely ignored the second-prong reliability requirement of Hambsch until 2002 when the Appellate Division, in Wagman v. Bradshaw (292 A.D.2d 84, 86-87 [Second Dept. 2002]), corrected course and reminded bench and bar that more was required than “customary reliance.” The expert, declared the court, could rely upon “material not in evidence provided the out-of-court material is of the kind accepted in the profession as a basis in forming an opinion and the out of court material is accompanied by evidence establishing its reliability.” (Emphasis supplied.)
Wagman involved the testimony of an expert with respect to plaintiff's injury based not upon his own interpretation of the plaintiff's MRI but rather upon an interpretive report provided by a non-testifying expert. This the court held to be error due to the absence of independent proof of the reliability of the interpretive report. The parallel with the practice of custody evaluators relying upon CBTI reports is inescapable. While Wagman has been followed in the context of custody litigation with respect to evaluator reliance upon third-party collateral source statements (Straus v. Strauss, 136 A.D.3d 419 [First Dept. 2016]; Lisa W. v. Seine W., 9 Misc.3d 1125(A) [Fam.Ct., Kings Co., Olshansky, J., 2005]), a review of reported case law suggests the challenge has yet to be made with respect to CBTI reliance. It is a challenge that is very much worth making.
Analogs in Criminal Practice
People v. Fields (160 A.D.3d 1116 [Third Dept. 2018]) presented issues of admissibility of DNA evidence. The case concerned expert testimony that was based upon DNA analysis “conducted with the TrueAllele Casework System, a computer program that subjects a DNA mixture to statistical modeling techniques to infer what DNA profiles contributed to the mixture and calculate the probability that DNA from a known individual contributed to it.” (Fields at 1118.)
At trial, the prosecution placed the results of the DNA analysis in evidence through the testimony of Mark Perlin, the CEO of the company that developed TrueAllele, as well as the employee who wrote the report. Perlin was “one of two individuals with access to the proprietary source code of TrueAllele, which is the program's 'computer code in [the] original programming language' as written by the software developers.” He confirmed during cross-examination that “the source code had not been disclosed to the State Police or other TrueAllele users.” When defense counsel asked Perlin to produce the source code, the court sustained the prosecution's objection. In affirming, the Appellate Division made the following essential point:
Defendant could have demanded disclosure of the source code to permit an expert review to probe these “possible infirmities in the collection and analysis of data” used against him (citations omitted). He did not do so, nor did he include the source code in his pretrial request that the People instruct Perlin to bring certain documents with him for purposes of cross-examination. He instead raised the issue during his cross-examination of Perlin, during which he established that the source code was secret and that the instructions embodied in it were unknown. Supreme Court drew the line at a question regarding Perlin's willingness to produce the source code itself, a belated and prejudicial request for raw computer code that, absent an expert interpretation that defendant did not indicate was forthcoming, would have been meaningless to the jury. (Emphasis added.)
(Fields at 1120.)
The key point: to challenge computer-based interpretations, the challenging party must have, and is entitled to obtain, the source code in advance of trial so that it can be analyzed by the party's retained expert to prepare a meaningful challenge to reliability. Thus, the attorney seeking to challenge a custody report that relies upon a CBTI should demand that the evaluator produce the source code. He or she will be unable to do so because, as noted, the CBTI vendors keep it secret. A motion should then be made to strike the report that has relied upon evidence derived from the “black box” that the program's proprietor refuses to unlock. This might take the form of a Frye application, which could include additional challenges to reliability apart from the hearsay objection, or the form of a targeted motion aimed squarely at the Hambsch/Wagman issue.
It should be kept in mind that in the context of a Frye hearing, wherein the proponent of the expert opinion bears the burden of proof, the proponent may endeavor to prove reliability even in the absence of production of the source code, for example by presenting so-called “validation studies,” as occurred in one DNA challenge (People v. Wakefield, 47 Misc.3d 850 [Sup.Ct., Schenectady Co., Coccoma, J., 2015]). To the best of this writer's knowledge, there are no legitimate validation studies reported in the peer-reviewed literature, or for that matter anywhere else, establishing the validity of CBTI programs. David A. Martindale, a prominent forensic psychologist, has observed that what some purveyors of CBTI services tout as “validity studies” are really nothing more than “customer satisfaction surveys.” Such surveys, he states, “are not validity data” that should be taken seriously, let alone used to pass Frye muster. Martindale specifically notes:
The data alluded to by some vendors are customer satisfaction data, not validity data. It is reasonable to presume that those who purchase CBTIs believe them to be accurate; thus, those from whom data were gathered constitute an approval-inclined group. Further, since the early 1940s, psychologists have been aware of a survey-distorting dynamic known as non-response bias—the tendency of dissatisfied consumers of a product or service to be unresponsive to requests for their assessments of the product or service. Finally, an appropriately conducted study comparing practitioner descriptions of their patients with CBTI descriptions would require that practitioners create written records of the descriptions prior to viewing the CBTI descriptions, which, I should note for the sake of accuracy, are hypotheses.
Martindale, D.A., personal communication, Sept. 1, 2018.
Conclusion
Family law attorneys confronting adverse forensic opinions that are based in part on CBTI reports need to seriously consider mounting a forceful challenge to admissibility. In People v. Wilson (— N.Y.S.3d — [2018], 2018 N.Y. Slip Op. 05715), another DNA case involving the TrueAllele system, the Appellate Division reversed a conviction, finding that defense counsel's “failure to request a Frye hearing on the TrueAllele Casework system” was “one of those rare instances in which defense counsel's sole failure—in an otherwise proficient representation—constituted ineffective assistance of counsel.” No attorney should want to make it into the advance sheets this way.
Timothy M. Tippins is an adjunct professor at Albany Law School and is on the faculty of the American Academy of Forensic Psychology and on the Affiliate Postdoctoral Forensic Faculty at St. John's University.