States vs. Vendors: Are Some Risk Assessment Tools Better Than Others?
While some states would never consider a risk assessment tool created by a third party, others say developing their own proprietary instruments is unnecessary and, given the resources required, unrealistic. Researchers are split on whether the state-vendor distinction even matters.
July 14, 2020 at 07:00 AM
12 minute read
A host of criminal justice researchers, universities and private companies support the current market for risk and needs assessment (RNA) and pretrial risk assessment tools. But they're far from the only ones. After all, some U.S. states aren't just consumers of assessment tools—they're developers of them as well. To what extent their dual role is needed, however, is a matter of some debate.
Legaltech News found little consensus around whether state-built proprietary tools are more accurate than ones developed by third parties, or vice versa. While some states with unique populations argue that third-party instruments are unable to meet their specific needs, others say the research and support behind these tools make them viable options. What's more, they add that adopting a third-party tool is often less expensive than developing their own proprietary one.
But while the jury is still out on state versus third-party assessment tools, there's more consensus around the best third-party instrument—or lack thereof. Many researchers say there's little difference between the most widely used third-party tools, so long as they're all properly validated. Yet this doesn't mean these instruments have all hit their potential.
In fact, it's quite the opposite. Some researchers bemoan the fact that tools haven't advanced as much as hoped. But what's holding many assessment instruments back often has more to do with the criminal justice system than the technology itself.
The Builders
According to Legaltech News's research, at the beginning of 2020 there were at least 17 proprietary assessment tools that states either built themselves or commissioned a developer to build on their behalf for their exclusive use.
This count of proprietary tools does not include ones created by third parties but customized by individual states, which is common practice, or tools initially built by a third party for one state, but now offered to the broader market. This latter category includes instruments like the Ohio Risk Assessment System (ORAS) or the Virginia Pretrial Risk Assessment Instrument (VPRAI).
Multiple states and jurisdictions use both the ORAS and VPRAI—but you wouldn't know it at first glance. Since both tools are public domain, states are free to change their names. Texas' ORAS deployment, for example, is called the Texas Risk Assessment System (TRAS), while Indiana's is the Indiana Risk Assessment System (IRAS). The ORAS implementations in Illinois and Montana are more loosely named the Adult Risk Assessment (ARA) and the Montana Offender Reentry and Risk Assessment (MORRA), respectively. Similarly, a handful of Michigan counties that use the VPRAI tool have renamed it Praxis.
Proprietary and third-party tools also share something beyond just unique names: They're created in similar ways. Developing any assessment tool requires testing predictive factors on a specific population to determine how accurately they predict outcomes like pretrial risk or the risk of recidivism—a process known as validation.
Every assessment tool in use is initially validated (i.e., developed) using a data set corresponding to a specific population. The ORAS, for instance, was based on Ohio's criminal justice data, while VPRAI was based on Virginia data. However, once built, these tools can be validated again on other populations. This allows jurisdictions to calibrate the way a tool scores—for instance, how it weighs certain risk factors—to make it more responsive to their particular demographics.
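In statistical terms, that revalidation is a model-fitting exercise: take the factors a tool scores, refit their weights on local outcome data, and check how well the resulting scores rank people by risk. The Python sketch below shows the general shape of such a check; the data file, column names and factors are invented for illustration and do not reproduce any actual instrument's items or weights.

```python
# Purely illustrative: a toy revalidation pass on hypothetical local data.
# The file, columns and factors are invented and mirror no real tool.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("local_pretrial_outcomes.csv")  # hypothetical data set
factors = ["prior_convictions", "age_at_first_arrest", "prior_fta"]
X, y = df[factors], df["failed_to_appear"]  # 1 = missed a court date

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

# Refitting the factor weights on the local population is the
# calibration step described above.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# AUC measures how well the scores rank defendants by risk:
# 0.5 is chance-level prediction; higher means better discrimination.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Local validation AUC: {auc:.2f}")
```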
Some states and jurisdictions choose to build their own tools on the theory that an instrument developed and validated on their specific population will predict more accurately than one originally built for another population.
"You get better results if you develop your instrument and test it on your own population," said James Austin, president of the JFA institute, a nonprofit criminal justice research agency that's helped a number of states including Arkansas, Colorado and Nevada create their own assessment tools. He explains that by developing proprietary instruments, states "are able to test different things [that third-party tools] didn't even look at, which may, and often does, work better" at predicting risks and/or needs among their specific population.
Alaska, for instance, decided to develop its own pretrial assessment tool because it felt third-party instruments could not account for its unique demographics. The state has a large indigenous population and a small African American community—both of which make up a disproportionate share of those incarcerated in its prisons. "I wouldn't even know where to begin if [we] were to modify an existing tool that's made for another population," says Barbara Dunham, project attorney for the Alaska Criminal Justice Commission.
In January 2018, the state rolled out the Alaska 2-Scale (AK-2S), which was developed by the Alaska Department of Corrections, the Alaska Court System, the Alaska Department of Public Safety and the Boston-based Crime and Justice Institute.
The development of the pretrial risk assessment tool was no small feat. Susanne DiPietro, executive director of the Alaska Judicial Council, says around 50 stakeholders were involved in the process, including judges, prosecutors, defense attorneys, officials from the Department of Public Safety and representatives from Alaska's tribal groups, among others.
DiPietro adds that even after the state collected all the pretrial data it needed to review, testing which factors were predictive of pretrial risk took around six to eight months. "So it was an extremely rigorous and inclusive process and that took a lot of time."
The Buyers
To be sure, not all agree with the notion that state-built tools predict better than their third-party counterparts. David D'Amora, senior policy adviser at the Council of State Governments Justice Center, argues that "one approach really isn't necessarily better than the other. You're not going to see much of a statistical difference in terms of predictive accuracy—this is assuming, by the way, the developed tool has been correctly validated and the tool that is off-the-shelf has been validated for the state within which it's going to be used."
D'Amora adds that instead of accuracy, the larger issue is "that many states don't have the resources to adequately" develop their own tools. "It's expensive [and] it's time consuming."
James Bonta, a consultant for corrections and criminal behavior who has worked with assessment tool developer Multi-Health Systems (MHS), notes that creating and validating RNA tools, for example, requires years of recidivism outcome data. Developing MHS's Level of Service Inventory—Revised (LSI-R) tool, he adds, took around four years.
A proprietary tool, therefore, can be out of reach for many states. "It's something we can't resource as a department," says Randall Bowman, public affairs executive director at the Kansas Department of Corrections. Instead, Kansas' prison system went with the LSI-R tool because of the instrument's research and support features. "The research is good, the customer support for that tool is good, the cost is reasonable, [and] the support to do training exists," Bowman explains.
To be sure, it's common for tool developers like MHS to provide additional services, such as training or IT support, at a cost to states and jurisdictions using their instruments. Some, such as the University of Cincinnati, which developed ORAS, also provide validation services for a fee.
But ultimately, it's the research behind these third-party tools that is often their biggest selling point. Like Kansas, Connecticut also chose the LSI-R in 2000, in large part because of peer reviews. "[While] I think [other risk assessment tools] are fairly consistent, there was more research behind the LSI-R at the time," says Gary Roberge, executive director of the Court Support Services Division of the Connecticut Judicial Branch.
There are times, however, when states will turn to a relatively new tool that doesn't have a long track record. In those cases, the states themselves are the ones providing the research.
Kentucky, for instance, created a tool called the KY Pretrial Risk Assessment (KPRA) in 2006. But by 2013, instead of revalidating its proprietary tool, it chose to switch to Arnold Ventures' Public Safety Assessment (PSA), a pretrial risk assessment tool that was then still in development. Tara Blair, executive officer for the Kentucky Administrative Office of the Courts, says the state entered into a research agreement with Arnold Ventures in 2013 whereby they handed over Kentucky pretrial data, "and that was part of the creation of the PSA."
From a cost-benefit perspective, the decision was straightforward. Though Kentucky was only one of several jurisdictions providing Arnold Ventures with data, the foundation agreed to eventually revalidate the state's PSA implementation, which it did in 2017. "One of the benefits with going with the PSA at that time was, since we were one of the pilot locations, they would provide researchers to revalidate the tool. So we wouldn't have to pay, or get a grant to contract with a researcher to do that," Blair adds.
Resource considerations also played a role in Michigan's decision to pilot the PSA. Ryan Gamby, management analyst at the Michigan Supreme Court, explains that one of the advantages of the PSA for the state was that, unlike most other pretrial risk assessment tools, it does not require an interview with a defendant.
"A lot of the jurisdictions in Michigan, especially in northern Michigan, are smaller jurisdictions," he says. "They don't necessarily have the resources to conduct an in-depth interview with every single defendant that comes before them for arraignment."
Similar Problems
At the end of the day, it may not matter what assessment tool a state picks, so long as it's a properly validated instrument. "Despite the bells and whistles and claims to the contrary, I don't think any of the tools do that much better a job in terms of their utility," says Edward Latessa, professor and director of the School of Criminal Justice at the University of Cincinnati, who helped develop the ORAS.
After all, most RNA and pretrial risk tools will calculate their scores using the same type of information. "When you look at the risk factors between most tools, the majority of the risk factors are the same," D'Amora says.
In fact, the RNA tools Legaltech News found in use across the country all looked at many of the same risk factors, including, but not limited to, a defendant's or offender's criminal history, education, employment, substance abuse and criminal attitude. The pretrial risk assessment tools also all looked at a defendant's age of first arrest or conviction, criminal history, and prior instances of failing to appear in court, among other characteristics.
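Under the hood, most of these instruments are additive point scales: each factor contributes a few points, and the total maps to a risk category. The sketch below shows that general shape in Python; the items, point values and cutoffs are hypothetical and reproduce no actual tool's scoring rules.

```python
# Illustrative additive point scale. Items, weights and cutoffs are
# invented; they do not reproduce any real instrument's scoring rules.
from dataclasses import dataclass

@dataclass
class Defendant:
    age_at_first_arrest: int
    prior_convictions: int
    prior_failures_to_appear: int
    employed: bool

def pretrial_score(d: Defendant) -> int:
    points = 0
    if d.age_at_first_arrest < 21:
        points += 1
    points += min(d.prior_convictions, 3)         # capped, as many scales do
    points += min(d.prior_failures_to_appear, 2)
    if not d.employed:
        points += 1
    return points

def risk_category(points: int) -> str:
    # Hypothetical cutoffs; in practice each jurisdiction sets its own.
    if points <= 2:
        return "low"
    if points <= 4:
        return "moderate"
    return "high"

d = Defendant(age_at_first_arrest=19, prior_convictions=2,
              prior_failures_to_appear=1, employed=False)
print(risk_category(pretrial_score(d)))  # -> "high"
```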
Some researchers caution against straying too far from these common and widely tested predictive factors. "I don't envision jurisdictions doing research that simply finds a factor that is unique to their jurisdictions, [and] I don't encourage that because frankly that unique factor will change, and using those types of things will more likely lead to a tool that loses its predictive value," says Marie VanNostrand, the creator of the VPRAI and the founder of pretrial justice consulting firm Luminosity.
However, beyond their predictive utility, third-party assessment tools do differ from one another in scope, design and transparency. Latessa, for instance, notes that ORAS was developed as a set of multiple tools that can be used at specific points in the criminal justice system, such as pretrial, prison intake and parole.
Still, Latessa stresses that ORAS is not "the best tool … I'm not selling anything so I don't know if there's a best one out there."
Bonta has a similar opinion of the LSI-R: "I wouldn't necessarily say the best because there are a lot of other assessment tools that predict [just] as well."
And VanNostrand doesn't think the VPRAI is as good as it gets. "I never imagined that in the year 2020 my doctoral dissertation, completed in the 1990s, would be a tool that's still in use. I'm not saying it's bad, but here we are decades later with all the advancements that we have in technology and data and research knowledge and data science, [and the fact] that we haven't made enormous improvements is disappointing."
She adds, "I don't think, by any stretch of the imagination … that we've come anywhere near to what I think is possible and should be the standard."
But what's holding back broader advancement? For RNA tools and pretrial ones alike, the answer is the same: a lack of criminal justice data. "Most jurisdictions cannot tell you the basic performance metrics of their pretrial justice system: what are their release rates, how long does it take a person to secure release, how long is the pretrial period, what are the court appearance rates, the public safety rates, by race, by gender, by neighborhood," she says.
It's a problem, however, that VanNostrand is looking to address head on. "I'm moving … sort of away from risk assessment because until we answer those questions, it's really hard to tell what impact a risk assessment is having, [and] it's really hard to tell how our justice system is performing. And at this stage I find that highly unacceptable."
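Once a jurisdiction does collect case-level data, the metrics VanNostrand lists are straightforward aggregations. A hypothetical sketch, again with invented file and column names:

```python
# Hypothetical sketch: basic pretrial performance metrics from a
# case-level table. File and column names are invented for illustration;
# released_pretrial, appeared_in_court and no_new_arrest are assumed boolean.
import pandas as pd

cases = pd.read_csv("pretrial_cases.csv")  # hypothetical data set

overall_release_rate = cases["released_pretrial"].mean()
median_days_to_release = cases.loc[
    cases["released_pretrial"], "days_to_release"
].median()

# Appearance and public-safety rates broken out by group, the kind of
# reporting VanNostrand says most jurisdictions cannot yet produce.
by_group = cases.groupby(["race", "gender"]).agg(
    appearance_rate=("appeared_in_court", "mean"),
    no_new_arrest_rate=("no_new_arrest", "mean"),
    n_cases=("case_id", "count"),
)

print(f"Release rate: {overall_release_rate:.1%}")
print(f"Median days to release: {median_days_to_release}")
print(by_group)
```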
Tomorrow we'll look at how states validate their tools to ensure they accurately predict criminal justice outcomes—and why what "accuracy" and "risk" means is relative to each jurisdiction.