Welcome back for another week of What's Next, where we report on the intersection of law and technology. This week, the Partnership on AI's Peter Eckersley warns us about the use of half-baked risk assessments in court that use artificial intelligence. Plus, Singapore takes a cue from our commander in chief and goes after fake news, or rather “online falsehoods,” a significantly less catchy turn of phrase. And the U.S. Department of the Treasury really wants cryptocurrency businesses to take it seriously.

Peter Eckersley of Partnership on AI.

Experts Object to AI in the Courtroom

Last August, California courts began ditching cash bail for a statistical risk assessment tool. When Trump approved the First Step Act four months later, the bipartisan prison reform gave the attorney general a seven-month deadline to crank out risk assessment software predicting inmate recidivism. Since then, jurisdictions around the country have been integrating artificially intelligent risk assessment tools in courts. This wave of reliance on algorithms in criminal justice has set many AI specialists on edge. Last month, the Partnership on AI, a 50-member group of tech companies, academics and nonprofits working toward best practices in the space, released research laying out what needs to happen before these predictive tools find themselves in the courtroom.

We caught up with Peter Eckersley, the organization's director of research, to find out why big data might not be the answer to criminal justice reform.

►►What are the biggest concerns about validity, accuracy and bias of the tools?  As a basic matter, it's really hard to predict what people are going to do in the future. The most accurate predictions you can get for recidivism are in the low 60% range. There's significant potential injustice in making high-stakes decisions about people's futures based on very inaccurate statistical models. Perhaps more troublingly, we also know those models have huge biases trained into them through the machine learning process, and that leads to a demonstrated, significant disparity in the effects on defendants of different races.

Across our partnership, a significant number of organizations were deeply skeptical that risk assessment tools could ever be appropriate for decisions to detain, while other voices felt that you could conceivably make AI tools accurate enough and unbiased enough for this kind of decision making. The way we tried to unify those views was to identify shared requirements, where everyone agreed on the ways the tools were falling short. And the list of requirements ended up being quite long and daunting.

►►Is there a way to create a valid risk assessment model with the data we have on recidivism? We know that different populations in the U.S. are stopped, searched, charged and convicted at different rates. The disparities are enormously varied for different types of crimes in different locations and under different counties' law enforcement practices. Obtaining fair statistics about the true probability of a defendant reoffending in a particular place would require not only having AI tools be able to look into an individual's mind and future, but also being able to disentangle all these complex sampling biases in the very limited datasets that are available.

Some police departments do a great job of keeping records about arrests and convictions, others have very poor record keeping, and from time to time, you hear about data being completely fabricated. The notion that America's courts might incarcerate people based on predictions from that data is extremely concerning.

Another misconception is that machine learning predictions are only biased if the data they're trained on is biased. It turns out there are other profound sources of bias, for instance when the datasets you try to make predictions from don't contain the true causes of the outcome.
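To make that second failure mode concrete, here is a minimal, hypothetical simulation (plain Python with NumPy; the groups, rates and variable names are invented for illustration, not drawn from any real risk assessment tool). The outcome depends entirely on a factor the dataset never records, so a model fit to the remaining features can only learn group base rates:

```python
# A minimal sketch of bias from an omitted cause. All numbers and
# names here are hypothetical, chosen only to illustrate the point.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two groups; group membership has no causal effect on the outcome.
group = rng.integers(0, 2, size=n)

# An unobserved stabilizing factor -- the true cause of the outcome --
# happens to be unevenly distributed across the two groups.
true_cause = rng.random(n) < np.where(group == 0, 0.6, 0.3)

# The outcome depends ONLY on the hidden factor: an identical
# 20%-vs-50% process for every individual, regardless of group.
reoffend = rng.random(n) < np.where(true_cause, 0.2, 0.5)

# A "model" denied access to true_cause can do no better than
# predicting each group's observed base rate.
for g in (0, 1):
    rate = reoffend[group == g].mean()
    print(f"predicted risk for group {g}: {rate:.2f}")
```

In these toy numbers, both groups reoffend at identical rates once the hidden factor is accounted for, yet the model scores one group roughly nine points higher, a disparity created entirely by the missing variable rather than by anything in the training labels.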

►►What needs to be done to get close to deploying these tools?  Jurisdictions deploying risk assessment tools need to take responsibility for their consequences, and standards need to be set around these tools. We identified 10 areas where our partners in the AI community could articulate major requirements. We believe no current tools would initially pass our standard-setting process. It's also really important to be extremely cautious about AI in such high-risk settings, where we can be more or less certain that things are going to go very wrong. For that reason, many of our partners were strongly of the view that while it could be acceptable to release people from jail based on a risk score, you should never be detaining people based on a high-risk score. Those decisions to detain should always be made by humans in individualized hearings. —Alaina Lancaster



Dose of Dystopia: Singapore's 'Ministry of Truth'

Is it an “Orwellian” curb on free speech, or a necessary check when fake social media postings can alter a country's political landscape and lead to gruesome mob violence? That's been the crux of the public debate over Singapore's new law outlawing fake news. The “Protection from Online Falsehoods and Manipulation” bill, approved last week, seems mainly targeted at online platforms like Facebook and YouTube.

So who gets to determine what's true and false, you ask? Well, that one should be easy: the government, of course! As Bloomberg explains, there will be an appeals process through the courts (which Singaporean officials stress will be quick and inexpensive), but postings will have to come down first while the matter is litigated.

How broadly the law would be enforced is another open question, though I'm a little skeptical that Singapore will be going after Twitter to clean up the 10,000+ false or misleading tweets that President Trump has reportedly made. The law also would apparently apply to chat and direct messaging applications … so no more lying to chat buddies, I guess? —Ben Hancock



Treasury Tips Its Hand on Cryptocurrency Enforcement

Sigal Mandelker, the U.S. Department of the Treasury's undersecretary for terrorism and financial intelligence, warned on Monday that the department is serious about cracking down on digital currency businesses that don't comply with trade sanctions, anti-money laundering and bank secrecy laws, saying “bad actors are trying to leverage virtual currencies to make an end-run around our laws and regulations.”

James Gatto, blockchain and digital currency team leader at Sheppard Mullin in Washington, D.C., said digital currency businesses had better pay attention to that message, because issuing guidances and making public remarks are often precursors to government enforcement actions. “Typically I would not be surprised to see enforcements come after this activity,” Gatto said.

Of Mandelker's speech and recent guidances issued by the Treasury Department, including its Office of Foreign Assets Control, Gatto said: “When you see all that activity, it is a message. We always say you have to be in compliance in the first place, but if you think that there hasn't been enforcement or [that] other people are doing it and take solace in that, you are probably sadly mistaken.”

The Treasury Department's Financial Crimes Enforcement Network estimates that $1.5 billion in funds was stolen in just the past two years through hacking attacks on virtual currency exchangers and administrators, carried out to generate revenue for bad actors, and that 47,000 suspicious activity reports mentioning virtual currency have been filed, Mandelker said in her speech.

Mandelker's remarks came during CoinDesk's Consensus, an annual New York City conference of companies, investors and educators in blockchain technology and cryptocurrency, where she stressed that the Treasury Department considers enforcement of banking and trade sanctions regulations an important component of protecting national security. Mandelker served as deputy assistant attorney general in the criminal division of the U.S. Department of Justice from 2006 to 2009, and was a partner at Proskauer Rose before assuming her current post in June 2017. —MP McQueen

 

On the Radar

►►Tech Execs Could Have to Pay Up  After major privacy blunders by Facebook and other U.S. tech companies, the Zuckerbergs of the world might have to pony up. Federal Trade Commission officials told Congress that executives could face individual fines for cybersecurity and privacy breaches as a way to hold deep-pocketed corporations accountable. Read more from Caroline Spiezio.

►►Lawyers to Join the Gig Economy  Littler Mendelson has launched an app to help answer clients' day-to-day questions without requiring them to beef up their legal departments. The app uses technology and on-demand legal expertise to offer real-time insights. The firm says the cost of the app's features will be less than hiring new in-house counsel. Read more from Dan Packel.

►►The British Are Coming for IoT Dominance  U.K. Digital Minister Margot James announced plans to legislate cybersecurity standards for IoT devices. The proposed regulations would require that device passwords be unique and not resettable to a universal factory default, and include guidance around security updates. These upcoming rules could set the bar for IoT laws in the U.S. Read more from Frank Ready.