In the digital age, a technology's benefits can often seem too good to be true. Algorithmic and artificial intelligence-based decision-making programs, for instance, can help schools better ensure academic success and help insurance companies set more accurate policy pricing. But these benefits come at a price: the steep challenge of understanding, limiting, and governing the tool and how it is used.

At the "Algorithmic Malpractice & Lawfare" session of Legalweek 2020 in New York, a panel of legal and tech experts examined this paradox at the heart of today's most cutting-edge innovations.

Jordan Thompson, deputy general counsel and privacy officer at the New York Institute of Technology, noted that his school developed an algorithmic tool to help "identify students who would be likely to drop out and not be successful, and a lot of time that analysis is being done before they are even setting foot on our campus."

The analysis, which uses such data as parental income and socioeconomic status, is performed only on students who have been accepted to the school, and is done to help match students with assistance programs. "The reason we are doing it is we want to retain our students at a higher percentage," Thompson said.
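
To make the kind of tool Thompson describes concrete, the sketch below shows a minimal, hypothetical dropout-risk model in Python. The feature set, numbers, and scoring threshold are illustrative assumptions, not NYIT's actual system; it simply shows how a classifier trained on historical outcomes could flag admitted students for outreach before they arrive on campus.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical feature matrix: each row is an admitted student, with
# columns such as parental income (in $10k) and a socioeconomic-status index.
X = np.array([
    [4.5, 0.3],
    [12.0, 0.8],
    [3.0, 0.2],
    [9.5, 0.7],
])
# Historical outcomes: 1 = student left the school, 0 = student was retained.
y = np.array([1, 0, 1, 0])

# Fit a simple logistic-regression classifier on past cohorts.
model = LogisticRegression().fit(X, y)

# Score a newly admitted student before they set foot on campus; a high
# probability would flag them for assistance programs.
new_student = np.array([[5.0, 0.35]])
dropout_risk = model.predict_proba(new_student)[0, 1]
print(f"Estimated dropout risk: {dropout_risk:.2f}")
```

The risk in such a setup, as Thompson goes on to note, is less the model itself than what a decentralized institution does with its scores.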

But while the tool helps the school address its students' needs, it's not without risks. Thompson noted that something that "gets us anxious is the misuse of this data. … [With] administering a program to succeed, it's hard when you have a decentralized system such as a university where you have faculty and a bunch of people who work under faculty who are not necessarily listening to [the] administration."

Thompson said that some of that data misuse can include situations where professors stigmatize certain students based on the algorithmic analysis, which could run afoul of the Family Educational Rights and Privacy Act. Thankfully, "we haven't experienced a lot of [that] yet," Thompson said.

Outside of the education space, algorithm-based insurance pricing tools can also be tricky to implement. Stephen Palley, a partner at Anderson Kill, recalled an insurance company he knew that "developed a really good machine-learning predictive algorithm that, using a smaller data set, could more accurately price insurance." The problem: "They said we don't actually understand how the software reaches its result, it's a black box," Palley said.
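
The "black box" problem Palley describes can be illustrated with a small, hypothetical pricing sketch. The features, premiums, and model choice below are assumptions for illustration only: an ensemble of hundreds of trees may quote premiums well, yet explaining any single quote is far harder than producing it.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical policyholder features: age, prior claims, vehicle value ($k).
X = np.array([
    [25, 2, 18.0],
    [40, 0, 30.0],
    [33, 1, 22.0],
    [58, 0, 45.0],
    [21, 3, 15.0],
    [47, 1, 28.0],
])
# Historical annual premiums ($) the insurer treats as accurate pricing.
y = np.array([1800, 900, 1300, 950, 2100, 1100])

# An ensemble of hundreds of decision trees can price a policy well...
model = GradientBoostingRegressor(n_estimators=300, max_depth=3).fit(X, y)
quote = model.predict(np.array([[30, 1, 20.0]]))[0]
print(f"Quoted premium: ${quote:.0f}")

# ...but explaining *why* it quoted that number is harder. Global feature
# importances are the easy part; tracing one quote through 300 trees, as a
# regulator auditing for transparency might demand, is not.
print(dict(zip(["age", "prior_claims", "vehicle_value"],
               model.feature_importances_.round(2))))
```

That gap between accurate output and explainable output is exactly what regulators are starting to probe.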

Not knowing how the tool works, however, can render it unusable. Palley noted that some regulators, such as the New York Department of Financial Services, "reserve [the] right to examine and audit" algorithms and require them to be transparent. "The takeaway with this and other developing technology is you can't skirt responsibility by saying you don't understand the software," he added.