Artificial Intelligence 

While there may come a time when AI and privacy can learn to peacefully coexist in the U.S., it doesn't appear to be on the horizon anytime soon. The "What You Don't Know Will Hurt You: Artificial Intelligence vs. Individual Privacy Rights" session at Legalweek 2020 in New York delved into the tension between the desire to maximize AI's benefits and the lingering gaps in the technology that have yet to be addressed by the law.

Most of those problems begin with data, which is both the fuel that makes AI function and a protected commodity under privacy laws such as the EU's General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).

However, panelists alluded to the reality that not all countries share the same privacy concerns, which makes for an uneven playing field when it comes to technological competition.

"Can a society that restricts information compete in a big data society with countries that have more data available to them?" asked Gordon Calhoun, a partner and chair of the e-discovery, information governance and compliance practice at Lewis Brisbois Bisgaard & Smith.

Even before AI factors into the equation, a big data society by its very nature may be incongruous with a global landscape increasingly dominated by privacy laws. Calhoun pointed out how quickly data tends to multiply once it is acquired, copied, stored or passed from hand to hand. While regulations like the GDPR may tout a user's right to be forgotten, that may be easier said than done.

"The ability to be left alone, the ability to be forgotten is aspirational," Calhoun said.

AI complicates the problem because it relies on data to learn and derive insights, something many organizations operating in the U.S. have already taken advantage of to streamline fundamental business practices and monitor productivity. But the data needed to drive those insights may be drying up.

For example, Calhoun pointed to technology like sentiment recognition, which can be used to monitor factors such as an employee's facial expression to infer their overall level of satisfaction on the job. The tool may be particularly useful to tech companies and other industries that have patents and trade secrets to protect and would likely benefit from early warning signs that a worker may be considering jumping ship.
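To make the privacy stakes concrete, here is a minimal sketch of how such a sentiment-recognition pipeline might be structured. The classify_emotion stub is a hypothetical stand-in for a trained facial-expression model, and the scores and field names are illustrative assumptions, not any vendor's actual product.

```python
# Minimal sketch of a sentiment-recognition pipeline like the one described
# above. classify_emotion() is a hypothetical stand-in for a trained
# facial-expression model; the rest shows how per-frame scores might be
# rolled up into a crude job-satisfaction estimate.
from dataclasses import dataclass
from statistics import mean


@dataclass
class Frame:
    timestamp: float
    pixels: bytes  # raw image data; unused by the stub below


def classify_emotion(frame: Frame) -> dict[str, float]:
    """Hypothetical stand-in: a real system would run a trained model here."""
    return {"happy": 0.6, "neutral": 0.3, "angry": 0.1}


def satisfaction_score(frames: list[Frame]) -> float:
    """Average the 'happy' probability across frames as a satisfaction proxy."""
    return mean(classify_emotion(f)["happy"] for f in frames)


frames = [Frame(timestamp=float(i), pixels=b"") for i in range(10)]
print(f"estimated satisfaction: {satisfaction_score(frames):.2f}")
```

Note that every frame in such a pipeline is biometric data tied to an identifiable employee, which is precisely what brings it within the reach of laws like the CCPA.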

But however useful, such applications may soon find themselves running afoul of the law. After a one-year grace period, the same protections that apply to other kinds of personal data will apply to employee information under the CCPA. Privacy attorney Jarno Vanto told Legaltech News last October that some of those tools may have to be tweaked to avoid scuffles with the law.

"I think that's going to force companies to rethink how they are using those tools. They might be moving away from more personally identifying stuff to more aggregate tracking," Vanto said.

But the real stumbling block for AI's continued growth may have more to do with its failures than its successes. Panelist Joe Gervais, chief information security officer at Brightside Benefit, indicated that AI might not yet be as smart as people believe when it comes to deriving context. He alluded to AI-based solutions that an employer could use to screen potential candidates based on their Twitter feeds. Tweets in support of a charity with an expletive in its name could be misinterpreted by an AI as an indication that a job applicant has problems with anger.

"We're worried about the tyranny of artificial stupidity," Gervais said.
