Whether it is banks analyzing the characteristics of a caller's voice to prevent fraud, financial institutions authenticating callers through their voiceprints, health insurers improving their customers' online experience, or retailers empowering consumers through virtual try-on, the road to defending litigation often starts with the best of intentions. For all these laudable applications of new technologies, including Artificial Intelligence and, at times, biometrics, companies need to be mindful that, however well-intentioned, they may find themselves subject to serious regulatory and litigation risk across the globe if they do not consider key regulatory areas when deploying this technology.

Most troubling, a rapidly growing number of actions are being filed against some of the world's largest companies, including household names, alleging consumer harm caused by implementing these advanced capabilities and technologies on websites and apps without appropriate disclosures and consent. This article highlights how claimants/plaintiffs are using regulations to bring these claims in the U.S., U.K. and EU and discusses ways to mitigate that risk.