Artificial intelligence, or AI, has grown increasingly popular for its ability to process large sets of data. The term "AI" describes algorithms that can be taught to identify patterns or predict outcomes. If the algorithm is primed with a teaching set of data, then it can evaluate new sets of data based on the desired outcome. AI has been used to process patient data, biometric data, facial recognition data and geolocation data by various industries. However, it has fallen prey to criticism for potentially biased results and alleged invasion of user privacy.

Now, AI industry leaders are applying their technology to new issues raised by the novel coronavirus (COVID-19). Results show that AI can aid in combating COVID-19 and improve our response to future pandemics. However, to reach AI's full potential in a health crisis, access to vast quantities of patient data is necessary. This article explores the benefits and risks of a regulatory framework allowing temporary access to patient data for the purpose of combating a global pandemic.

AI Can Track Disease Spread, Diagnose Patients and Discover New Treatments

The use of AI to combat COVID-19 began on Dec. 31, 2019, when BlueDot, a global intelligence database company, sent out the first COVID-19 warning instructing its customers to avoid Wuhan, China. Using data drawn only from flight itineraries and mass media sources, BlueDot's AI recognized the start of the pandemic without access to information from the Chinese government. BlueDot's warning came several days before those from the U.S. Centers for Disease Control and Prevention (CDC) and the World Health Organization (WHO), neither of which used AI to initially detect the virus.

Since then, several AI algorithms have been implemented to aid COVID-19 treatment through early diagnosis. For instance, UC San Diego Health developed an algorithm that uses a deidentified set of chest X-rays depicting patients with various diseases, including COVID-19, to identify COVID-19-induced pneumonia early on. On a broader scale, Qure.ai, an AI firm in Mumbai, India, has retooled its AI-powered X-ray system, called qXR, to detect COVID-19. In a study of 11,000 images, the algorithm determined with 95% accuracy whether a patient had COVID-19 or another illness. Qure.ai is now working with hospitals in the U.K., Italy, Mexico and the United States on an investigative basis.

DarwinAI and the University of Waterloo in Canada have also repurposed their set of algorithms called COVID-Net to diagnose COVID-19 in chest X-rays. Scientists primed the system with 5,941 images taken from 2,839 patients with various lung conditions, including COVID-19. Results show that COVID-Net can detect COVID-19 in images with 88.9% accuracy.

AI has also accelerated COVID-19 drug development. While a vaccine might not reach the market in 2020, researchers have applied algorithms to identify already-existing drugs to treat COVID-19. For instance, BenevolentAI, an AI startup, used its platform to identify drugs that could mitigate the "cytokine storm," an immune system overreaction to COVID-19. Cytokine storms cause immune cells to flood regions of the body to attack the virus, producing local inflammation that can seriously harm or kill patients. In only three days, the BenevolentAI system identified baricitinib, an FDA- and EMA-approved arthritis drug, as a potential treatment. The National Institute of Allergy and Infectious Diseases has since started a large randomized trial in COVID-19 patients.

Other organizations, meanwhile, have started developing new COVID-19 medications. Argonne National Laboratory in Illinois is applying AI and four supercomputers to develop COVID-19 treatments. Using this combination, researchers narrowed a billion potential drug molecules to 30 finalists and are now evaluating those finalists to determine which show the most promise for dedicated trials.

In combating COVID-19, AI has shown its potential to increase the speed and accuracy of diagnosis and research. Using AI, scientists have accomplished in mere days what had previously taken months, conserving resources and focusing provider energy to improve treatment outcomes.

Barriers to Harnessing AI's Full Potential

Nonetheless, the benefit AI has provided is arguably a fraction of what could be achieved. AI operates most effectively with access to vast amounts of data. Current data privacy laws, particularly in the United States and the EU, constrain understanding of the virus by limiting access to identifiable patient data.

Though largely unregulated by existing data privacy regimes, the use of deidentified patient data disadvantages AI algorithms from the start: such data is limited in quantity and may not reflect variations in the patient population. Governments have provided select patient data sets for AI use (the U.S. government, for instance, has supplied only one). AI algorithms must mine these sets to diagnose patients, limiting their ability to identify new COVID-19 characteristics present in only some patients.

In the United States, the Health Insurance Portability and Accountability Act (HIPAA) protects most, but not all, patient health data. HIPAA prevents covered entities, such as hospitals and many health care providers, from sharing identifiable patient data without the individual's authorization, unless certain exceptions are met. Covered entities that fail to implement HIPAA-mandated safeguards may face regulatory enforcement, fines, and liability. Though many states have ushered in their own privacy laws, the United States lacks a federal privacy law encompassing both patient data not covered by HIPAA and other types of user data. While these regulations play an important role in protecting patient information, they may impede effective use of AI in global crises by preventing access to crucial data.

The EU's General Data Protection Regulation (GDPR) is currently the world's most stringent regulation protecting citizens' personal and health data. It imposes significant fines for processing personal data without the individual's consent. The GDPR applies to any company collecting and processing data of individuals in the EU, whether the company operates within the EU or elsewhere. Its application is further complicated by the fact that several European data protection authorities have taken inconsistent views on how to apply the GDPR during COVID-19.

While both HIPAA and the GDPR include exceptions for emergency situations (e.g., lifesaving measures or use of data for the public good), neither addresses the use of AI in a pandemic. COVID-19, however, could prompt regulators to classify pandemic-related data use as falling within these exceptions.

A Unique Regulatory Framework That Allows AI to Combat Global Pandemics

The EU and United States have both released proposed AI-specific regulations for public comment, but neither has issued binding rules. COVID-19 can serve as an impetus for countries to:

  • Create specialized privacy laws allowing algorithms to access live patient data for the emergent purposes of tracking the spread of a pandemic and increasing emergency preparedness without violating existing privacy regulations; and
  • Globally allow, in worldwide health emergencies, shared access to patient data to facilitate country preparedness and limit disease spread.

Allowing such access could slow the spread of a pandemic by alerting countries faster, enabling them to limit travel and stockpile supplies, while accelerating the development of treatments and thereby improving patient outcomes. However, a regulatory system that reduces patient privacy protections could also be problematic, particularly from an ethical standpoint. Recent alleged misuses of consumer data by companies have resulted in the largest fines and settlements in U.S. and EU history, reinforcing AI-phobic approaches.

Corporate collaboration with regulators will be crucial and may lead to novel, flexible regulatory approaches. Through collaboration, corporations can advocate for streamlined regulatory compliance. Patchwork regulation in the EU and United States has proven difficult to navigate and has led companies to adopt policies that comply with the most stringent regulations. Likewise, if individual nations implement varying COVID-19 protocols, compliance will likely mean meeting the most stringent standard. This approach risks undermining expedited data access during future pandemics. Corporations, meanwhile, can assuage public concern by adopting guiding principles that ensure appropriate data use.

From a liability perspective, there is a strong argument for immunity from litigation or enforcement for companies using private data in good faith to combat global crises. However, such immunity would be a large step in the current legal landscape, where companies are frequently investigated, fined or sued for allegedly mishandling consumer data. Companies in the United States have already warned of the chilling effect litigation has had on efforts to combat COVID-19. Regulators in the United States and the U.K. have discussed regulatory leniency toward companies addressing the COVID-19 crisis, but have set no firm rules. Through collaboration, companies and regulators could strike a balance by offering immunity only for good faith pandemic-related data use under specified conditions.

AI has the potential to revolutionize the global response to future pandemics. Without access to large, varied sets of patient data, however, it cannot perform at its peak. Governments, working with companies, could create exceptions to privacy regulations and agree globally to share patient data during crises. If carefully constructed, with liability preserved for data misuse, such regulations can provide the data necessary to improve pandemic responses while protecting individuals' privacy.

Mildred Segura is a partner in Reed Smith's life sciences health industry group in Los Angeles, practicing in the area of complex products liability litigation, and is a key member of the firm's artificial intelligence working group. She can be reached at [email protected]

Kimberly Gold is a partner in the firm's IP, tech and data group in New York. Her practice focuses on data privacy, cybersecurity, digital health and transactional matters. She can be reached at [email protected]

Wim Vandenberghe is an EU regulatory partner in the firm's Brussels office, focusing on the life sciences sector. He can be reached at [email protected].

Reed Smith Associates Brian Cadigan and Corinne Fierro contributed to this article.