A Regulatory Framework That Harnesses the Power of AI to Combat Global Pandemics
Artificial intelligence, or AI, has grown increasingly popular for its ability to process large sets of data. The term "AI" describes algorithms that can be taught to identify patterns or predict outcomes: once primed with a teaching set of data, an algorithm can evaluate new data against the desired outcome. Various industries have used AI to process patient data, biometric data, facial recognition data and geolocation data. However, the technology has also drawn criticism for potentially biased results and alleged invasions of user privacy.
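For readers unfamiliar with that train-then-predict workflow, the short Python sketch below illustrates it. The measurements, labels and model choice are purely illustrative assumptions, not any particular vendor's system.

```python
# A minimal sketch of the "prime with a teaching set, then evaluate new data"
# pattern described above. The measurements, labels and model choice are
# purely illustrative assumptions.
from sklearn.linear_model import LogisticRegression

# Teaching set: each row holds measurements; each label is the known outcome.
training_features = [[37.0, 0], [39.5, 1], [36.8, 0], [40.1, 1], [38.9, 1], [36.5, 0]]
training_labels = [0, 1, 0, 1, 1, 0]  # 0 = outcome absent, 1 = outcome present

model = LogisticRegression()
model.fit(training_features, training_labels)  # "prime" the algorithm

# New, unseen data is then evaluated against the learned pattern.
print(model.predict([[39.8, 1]]))  # e.g. -> [1]
```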
Now, AI industry leaders are applying their technology to new issues raised by the novel coronavirus (COVID-19). Results show that AI can aid in combating COVID-19 and improve our response to future pandemics. However, to reach AI's full potential in a health crisis, access to vast quantities of patient data is necessary. This article explores the benefits and risks of a regulatory framework allowing temporary access to patient data for the purpose of combating a global pandemic.
AI Can Track Disease Spread, Diagnose Patients and Discover New Treatments
The use of AI to combat COVID-19 began on Dec. 31, 2019, when BlueDot, a global intelligence database company, sent out the first COVID-19 warning, instructing its customers to avoid Wuhan, China. Using data drawn only from flight itineraries and mass media sources, BlueDot's AI recognized the start of the pandemic without being privy to information from the Chinese government. BlueDot's warning came several days before those from the U.S. Centers for Disease Control and Prevention (CDC) and the World Health Organization (WHO), neither of which used AI to initially detect the virus.
Since then, several AI algorithms have been implemented to treat COVID-19 through early diagnosis. For instance, U.C. San Diego Health developed an algorithm that uses a deidentified set of chest X-rays depicting patients with various diseases, including COVID-19, to identify patients with COVID-19-induced pneumonia early on. On a broader scale, Qure.ai, an AI firm in Mumbai, India, has retooled its AI-powered X-ray system, called qXR, to detect COVID-19. In a study of 11,000 images, the system determined with 95% accuracy whether a patient had COVID-19 or another illness. Qure.ai is now working with hospitals in the U.K., Italy, Mexico and the United States on an investigative basis.
DarwinAI and the University of Waterloo in Canada have also repurposed their set of algorithms called COVID-Net to diagnose COVID-19 in chest X-rays. Scientists primed the system with 5,941 images taken from 2,839 patients with various lung conditions, including COVID-19. Results show that COVID-Net can detect COVID-19 in images with 88.9% accuracy.
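The diagnostic systems described above are proprietary, but the underlying pattern, training an image classifier on labeled chest X-rays and then applying it to new scans, can be sketched generically. The Python (PyTorch) sketch below is an assumption-laden illustration of that pattern only; it is not COVID-Net or qXR, and the backbone, class labels and dummy data are placeholders.

```python
# Generic image-classification sketch illustrating the train-then-classify
# pipeline described in the article. Not COVID-Net or qXR; the backbone,
# classes and data are placeholders.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # illustrative labels: normal, non-COVID pneumonia, COVID-19

model = models.resnet18()                                # generic CNN backbone
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # replace classifier head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch; real work uses curated X-ray datasets.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()

# Inference on a new image yields a predicted class index.
model.eval()
with torch.no_grad():
    predicted_class = model(torch.randn(1, 3, 224, 224)).argmax(dim=1).item()
print(predicted_class)
```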
AI has also accelerated COVID-19 drug development. While a vaccine might not reach the market in 2020, researchers have applied algorithms to identify already-existing drugs to treat COVID-19. For instance, BenevolentAI, an AI startup, used its AI platform to identify drugs to mitigate the "cytokine storm," an immune system overreaction to COVID-19 in the body. Cytokine storms cause immune cells to enter regions of the body to attack the virus, resulting in local inflammation that can seriously harm or kill patients. In only three days, the BenevolentAI system identified baricitinib, an FDA- and EMA-approved arthritis drug, as a potential treatment. The National Institute of Allergy and Infectious Diseases has since started a large randomized trial in COVID-19 patients.
Other organizations, meanwhile, have started developing new COVID-19 treatments. Argonne National Laboratory in Illinois is applying AI and four supercomputers to the effort. Using this combination, researchers reduced a billion potential drug molecules to 30 finalists and are now evaluating those finalists to determine which show the most promise for dedicated trials.
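Conceptually, that kind of narrowing is a score-and-filter funnel: a trained model assigns each candidate a score, and only the highest-scoring handful advance to closer study. The toy Python sketch below illustrates the funnel shape only; the scoring function, candidate names and counts are invented for illustration and bear no relation to Argonne's actual methods.

```python
# Toy "score and narrow" funnel: score every candidate molecule, keep the top 30.
# The scoring function and candidate identifiers are invented for illustration.
import heapq
import random

def predicted_binding_score(molecule_id: str) -> float:
    """Stand-in for a trained model's predicted affinity for a viral target."""
    random.seed(molecule_id)  # deterministic per molecule, for the demo only
    return random.random()

candidates = [f"mol-{i}" for i in range(100_000)]  # the real funnel started near a billion

# Keep only the top 30 scorers for downstream evaluation.
finalists = heapq.nlargest(30, candidates, key=predicted_binding_score)
print(finalists[:5])
```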
In combating COVID-19, AI has shown its potential to increase the speed and accuracy of diagnosis and research. Using AI, scientists have accomplished in mere days what previously took months, conserving resources and freeing providers to focus on improving treatment outcomes.
Barriers to Harnessing AI's Full Potential
Nonetheless, the benefit AI has provided is arguably a fraction of what could be achieved. AI operates most effectively with access to vast amounts of data. Current data privacy laws, particularly in the United States and the EU, constrain understanding of the virus by limiting access to identifiable patient data.
Though largely unregulated by existing data privacy regimes, deidentified patient data disadvantages AI algorithms from the start: it is limited in quantity and may not reflect variation in the patient population. Governments have provided only select patient data sets for AI use (the U.S. government, for instance, has supplied just one). AI algorithms must mine these limited sets to diagnose patients, constraining their ability to identify new COVID-19 characteristics present in only some patients.
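For illustration, the brief Python sketch below shows one simplified view of what "deidentified" means in practice: direct identifiers are stripped from a record before it is shared, which is also part of why such data can carry less signal for an algorithm. The field names and identifier list are assumptions for this example, not a statement of the HIPAA Safe Harbor standard.

```python
# Simplified illustration of deidentification: remove direct identifiers
# before sharing a record. Field names and the identifier list are assumptions.
DIRECT_IDENTIFIERS = {"name", "address", "phone", "email", "ssn", "mrn"}

def deidentify(record: dict) -> dict:
    """Return a copy of a patient record with common direct identifiers removed."""
    return {key: value for key, value in record.items() if key not in DIRECT_IDENTIFIERS}

patient = {
    "name": "Jane Doe",
    "mrn": "12345",
    "age": 54,
    "chest_xray_finding": "bilateral opacities",
    "covid_pcr": "positive",
}
print(deidentify(patient))
# {'age': 54, 'chest_xray_finding': 'bilateral opacities', 'covid_pcr': 'positive'}
```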
In the United States, the Health Insurance Portability and Accountability Act (HIPAA) protects most—but not all—patient health data. HIPAA prevents covered entities, such as hospitals and many health care providers, from sharing identifiable patient data without the individual's authorization, unless certain exceptions are met. Covered entities that fail to implement HIPAA-mandated safeguards may face regulatory enforcement, fines and liability. Though many states have ushered in their own privacy laws, the United States lacks a federal privacy law encompassing both patient data not covered by HIPAA and other types of user data. While these regulations play an important role in protecting patient information, they may impede effective use of AI in global crises by preventing access to crucial data.
The EU's General Data Protection Regulation (GDPR) is currently the world's most stringent regulation protecting citizens' personal and health data. It imposes significant fines for processing personal data without the individual's consent. The GDPR applies to any company collecting and processing the data of individuals in the EU, whether the company operates within the EU or elsewhere. Its application is further complicated by the fact that several European data protection authorities have taken inconsistent views on how to apply the GDPR during COVID-19.
While both HIPAA and the GDPR include exceptions for emergency situations (e.g., lifesaving measures or use of data for the public good), neither addresses the use of AI in a pandemic. COVID-19, however, could prompt regulators to classify the use of data during pandemics as falling within those exceptions.
A Unique Regulatory Framework That Allows AI to Combat Global Pandemics
The EU and the United States have both released AI-specific regulatory proposals for public comment, but neither has issued binding regulations. COVID-19 can serve as an impetus for countries to:
- Create specialized privacy laws allowing algorithms to access live patient data for the emergent purposes of tracking the spread of a pandemic and increasing emergency preparedness without violating existing privacy regulations; and
- Globally allow, in worldwide health emergencies, shared access to patient data to facilitate country preparedness and limit disease spread.
Allowing such access could decrease the spread of a pandemic by alerting countries faster, enabling them to limit travel and stockpile supplies, while accelerating the development of treatments and thereby improving patient outcomes. However, a regulatory system that reduces patient privacy protections could also be problematic, particularly on ethical grounds. Recent alleged misuses of consumer data by companies have resulted in the largest fines and settlements in U.S. and EU history, reinforcing AI-phobic approaches.
Corporate collaboration with regulators will be crucial and may lead to novel, flexible regulatory approaches. Through collaboration, corporations can advocate for regulations that are practical to comply with. Patchwork regulation in the EU and United States has proven difficult to navigate and has led companies to adopt policies that comply with the most stringent requirements. Likewise, if individual nations implement varying COVID-19 protocols, compliance will likely mean meeting the most stringent standard, an approach that risks undermining expedited access during future pandemics. Corporations, meanwhile, can assuage concern by adopting guiding principles that ensure appropriate data use.
From a liability perspective, there is a strong argument for immunity from litigation or enforcement for companies using private data in good faith to combat global crises. However, immunity from litigation or enforcement would be a large step in the current legal landscape, where companies are frequently investigated, fined or sued for allegedly mishandling consumer data. Companies in the United States have already warned of the chilling effect litigation has imposed on efforts to combat COVID-19. Regulators in the United States and the U.K. have discussed regulatory leniency toward companies addressing the COVID-19 crisis, but have set no firm rules. Through collaboration, companies and regulators could strike a balance by offering immunity only for good faith pandemic-related data use under specified conditions.
AI has the potential to revolutionize the global response to future pandemics. However, without access to large and varied patient data sets, it cannot perform at its peak. Governments, working with companies, could create exceptions to privacy regulations and agree globally to share patient data during crises. If constructed carefully, with liability preserved for data misuse, such regulations could provide the data necessary to improve pandemic responses while protecting individuals' privacy.
Mildred Segura is a partner of Reed Smith's life sciences health industry group, in Los Angeles, practicing in the area of complex products liability litigation and is a key member of the firm's artificial intelligence working group. She can be reached at [email protected].
Kimberly Gold is a partner in the firm's IP, tech and data group in New York. Her practice focuses on data privacy, cybersecurity, digital health and transactional matters. She can be reached at [email protected].
Wim Vandenberghe is an EU regulatory partner in the firm's Brussels office, focusing on the life sciences sector. He can be reached at [email protected].
Reed Smith associates Brian Cadigan and Corinne Fierro contributed to this article.