As AI Transforms Drug Development, FDA Is Scrambling to Figure Out Guardrails
Since 2016, the Food and Drug Administration has received about 300 drug submissions that reference artificial intelligence, a pittance compared with what's on the way.
November 09, 2024 at 06:28 PM
Counsel to the life sciences industry anticipate that the U.S. Food and Drug Administration will release new guidance by year's end on the use of AI in clinical trials and drug development.
The technology, which has huge potential to speed development and improve drug efficacy, as well as to trigger legal headaches, has advanced so rapidly that even the FDA has struggled to get a grip on it.
Last year, the FDA issued separate draft guidance for medical devices that would allow manufacturers, in a product's initial premarket submission, to essentially pre-specify future capabilities of a device without resubmitting it later for approval.
AI and machine learning can extract data from electronic health records and other sources and make inferences useful in everything from how a drug may affect certain patients to optimizing dosing.
It can predict adverse effects in certain populations, improve clinical trial recruitment, screen compounds and improve post-market safety surveillance, among many other potentially transformative uses.
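For a sense of what the simplest version of one such use looks like in practice, the sketch below trains a model on synthetic, de-identified-style records to flag patients at elevated adverse-event risk. The feature names, data and model choice are illustrative assumptions, not any sponsor's or the FDA's actual method.

```python
# Hypothetical sketch: flag patients at elevated adverse-event risk from
# structured features. All data here is synthetic; feature names, the model
# choice and thresholds are assumptions for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for de-identified EHR-derived features:
# columns = [age (standardized), dose, renal function score, comorbidity index]
X = rng.normal(size=(2000, 4))
# Synthetic outcome: events more likely with higher dose and worse renal function.
logits = 1.2 * X[:, 1] - 0.8 * X[:, 2] - 1.0
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

risk = model.predict_proba(X_test)[:, 1]
print(f"Held-out AUC: {roc_auc_score(y_test, risk):.2f}")
print("High-risk patients flagged:", int((risk > 0.5).sum()))
```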
AI has proved so useful to clinicians that, since 2016, about 300 drug submissions to the FDA have referenced AI use in some form, Khair ElZarrad, director of the Office of Medical Policy at the FDA's Center for Drug Evaluation and Research, said during a recent FDA podcast.
The anticipated guidance is likely to address matters such as patient safety and the quality and reliability of data flowing in and out of AI algorithms, said Reed Smith counsel Sarah Thompson Schick, who advises medical products companies.
Another consideration: "Is AI fit for the purposes of what you're doing?" added Schick, who also discussed the issues in a recent video.
"How do we ensure these issues are addressed throughout the continuous improvement and training of AI models used in essential research and development activities. And how do we mitigate potential risks around those issues?"
Both the FDA and the industry continue to ponder how, and to what extent, AI should be used in R&D, particularly as the technology advances, Schick said.
Last month, the FDA published a "special communication" in the Journal of the American Medical Association outlining concerns building in the agency over AI use in clinical research, medical product development and clinical care.
Among them: FDA officials see a need for specialized tools that enable more thorough assessment of large language models "in the contexts and settings in which they will be used."
The piece in JAMA also pointed to the potential of AI models to evolve—requiring ongoing AI performance monitoring.
"The agency expresses concern that the recurrent, local assessment of AI throughout its lifecycle is both necessary for the safety and effectiveness of the product over time and that the scale of effort needed to do so could be beyond any current regulatory scheme or the capabilities of the development and clinical communities," Hogan Lovells partner Robert Church and his colleagues wrote in a client note last month.
The FDA also expressed concern about an uneven playing field, in which large tech companies have capital and computational resources that startups and academic institutions can't hope to match. The agency noted that the latter may need assistance to ensure AI models are safe and effective.
The agency stressed the importance of ensuring that human clinicians remain involved in understanding how outputs are generated and in advocating for high-quality evidence of benefits.
Troy Tazbaz, director of the FDA's Digital Health Center of Excellence, recently said in a blog post that standards and best practices "for the AI development lifecycle, as well as risk management frameworks" can help mitigate risks.
This includes "approaches to ensure that data suitability, collection and quality match the intent and risk profile of the AI model that is being trained."
ElZarrad listed a number of challenges, some of which may be reflected in the expected guidance.
One is the variability in the quality, size and "representativeness" of data sets for training AI models. "Responsible use of AI demands, truly, that the data used to develop these models are fit for purpose and fit for use. This is our concept we try to highlight and clarify."
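One simple way to surface that representativeness question, sketched here with hypothetical age bands and an arbitrary ten-point cutoff, is to compare the training cohort's composition with the population the model is intended to serve.

```python
# Illustrative representativeness check: training-cohort shares vs. the
# intended-use population. Categories, shares and the cutoff are assumptions.
import pandas as pd

# Assumed share of each age band in the population the model is meant to serve.
target_shares = pd.Series({"18-39": 0.30, "40-64": 0.45, "65+": 0.25}, name="target")

# Hypothetical training cohort (in practice, loaded from a de-identified extract).
training = pd.DataFrame({"age_band": ["18-39"] * 500 + ["40-64"] * 430 + ["65+"] * 70})
observed = training["age_band"].value_counts(normalize=True).rename("training")

report = pd.concat([target_shares, observed], axis=1).fillna(0.0)
report["abs_gap"] = (report["training"] - report["target"]).abs()
report["under_represented"] = (report["training"] < report["target"]) & (report["abs_gap"] > 0.10)
print(report.round(3))
```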
He noted that it is often difficult to understand how AI models are developed and arrive at their conclusions. "This may necessitate, or require us, to start thinking of new approaches around transparency."
Potential data privacy issues around AI abound, many of them involving patient data. AI developers must ensure they are in compliance with the Health Insurance Portability and Accountability Act, better known as HIPAA, as well as a thicket of other federal and state laws. Generally, the patient data used is aggregated and de-identified, Schick noted.
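For illustration only, and emphatically not a substitute for HIPAA's Safe Harbor or expert-determination standards, a first pass at aggregation and de-identification might look like the following sketch with hypothetical column names.

```python
# Hedged sketch of aggregating and de-identifying patient records before AI use.
# Real HIPAA de-identification covers many more identifier types and needs
# privacy/legal review; columns and rules here are illustrative assumptions.
import pandas as pd

records = pd.DataFrame({
    "patient_name": ["A. Smith", "B. Jones", "C. Lee"],
    "mrn": ["0001", "0002", "0003"],
    "zip": ["02139", "02140", "73301"],
    "age": [34, 67, 91],
    "adverse_event": [0, 1, 1],
})

# Drop direct identifiers (illustrative subset only).
deidentified = records.drop(columns=["patient_name", "mrn"])

# Coarsen quasi-identifiers: keep a 3-digit ZIP prefix and cap reported age at 90.
deidentified["zip3"] = deidentified.pop("zip").str[:3]
deidentified["age"] = deidentified["age"].clip(upper=90)

# Share aggregates rather than row-level records.
summary = deidentified.groupby("zip3")["adverse_event"].agg(["count", "mean"])
print(summary.rename(columns={"mean": "event_rate"}))
```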
While life sciences leaders welcome additional guidance, they are not sitting on their hands until they get it. "I don't think companies are waiting on the FDA, necessarily," Schick added.