Counsel to the life sciences industry anticipate that the U.S. Food and Drug Administration will release new guidance on the use of AI in clinical trials and drug development by year's end.

The technology, with huge potential to speed development and improve drug efficacy (and to trigger legal headaches), has advanced so rapidly that even the FDA has struggled to get a grip on it.

Last year, the FDA issued separate draft guidance for medical devices that would allow manufacturers, in the initial premarket submission of a product, to pre-specify future capabilities of a device without resubmitting it later for approval.

AI and machine learning can extract data from electronic health records and other sources and make inferences useful in everything from predicting how a drug may affect certain patients to optimizing dosing.

It can predict adverse effects in certain populations, improve clinical trial recruitment, screen compounds and strengthen post-market safety surveillance, among many other potentially transformative uses.
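As a simplified illustration of the first of those uses, the sketch below fits a toy adverse-event risk model on entirely synthetic patient data; the feature names, values and relationships are invented for this example and do not reflect any FDA submission or method described here.

```python
# Toy sketch only: a simple adverse-event risk model trained on synthetic,
# invented patient data. Column names and effect sizes are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "age": rng.integers(18, 90, n),
    "baseline_egfr": rng.normal(75, 20, n),   # kidney-function proxy (assumed)
    "dose_mg": rng.choice([50, 100, 150], n),
})
# Synthetic label: older patients on higher doses with lower eGFR are riskier.
logit = (0.04 * (df["age"] - 60)
         + 0.01 * (df["dose_mg"] - 100)
         - 0.03 * (df["baseline_egfr"] - 75))
df["adverse_event"] = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(
    df[["age", "baseline_egfr", "dose_mg"]], df["adverse_event"], random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

In practice, any such model would be one component of a validated workflow subject to the data-quality and transparency expectations discussed below.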

So useful has the technology become that, since 2016, about 300 drug submissions to the FDA have referenced AI use in some form, Khair ElZarrad, director of the Office of Medical Policy at the FDA's Center for Drug Evaluation and Research, said during a recent FDA podcast.

The anticipated guidance is likely to address matters such as patient safety and the quality and reliability of data flowing in and out of AI algorithms, said Reed Smith counsel Sarah Thompson Schick, who advises medical products companies.

Another consideration: "Is AI fit for the purposes of what you're doing?" added Schick, who also discussed the issues in a recent video.

"How do we ensure these issues are addressed throughout the continuous improvement and training of AI models used in essential research and development activities. And how do we mitigate potential risks around those issues?"

Both the FDA and the industry continue to ponder how, and to what extent, AI should be used in R&D, particularly as the technology advances, Schick said.

Last month, the FDA published a "special communication" in the Journal of the American Medical Association outlining the agency's growing concerns over AI use in clinical research, medical product development and clinical care.

Among them: FDA officials see a need for specialized tools that enable more thorough assessment of large language models "in the contexts and settings in which they will be used."

The piece in JAMA also pointed to the potential of AI models to evolve—requiring ongoing AI performance monitoring.

"The agency expresses concern that the recurrent, local assessment of AI throughout its lifecycle is both necessary for the safety and effectiveness of the product over time and that the scale of effort needed to do so could be beyond any current regulatory scheme or the capabilities of the development and clinical communities," Hogan Lovells partner Robert Church and his colleagues wrote in a client note last month.

The FDA also expressed concern about an uneven playing field, where large tech companies have capital and computational resources that startups and academic institutions can't hope to match. The agency noted that the latter may need assistance to ensure their AI models are safe and effective.

The agency stressed the importance of ensuring that human clinicians remain involved in understanding how outputs are generated and in advocating for high-quality evidence of benefits.

Troy Tazbaz, director of the FDA's Digital Health Center of Excellence, recently said in a blog post that standards and best practices "for the AI development lifecycle, as well as risk management frameworks" can help mitigate risks.

This includes "approaches to ensure that data suitability, collection and quality match the intent and risk profile of the AI model that is being trained."

ElZarrad listed a number of challenges, some of which may be reflected in the expected guidance.

One is the variability in the quality, size and "representativeness" of data sets for training AI models. "Responsible use of AI demands, truly, that the data used to develop these models are fit for purpose and fit for use. This is our concept we try to highlight and clarify."
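One simple way to picture the "fit for purpose" idea is a representativeness check that compares the demographic mix of a training data set with the population the product is intended for. The sketch below does this with a chi-square test; the age bands, counts and target shares are hypothetical.

```python
# Minimal sketch of one "fit for purpose" check: does the training cohort's
# demographic mix resemble the intended patient population? All numbers are
# invented for illustration.
import numpy as np
from scipy.stats import chisquare

# Counts by age band in the training set (hypothetical): 18-39, 40-59, 60-74, 75+
train_counts = np.array([120, 340, 410, 130])
# Expected share of each band in the target population (hypothetical)
target_shares = np.array([0.10, 0.25, 0.40, 0.25])

expected = target_shares * train_counts.sum()
stat, p_value = chisquare(f_obs=train_counts, f_exp=expected)
print(f"chi-square={stat:.1f}, p={p_value:.3g}")
if p_value < 0.05:
    print("Training mix differs from the target population; document and mitigate.")
```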

He noted that it is often difficult to understand how AI models are developed and how they arrive at their conclusions. "This may necessitate, or require us, to start thinking of new approaches around transparency."

Potential data privacy issues around AI abound, many of them involving patient data. AI developers must ensure they comply with the Health Insurance Portability and Accountability Act, better known as HIPAA, as well as a thicket of other federal and state laws. Generally, the patient data used is aggregated and de-identified, Schick noted.
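As a rough sketch of what aggregation and de-identification can look like in practice (not a substitute for HIPAA's Safe Harbor or expert-determination standards), the example below drops hypothetical direct identifiers and generalizes quasi-identifiers before analysis.

```python
# Illustrative sketch only, not a HIPAA compliance procedure. Column names
# are hypothetical; real de-identification is broader than this.
import pandas as pd

DIRECT_IDENTIFIERS = ["patient_name", "ssn", "mrn", "street_address", "phone"]

def deidentify(records: pd.DataFrame) -> pd.DataFrame:
    out = records.drop(columns=DIRECT_IDENTIFIERS, errors="ignore").copy()
    # Generalize quasi-identifiers: keep age bands and 3-digit ZIP prefixes only.
    out["age_band"] = pd.cut(out["age"], bins=[0, 40, 60, 75, 120],
                             labels=["<40", "40-59", "60-74", "75+"])
    out["zip3"] = out["zip"].astype(str).str[:3]
    return out.drop(columns=["age", "zip"])

# Aggregated view for analysis, e.g. event rates by age band:
# deidentify(df).groupby("age_band", observed=True)["adverse_event"].mean()
```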

While life sciences leaders welcome additional guidance, they are not sitting on their hands until they get it. "I don't think companies are waiting on the FDA, necessarily," Schick added.