New guidelines come on the heels of recommendations covering post-market modifications to AI products, including those incorporated into systems used by clinical laboratories
Artificial intelligence (AI) is booming in healthcare, and as the technology finds its way into more medical devices and clinical laboratory diagnostic test technologies, the US Food and Drug Administration (FDA) has stepped up its efforts to provide regulatory guidance for developers of these products. That guidance will shape how new AI-enabled lab test technology is developed going forward.
In December, the FDA issued finalized recommendations for submitting information about planned modifications to AI-enabled healthcare products. Then, in January, the agency issued draft guidance that covers lifecycle management and marketing submissions more broadly. It is seeking public comments on the latter document through April 7.
“The FDA has authorized more than 1,000 AI-enabled devices through established premarket pathways,” said Troy Tazbaz, director of the Digital Health Center of Excellence at the FDA’s Center for Devices and Radiological Health, in a press release announcing the draft guidance.
This guidance “would be the first to provide total product life cycle recommendations for AI-enabled devices, tying together all design, development, maintenance and documentation recommendations, if and when finalized,” Healthcare IT News reported.
The guidance was published in the Federal Register last month under the title “Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations.”
![](https://www.darkdaily.com/wp-content/uploads/troytaz.jpg)
“Today’s draft guidance brings together relevant information for developers, shares learnings from authorized AI-enabled devices, and provides a first point-of-reference for specific recommendations that apply to these devices, from the earliest stages of development through the device’s entire life cycle,” said Troy Tazbaz (above), director of the Digital Health Center of Excellence at the FDA Center for Devices and Radiological Health, in a press release. The new guidance will likely affect the development of new clinical laboratory diagnostic technologies that use AI. (Photo copyright: LinkedIn.)
Engaging with FDA
One key takeaway from the guidance is that manufacturers “should engage with the FDA early to ensure that the testing to support the marketing submission for an AI-enabled device reflects the agency’s total product lifecycle, risk-based approach,” states an analysis from law firm Orrick, Herrington & Sutcliffe LLP.
Another key point is transparency, Orrick noted. For example, manufacturers should be prepared to offer details about the inputs and outputs of their AI models and demonstrate “how AI helps achieve a device’s intended use.”
Manufacturers should also take steps to avoid bias in data collection for these models. For example, they should gather evidence to determine “whether a device benefits all relevant demographic groups similarly to help ensure that such devices are safe and effective for their intended use,” Orrick said.
New Framework for AI in Drug Development
On the same day that the FDA announced the device guidance, the agency also proposed a framework for regulating the use of AI models in developing drugs and biologics.
“AI can be used in various ways to produce data or information regarding the safety, effectiveness, or quality of a drug or biological product,” the federal agency stated in a press release. “For example, AI approaches can be used to predict patient outcomes, improve understanding of predictors of disease progression and process, and analyze large datasets.”
The press release noted that this is the first time the agency has proposed guidance on the use of AI in drug development.
The new framework will address what the agency sees as challenges unique to AI, according to a blog post from law firm Sterne, Kessler, Goldstein & Fox P.L.L.C.
These include “bias and reliability problems due to variability in the quality, size, and representativeness of training datasets; the black-box nature of AI models in their development and decision-making; the difficulty of ascertaining the accuracy of a model’s output; and the dangers of data drift and a model’s performance changing over time or across environments. Any of these factors, in FDA’s thinking, could negatively impact the reliability and relevancy of the data sponsors provide FDA.”
Here, too, the deadline for submitting comments is April 7, according to a notice published in the Federal Register titled, “Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products.”
FDA Teams with VA on AI Virtual Lab
The FDA also plans to participate in direct testing of AI-enabled healthcare tools. In October, the FDA and the Department of Veterans Affairs (VA) announced that they will launch “a joint health AI lab to evaluate promising emerging technologies,” according to Nextgov/FCW.
VA Undersecretary for Health Shereef Elnahal, MD, announced the venture during the Veterans Health Administration Innovation Experience conference, held Oct. 29-30, 2024, in Chicago.
Elnahal said the facility will allow federal agencies and private entities “to test applications of AI in a virtual lab environment.” The goal is to ensure that the tools are safe and effective while adhering to “trustworthy AI principles,” he said.
“It’s essentially a place where you get rapid but effective evaluation—from FDA’s standpoint and from VA’s standpoint—on a potential new application of generative AI to, number one, make sure it works,” he told Nextgov/FCW.
He added that the lab will be set up with safeguards to ensure that the technologies can be tested safely.
“As long as they go through the right security protocols, we’d essentially be inviting parties to test their technology with a fenced off set of VA data that doesn’t have any risk of contagion into our actual live systems, but it’s still informative and simulated,” he told Nextgov/FCW.
There has been an explosion in the use of AI, machine learning, deep learning, and natural language processing in clinical laboratory diagnostic technologies. The same is true in anatomic pathology, where AI-powered image analysis solutions are coming to market. That two federal agencies are motivated to establish guidelines and working relationships for evaluating the development and use of AI in healthcare settings tells you where the industry is headed.
—Stephen Beale
Related Information:
AI-Enabled Device Software Functions: FDA’s Final Guidance for Predetermined Change Control Plans
FDA Issues Draft Guidance on Predetermined Change Control Plans for Medical Devices
Streamlining Device Changes with Predetermined Change Control Plans (PCCPs)
FDA Offers New Draft Guidance to Developers of AI-Enabled Medical Devices
FDA Finalizes AI-Enabled Medical Device Life Cycle Plan Guidance
FDA Issues Draft Guidance on AI-Enabled Medical Devices
FDA to Hopeful Marketers of AI-Equipped Medical Devices: Think Beyond Your Initial Approval
FDA Issues Final Guidance on Post-Market Updates to AI-Enabled Devices