The US Food and Drug Administration has announced that it is preparing to regulate AI systems that can update and improve themselves as they learn from new training data.

The announcement: The agency released a white paper proposing a regulatory framework for deciding how medical products that use AI should be approved before going to market. It is the FDA's biggest step to date toward formalizing oversight of products that use machine learning (ML).

The challenge: Machine-learning systems are tricky to regulate because they can continuously update and improve their performance as they train on new data. When the FDA has approved ML-based medical software in the past, it has required the algorithms to be “frozen” before commercial deployment and to go through reapproval whenever they are changed. But the agency recognizes that freezing an algorithm forfeits one of the technology’s main benefits: its ability to keep improving. New rules might allow ML systems to evolve within limits after initial approval, provided the risks of that evolution are managed.

The framework: When seeking approval for a given product, providers of ML medical devices would also have the option of submitting a plan for anticipated software modifications, including model retraining. Additionally, the FDA would evaluate the quality of the provider’s product development process to ensure, for example, that its system is verified through clinical trials.

Next steps: The FDA is now collecting feedback on the document. The paper “demonstrates careful forethought about the field,” Eric Topol, an expert at the intersection of deep learning and health care, told Stat.
