Playing Catch-Up: FDA Regulation of AI/ML Clinical Decision Support Software
By Kirby Ammons
With the significant growth in the technological capabilities of Artificial Intelligence (AI) and Machine Learning (ML), there are increasing calls for government regulation to ensure safe implementation across numerous industries. The FDA's kludgy attempts over the last three decades to design a workable regulatory framework for AI/ML used by Health Care Providers (HCPs) to make healthcare decisions are coming to a conclusion. However, the FDA's current approach, as published in draft guidance documents, fails to expand upon statutory language, leaving significant vagueness about what AI/ML manufacturers must do to obtain FDA approval. Depending on how this vague language is interpreted, the current approach risks overregulating or underregulating the industry and, ultimately, injuring patients if the FDA does not provide and enforce further clarification. This article asserts that the current approach creates a "right to explanation" for patients by way of their HCPs, but that merely requiring an "explanation" is insufficient to adequately protect patients and to encourage manufacturers to develop better AI/ML algorithms. Finally, this article offers a straightforward solution that would provide clarity and reduce risk: requiring counterfactual explanations, "if-then" statements that identify the variables most influential to an algorithm's output, giving HCPs the pivotal information to review before making health decisions with patients without inundating them with extraneous detail.
Journal of Law and Technology at Texas | Austin, TX
Kirby Ammons, Playing Catch-Up: FDA Regulation of AI/ML Clinical Decision Support Software, 6 J.L. & TECH. TEX. 1 (2022–23).