The Algorithm Is In

by Shawn Meehan, Global Head of Legal Solutions at Guidepoint

How soon will artificial intelligence transform health care? The U.S. Food and Drug Administration (FDA) seeks to answer that question over the coming months. The implications for patients, doctors, medical device manufacturers, and lawyers could be significant.

Earlier this year, the FDA issued a discussion paper that considers “a new regulatory framework specifically tailored to promote the development of safe and effective medical devices that use advanced artificial intelligence algorithms,” according to FDA Commissioner Scott Gottlieb, MD. In essence, the FDA wants to take Software as a Medical Device (SaMD), that is, software that serves a medical purpose without being part of a hardware medical device, to the next level with artificial intelligence-based and machine learning-based technologies that learn from real-world feedback and improve their performance over time.

The FDA has already permitted the use of artificial intelligence algorithms to screen for diseases and conditions and to provide treatment recommendations. The regulatory framework under discussion would go beyond these uses.

To date, the FDA has received more than 100 comments on the paper. While most were broadly supportive, commenters offered several recommendations, including:
  • Expanding the framework’s scope to cover Software in a Medical Device (SiMD), that is, software embedded within a hardware medical device (see GE’s comments)
  • Narrowing the framework’s scope to focus exclusively on machine learning technology (see AMA’s comments)
  • Ensuring proper controls are in place to produce more accurate test results (see Anthem’s comments)

While the discussion paper did not directly address fallout from the failure or misuse of AI by medical practitioners, legal and medical journals have been rife with debate about liability. Some have argued that, almost by definition, the use of new medical technology deviates from standard practice, a key prong of medical malpractice analysis, so medical practitioners will need to adopt such technologies incrementally. Over time, these devices will perform more tasks and behave more like physicians, offering highly accurate data to be interpreted rather than merely followed, and such technologies will then become standard practice (see the Journal of Legal Technology). Others have argued that new legal models will need to be introduced, since both medical malpractice and products liability doctrines are fundamentally insufficient to account for the autonomous and often opaque nature of AI (see the AMA Journal of Ethics).

Lawyers should take heed of these developments. Even with FDA action, the intersection of AI and health care will present ongoing legal challenges.

Please note: This article contains the sole views and opinions of Shawn Meehan and does not reflect the views or opinions of Guidepoint Global, LLC (“Guidepoint”). Guidepoint is not a registered investment adviser and cannot transact business as an investment adviser or give investment advice. The information provided in this article is not intended to constitute investment advice, nor is it intended as an offer or solicitation of an offer or a recommendation to buy, hold or sell any security. Any use of this article without the express written consent of Guidepoint and Shawn Meehan is prohibited.

Learn how Guidepoint Legal Solutions can provide expertise throughout the litigation cycle.