16 Comments
Jul 15, 2023 · Liked by Dan Elton

I think you're missing part of the story here. Insurance companies rely on FDA approvals to decide which "medical devices" they will reimburse. The FDA doesn't touch anything that doesn't make specific health claims, so patient-facing chatbots come with no guarantees, but for the same reason they can't charge a lot. The top AI companies are fine with the current setup because they'll get the approvals and regulatory barriers will prevent competitors from making any real money.

author

That's a great point. This gets to a broader question of how AI will be paid for (right now hospitals are struggling to figure out how to pay for it).

I think a good solution might be to make FDA approval an option for companies (post-market), to help them win insurance company approval (premarket approval would only be required for very high-risk AI). The first draft actually suggested something like this, but I removed it (now I'm thinking I should have left it in).

I'm not an expert on insurance, but insurance companies must have systems for deciding which doctors and procedures they will cover for different conditions. Maybe they can port some of those systems over?

It's been suggested that decoupling FDA approval from CMS and insurance coverage decisions would be helpful (see https://twitter.com/seehafer/status/1465505818673963012?s=20).

Definitely, but AFAIK FDA approval is a prerequisite, especially for what Medicare will cover. If you make AI "devices" an exception to that, it opens a whole can of worms.

Jul 15, 2023 · Liked by Dan Elton

Correct. Our entire biomedical product ecosystem relies on FDA’s imprimatur of something as “safe and effective”. Though clearance or approval is not a guarantee of reimbursement, it is necessary for anything beyond a low risk medical device (and FDA doesn’t consider anything AI “low risk”).

Strong work here, Dan. It seems like we're at a "damned if you do, damned if you don't" phase for regulating AI in medicine. The last thing we want to do is throw out the baby with the bathwater, and yet we have a strong obligation to prevent horrible things from happening.

I think they're going to happen, and we're going to deal with them and learn from them, but it's going to be really ugly for a few transitional years. I love the idea of having a dedicated "clinical AI" type department, and I do see this happening, but not right away. The interim is going to be very chaotic.

author

It's already pretty chaotic and is going to become much more so as general-purpose multimodal foundation models rapidly advance. That is part of the impetus for writing this post: it's useful to think about these future growing pains now. The worst case would be locking in bad systems that don't anticipate future AI advances and usage demand.

Right now very few people have a good grasp of which systems and applications require FDA approval and which do not. Additionally, hospitals are relying too heavily on the FDA stamp to ensure the AI they use is safe, rather than doing the necessary work of setting up their own AI monitoring systems, which they are undoubtedly going to need in the future.
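
To make that concrete, here is a rough sketch of one simple kind of hospital-side monitoring: tracking how often clinicians override a deployed model's output and flagging the model for review when that rate drifts upward. This is purely illustrative on my part; the class name, window size, and threshold are hypothetical, not any vendor's or hospital's actual system.

```python
# Illustrative sketch only: a hospital-side monitor that tracks how often
# clinicians override an AI model's output and flags the model for review
# when the override rate rises. Names and thresholds are hypothetical.

from collections import deque


class OverrideMonitor:
    """Rolling check of the clinician-override rate for one deployed model."""

    def __init__(self, window: int = 500, alert_threshold: float = 0.15):
        self.window = deque(maxlen=window)      # recent (AI label == final label?) flags
        self.alert_threshold = alert_threshold  # override rate that triggers review

    def record(self, ai_label: str, final_label: str) -> None:
        # Store True when the clinician's final call differs from the AI's output.
        self.window.append(ai_label != final_label)

    def override_rate(self) -> float:
        return sum(self.window) / len(self.window) if self.window else 0.0

    def needs_review(self) -> bool:
        # Require a reasonably full window before alerting to avoid noise.
        return len(self.window) >= 100 and self.override_rate() > self.alert_threshold


if __name__ == "__main__":
    monitor = OverrideMonitor()
    # Hypothetical feed of (AI finding, clinician's final finding) pairs:
    for ai, final in [("nodule", "nodule")] * 90 + [("nodule", "normal")] * 20:
        monitor.record(ai, final)
    print(f"override rate = {monitor.override_rate():.2f}, review? {monitor.needs_review()}")
```

In practice a hospital would want to track many more signals than this (input data drift, subgroup performance, uptime), but even a check this simple is something the FDA stamp alone doesn't give you.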

I had a good conversation with a PhD who works in Psychiatric and Behavioral Genetics and the Department of Human Genetics at a local university here. She's pretty smart.

Dana called attention to some of the same things you did here, but from a different perspective; you're certainly welcome to peruse it if you'd like:

https://goatfury.substack.com/p/a-candid-conversation-on-ais-transformative

To the point: I am really glad you're talking and thinking about this issue, and I want to help amplify it a tad here!

author

Interesting. I read your post and agree with the general points being made. AI can bring a lot of value in shifting from reactive to more preventative healthcare. It's not that we don't know how to do preventative healthcare; it's just that doctors are overworked right now and don't have time for it. AI can fill that gap. AI can also help make notes and reports more consistent and comprehensive.

If you haven't already seen it, check out my previous post on the issues being faced: https://moreisdifferent.substack.com/p/ai-deep-learning-radiology-medicine-overhyped?s=w I will note that since it was written, I've had to update my overall outlook in light of recent advances in multimodal foundation models.

I love this: the idea that AI can free up time for doctors. That's exactly how I see the strongest value-add today: if you have a job where an LLM or related technology can do some sort of busywork for you, it's like having a very low-level personal assistant around at all times. The PA isn't going to take your job; they're going to make your job way, way better all around.

I'm opening up your previous piece now in a new tab to read in a few!

author

I have a TEDx talk on AI for preventative medicine, but I wasn't super happy with how it came out, since they bumped me to the end of the program and I was fatigued when I gave the talk: https://www.youtube.com/watch?v=XNN7KYFS3Jo

Dan, I understand - what a cool opportunity, though!

I agree with your concept here, and I want to help share the idea a bit. It's the way I see things too. If you're interested in shooting me some thoughts, perhaps we could do a simple collaborative piece (I can publish it and tag you) with the idea of spreading this message a bit. Just message me via email if you'd like.

Jul 15, 2023 · Liked by Dan Elton

Excellent piece, Dan. I am currently writing a study for ASU law school on how the FDA is approaching AI governance, so this is very helpful. But I was surprised that you didn't spend more time talking about the FDA's important "Predetermined Change Control Plan for AI/ML Enabled Device Software Functions." I see a mention in fn 8, but I am wondering if you have written more about that, because I think it represents an important evolution in the agency's approach to these technologies. I am skeptical it will work in the long run (because I can't really see how innovators will be able to accurately predetermine the many potential downstream uses/applications of their AI/ML tools well into the future, as the FDA asks), but I think that the agency deserves some credit for at least trying to think outside the box (a bit) compared to their past approach to other technologies. Do you have more thoughts on this? Or perhaps I could interview you on background for my study? -- Adam Thierer

Jul 15, 2023 · Liked by Dan Elton

BTW, on a loosely related note, I've been impressed by how, relative to other agencies, the FDA was closely tracking and reporting its AI/ML-related actions and determinations via a single portal. But then suddenly last summer, they stopped doing so. I have not been able to figure out why. Perhaps someone there left or retired and no one else took over doing the updates. I dunno. But here is the portal, and if you or anyone else knows what is going on, let me know!

https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices

author

Yeah, I noticed that too. I wouldn't read too much into it. The additional transparency of the portal is refreshing, though.

author

I don't have any specific thoughts. Relative to other areas, it seems the FDA has been moving relatively quickly to support AI, and I agree the agency deserves credit here. The change control plan is definitely a good step and may allow fine-tuning to deal with distribution shift, though I'm skeptical that it can properly support things like unsupervised fine-tuning of LLMs.
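
To illustrate what I mean by distribution shift: here is a toy sketch (my own illustration, not anything from the FDA's guidance or from a real product) of the kind of drift check a predetermined change control plan might pre-specify, comparing one model input feature's distribution at deployment against the validation data using the population stability index (PSI).

```python
# Toy sketch only: compare the histogram of one input feature seen at
# deployment against the histogram seen at validation, using the
# population stability index (PSI). Data and thresholds are made up.

import math
import random


def psi(expected: list[float], observed: list[float], bins: int = 10) -> float:
    """Population stability index between two samples of a single feature."""
    lo = min(min(expected), min(observed))
    hi = max(max(expected), max(observed))

    def frac(sample: list[float]) -> list[float]:
        counts = [0] * bins
        for x in sample:
            idx = min(int((x - lo) / (hi - lo) * bins), bins - 1) if hi > lo else 0
            counts[idx] += 1
        return [max(c / len(sample), 1e-6) for c in counts]  # avoid log(0)

    e, o = frac(expected), frac(observed)
    return sum((oi - ei) * math.log(oi / ei) for ei, oi in zip(e, o))


if __name__ == "__main__":
    random.seed(0)
    validation = [random.gauss(0.0, 1.0) for _ in range(5000)]  # data the model was validated on
    deployment = [random.gauss(0.5, 1.3) for _ in range(5000)]  # new site or scanner, say
    print(f"PSI = {psi(validation, deployment):.3f}")
```

A common rule of thumb treats a PSI above roughly 0.2 as a meaningful shift, and a change control plan could pre-specify a trigger like that for retraining or re-validation. That kind of pre-specification is easy to write down for tabular or imaging inputs, but much harder for open-ended LLM behavior, which is the root of my skepticism.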

“Unlike drugs and vaccines, which are mostly over-regulated,”

I’m dying from recurrent / metastatic squamous cell carcinoma of the tongue, and would argue that for fatal diseases like mine the FDA absurdly overregulates: https://jakeseliger.com/2023/07/22/i-am-dying-of-squamous-cell-carcinoma-and-the-treatments-that-might-save-me-are-just-out-of-reach/
