2 Comments
Eric Fish, DVM

The possibility of AI influencing clinical specialists is an interesting one. I previously wrote about a different (albeit older, 2021, which is ancient in AI timelines) study that found essentially the *opposite*: that non-experts (ER and IM docs) were more susceptible to incorrect advice when told it was AI-based, while radiologists were highly resistant and able to see through the errors (so-called algorithm aversion).

https://open.substack.com/pub/allscience/p/5-minute-paper-do-as-ai-say?r=1nnpgl&utm_medium=ios&utm_campaign=post

This has important implications because many of the radiology AI start-ups are actually NOT marketed for use within radiologist workflows, but rather to cut them out and directly provide advice to general practitioners. In my view, that does put a much higher burden of proof in terms of accuracy and guardrails.

I have much less of an issue with assistive functions like the already on-market IDEXX radiology AI (disclaimer: my former employer, though I was not involved in those products in any way) that automate measurement of a VHS score or TPLO measurements. This saves time and improves accuracy, can be accessed by either the radiologist or the GP, and critically, can be overridden by humans if it appears that flawed landmark assessment created an incorrect measurement.
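The override pattern described above — AI proposes a measurement, a human can correct it, and the human value always wins — can be sketched roughly as follows. This is a hypothetical illustration, not the IDEXX product's actual implementation; the class and field names are invented for the example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Measurement:
    """A hypothetical AI-proposed measurement with an optional human override."""
    ai_value: float                      # score computed from AI-detected landmarks
    human_value: Optional[float] = None  # set only when a clinician overrides

    def override(self, value: float) -> None:
        """Record a clinician's correction, e.g. after a flawed landmark assessment."""
        self.human_value = value

    @property
    def final_value(self) -> float:
        """The human override, when present, always takes precedence over the AI value."""
        return self.human_value if self.human_value is not None else self.ai_value

vhs = Measurement(ai_value=12.3)  # misplaced landmarks inflated the score
vhs.override(10.1)                # clinician re-measures and corrects it
print(vhs.final_value)            # 10.1
```

The key design point is that the AI value is never discarded, so the system retains both numbers — useful for auditing how often clinicians disagree with the model.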

William Tancredi, DVM

This kinda makes me think that we need to figure out how to incorporate AI advice into our workflow and decision-making processes. We need to know how we will use it, especially because it will constantly be evolving as we do.
