The Bold Possibilities of AI in Vet Med
Artificial intelligence in veterinary medicine is a powerful, crucial, and pluripotent technology that can dramatically improve the practice of medicine.
Medicine itself is at a moment of crisis. As a profession, for all the extraordinary advances in the art and science of medicine in the last four decades, we have too often failed our patients.
Eric Topol, MD
A recent article in dvm360 describes a lecture by Dr. Eli B. Cohen presented at the 2023 ACVIM Conference in Philadelphia. Admittedly, I was not in attendance for Dr. Cohen’s lecture, so my knowledge - and subsequent criticism - is based entirely on what’s provided in the article. If the article proves not to be representative of the tone and content of Dr. Cohen’s lecture, I will apologize to Dr. Cohen and then retract or perhaps amend this essay.
On the surface, the lecture seems to present valid concerns about the use of artificial intelligence in the field of veterinary medicine. A more nuanced reading reveals it to be an incomplete portrayal, frequently citing the potential for human error and over-reliance while overlooking the numerous benefits of AI as well as certain key issues. It is essential to evaluate AI (or anything, really) not just through the lens of fear, but as a tool with significant potential that also calls for thorough understanding, thoughtful regulation, and deliberate ethical consideration.
The speaker, Dr. Eli B. Cohen, seems to emphasize the risk of over-reliance on AI - potentially missing critical health issues in animals - and the lack of regulation in this sphere. These are indeed areas that require attention, as they would with any technology or tool used in medicine or, frankly, anything else. However, the article leans heavily toward the challenges without adequately addressing the immediate and potential value that AI brings to the table. We need not be afraid of the technology, but rather understand and embrace it as a tool to help us solve our problems.
Artificial intelligence presents a revolution of tectonic scale to the veterinary field, providing accurate and rapid diagnostics, predicting disease outcomes, and enhancing the efficiency of veterinary practice. It is contributing to a transformational change in the field of medicine, augmenting the capabilities of veterinary professionals and, ultimately, improving animal health and welfare. Over-reliance on any tool, whether it be a stethoscope or an AI algorithm, carries the potential for error.
The lecture as described by the article also appears to overlook the fact that AI does not replace the practitioner but supports them. The art of veterinary medicine relies heavily on the professionals' clinical acumen and experience, which - even and especially with the use of artificial intelligence - remains paramount. AI is meant to assist and augment, not replace. The narrative should be about human-AI collaboration rather than competition, enhancing the practitioner's ability to make informed decisions.
While the current dearth of regulation for AI tools in animal care is a concern, it does not necessarily mean that all AI tools are universally prone to errors or that they are limited to novelty use. There is a need for a regulatory framework to ensure efficacy, safety, and ethical use - a broader issue that goes beyond veterinary medicine. Lectures like this one should prompt constructive dialogue about creating such regulations, rather than promote a culture of fear. A clinician with an over-reliance on software tools is not that different from a surgeon chirping “when in doubt, cut it out” for every case. The risk to the patients is profound, but it does little for the rest of us besides helping us to identify the most mediocre minds among us.
As for the ‘conscience of AI’, it is important to note that AI is a tool, and ethical considerations lie in the hands of its developers and users. Software has no more conscience than does my stethoscope or scalpel blade. Highlighting the potential for ‘AI without a conscience’ propagates unwarranted anxiety that does not foster learning, curiosity, or development. AI, like any technology, has no intrinsic morality; it operates based on the principles and guidelines set by its developers and users.
One crucial aspect not addressed in the article is the potential for ‘AI hallucination’, where AI sees or creates patterns that don’t exist. This limitation should be recognized and managed with stringent testing and guardrails. Privacy is another significant concern in AI applications, especially regarding data handling and sharing in medical settings. It is critical that these areas be considered in any discussion about AI implementation in veterinary medicine, and their absence from such a discussion does not impart a sense of a thorough understanding of current AI software.
Geoffrey Hinton, Google’s recently resigned “godfather of AI,” said: “I think that if you work as a radiologist, you are like Wile E. Coyote in the cartoon. You’re already over the edge of the cliff, but you haven’t yet looked down. There’s no ground underneath. People should stop training radiologists now. It’s just completely obvious that in five years deep learning is going to do better than radiologists.” He said that in 2017, and he was wrong. The technology isn’t as good, nor has it developed as fast, as the coders believed it would. More accurately, I believe, it isn’t as good as promised yet.
Radiologists like Dr. Cohen will see a dramatic change to their corner of our profession in the years to come. Their work will likely become more interesting and less rote. They will see fewer “easy” cases each day, and their expertise will become more important for the more difficult and more complex cases. Their work will be enhanced by AI software. I doubt they will ever be completely replaced, although it is perhaps worth noting that 31% of American radiologists have faced a malpractice claim, with “missed” diagnoses being the most common complaint.
While it is important to highlight and discuss the potential challenges and dangers that come with AI use in veterinary medicine, they should not overshadow the extensive benefits that this powerful and pluripotent technology offers. Extensive testing and evaluation of artificial intelligence software is not merely merited, it is actively under way. Similarly, the absence of stringent regulations should prompt efforts toward their creation rather than induce fear (the EU passed such regulations just last week). Obstacles and challenges are calls for education, effort, and innovation rather than vague and foreboding warnings that lack depth of understanding. As we move forward, a balanced perspective will facilitate the constructive and beneficial use of AI in veterinary practice, leading to improved animal healthcare outcomes. It is our responsibility as veterinarians not to ignore this opportunity, but to embrace it as a way to better the practice of medicine.
Thanks for this measured perspective (I expected a good read to follow an epigraph from Eric Topol). I appreciate the categorization of these new and developing tools alongside the scalpel and the stethoscope. Hominids co-evolved with technologies including language, and I think AI continues that foundational relation. Human intelligence has always been, in some essential way, artificial intelligence. So I think you're right that the stakes are mainly to be found in the social relations that implement the tech and not the tech itself (though it is worth considering the energy demands for running thousands of GPUs). I've come to regard the fears surrounding generative LLMs—especially the most extreme dystopian scenarios—as a PR strategy. Why else would the likes of Sam Altman and Elon Musk pronounce such stark warnings while continuing to work on and invest in it?
"The lecture as described by the article also appears to overlook the fact that AI does not replace the practitioner but supports them."
Well, that is how it *SHOULD* be used. Based on my previous work experiences, I can say that is not always how the C-suite views it or how it is implemented on the ground... 👀