Belling the Cat, Black Box Bureaucracy, & Advisory Burdens in Vet Med
And what we should do about it.
A year or so ago, I had an opportunity to present to the American Association of Veterinary State Boards. And I flubbed it. I mean I totally botched it. I mean to say that I blew it in the same way the Kansas City Chiefs' offensive line blew it.1
I apologized to the director of the organization, who was gracious enough to dismiss the failure as one familiar to anyone who has ever presented. Kind of him. I think I’ve just about scrubbed the egg from my face.
The real shame in it is that I let down the advancement of artificial intelligence in veterinary medicine. I had an opportunity to educate, and I represented the case for AI poorly.
I was eager to read the AAVSB’s white paper on regulatory considerations for the use of artificial intelligence in veterinary medicine, and my initial response was mostly positive. It remains so. The AAVSB has no regulatory power in the United States or Canada; rather, its role is to develop model regulatory language to help member boards clarify and/or modernize their own laws and regulations.
I think the AI considerations whitepaper is appropriately and, for the most part, wisely tepid. Artificial intelligence is moving at a jaw-dropping pace that even dedicated users and observers struggle to match. It’s the result of an extremely well-funded tech arms race and, I think, it’ll be good for all of us.
The authors do an excellent job of recognizing and advocating for education’s role, and especially continuing education for board members. Further, I think their case for education rather than a punitive approach is a powerful one. It’s got me thinking of reaching out to my own state veterinary board to discuss it.
Nonetheless, I think there are worthwhile discussions to be had about this piece.
Vague, Non-actionable Recommendations
The paper repeatedly states that “licensees should practice due diligence” or “must understand” AI limitations, without providing concrete criteria for what constitutes adequate understanding or due diligence. This creates uncertainty in the advice rather than clarity.
Stance on Informed Consent
Informed consent is a foundational concept in medicine, but I sometimes find its implied and explicit expectations at odds with the reality of modern veterinary practice. Consider anesthesia, for example. If I were to say to a client, “We don’t fully understand the precise mechanism of this anesthetic, but we know that when it's vaporized and delivered at a specific concentration mixed with oxygen, it reliably induces and maintains anesthesia in your pet,” I suspect I would do a lot less surgery.
The truth is, it took me eight years of higher education and multiple licensing examinations to learn how to safely perform a procedure as common and as easy as a neuter. Yet I'm expected to convey the nuances of graduate-level science—spanning chemistry, anatomy, pharmacology, physiology, and physics—in just a few minutes, often to someone who may not have had the opportunity to engage deeply with these subjects.
This isn’t a critique of clients or their intelligence. It’s an acknowledgment of the gap between the complexity of modern medicine and the time and tools we’re given to communicate it. “Informed consent” is a noble goal (and a legal mandate), but in practice it often relies more on trust than on deep understanding. That’s not a failure of the system, necessarily, but it’s something we need to approach with humility and realism. It’s the kind of thing you notice acutely when you talk to clients five or six days a week for ten years.
Utterly mad that we make it a standard of care, but I’ve not got any better ideas.2
On page 16, the document states that informed consent should be obtained when AI assists in “writing or summarizing medical records.” On page 17, the document states that informed consent may not be necessary for “routine, low-risk administrative tasks.” This is a confusing standard, as it does not specify which sort of record-keeping is a core clinical function with significant medical and legal implications, and which sort is a low-risk routine administrative task.
This worries me, and I think it should worry you too. Because I think it sets up the kind of subjectivity that can result in inconsistent, unequal protection and enforcement of regulations. Politically connected, established, eminent, or wealthy practitioners might receive lenient treatment, while newer practitioners with less in the way of resources or reputation, those from underrepresented groups, or those who challenge existing power structures could face stricter scrutiny.
I worry about this. Disciplinary boards would have almost unlimited discretion to determine post hoc what constitutes appropriate AI use.
On page nine, the document acknowledges this: “The determination of how the Licensee has demonstrated due diligence and professional judgment with the use of AI is similar to other clinical decision-making consideration during Board disciplinary discussions.”
Except that the veterinarians on regulatory boards are genuine experts in veterinary medicine, not necessarily in artificial intelligence. They are all but certainly acting from something other than a position of expert understanding.
Further, and by way of example, my regulatory board in Pennsylvania includes three practitioners who are in direct or very nearly adjacent competition with me. While I’ve never had any reason to doubt their integrity, that is no guarantee that everyone everywhere has a board with uniformly honorable members. An unethical person or group could use that discretion to eliminate a legitimate, innovative competitive advantage, or even to pursue disciplinary action against the innovator.
Lessons from Human Medicine
Can we please knock it off with looking to human medicine as a model of care? Honestly, it’s ridiculous. No sane person looks at the regulations around the American healthcare system and thinks, “Wow! Let’s model ourselves after that in order to ensure successful outcomes for our profession and those in our care!”
Failure to Address Algorithmic Bias
While mentioning bias, the document doesn’t adequately address how veterinarians should identify or mitigate AI biases. Admittedly, that’s a difficult question to answer given the pluripotency of artificial intelligence. But I believe offering a path, even a hypothetical one, would be appropriate.
On Responsibility
In my disastrous presentation to the AAVSB, Dr. Ryan Appleby was kind enough to offer some challenging questions. One of the questions he posed was the one of responsibility.
This is my favorite question on AI use, and not just because the answer always shocks people. As a general practitioner and practice owner, I am so accustomed to absolute responsibility that I barely notice it anymore. Of course the burden is mine. Just as it’s mine for the drugs and scalpels and monitors and oxygen tanks and everything else.
Today’s culture is one of avoiding responsibility for anything and everything, but I - and others in roles like mine - simply do not have that luxury. The question is meant, I think, as a kind of Sword of Damocles, to send the faint of heart scurrying away from possible ownership of a mistake. Being told that the sword is bigger, heavier, or sharper than it used to be doesn’t change the responsibility to the person who lives beneath it.
On Communication
There’s an aspect of this whitepaper that rankles, perhaps just me chafing at the idea of imagined authority. From page four of the PDF:
In North America, there are currently no federal premarket approval requirements for AI (defined as Software as a Medical Device, or SaMD) used in veterinary medicine. This may create the false impression that no regulations apply. On the contrary, the responsibility for appropriate use of AI rests entirely with the Licensee, and their decisions and actions are regulated.
And with precisely the same energy and tone, I am inclined to remind the AAVSB that they are not a governing or regulatory body, that they possess no authority whatsoever, and that no veterinarian in the world answers to the American Association of Veterinary State Boards. This document is a whitepaper, and it amounts to little more than a well-intentioned group book report.
Or, maybe, we could all find our way to being a little more professionally respectful of one another and lighten up with the condescension? We could all relax a bit. Seems like the wiser course of action.
Risk of Regulatory Fragmentation
The document acknowledges that there are, at present, no federal premarket approval requirements for veterinary AI, but then suggests that state and provincial boards create their own guidance. Please don’t do that. For the love of innovation and sanity, I cannot imagine a system of regulation worse for innovation than 63 states and provinces each creating their own regulatory landscape. That would be a compliance nightmare, and it could very well destroy any hope of developing and advancing AI in veterinary medicine.
A lack of evaluation methodology or wisdom on the topic is a reason for ongoing education rather than regulation.
Inadequate Technical Guidance
For a document focused on AI regulation, it provides surprisingly little guidance on evaluating AI system performance metrics, validation requirements, or minimum technical standards.
Referring to the earlier point on “due diligence,” if the body of experts that publishes a whitepaper on regulatory considerations is not comfortable with offering specific guidance on evaluations, why should anybody else feel comfortable doing so?
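To make the complaint concrete, here is a minimal sketch (mine, not the whitepaper’s) of the kind of evaluation guidance I mean: comparing an AI tool’s calls against clinician-labeled cases and reporting sensitivity and specificity. The tool, the cases, and the numbers below are entirely hypothetical.

```python
# A minimal sketch of the sort of concrete evaluation the whitepaper never asks for:
# scoring an AI tool's binary calls against clinician-labeled gold-standard cases.
# All data and the tool itself are hypothetical illustrations.

def evaluate_binary_tool(predictions, labels):
    """Compare AI predictions (True/False) against clinician gold-standard labels."""
    tp = sum(1 for p, l in zip(predictions, labels) if p and l)
    tn = sum(1 for p, l in zip(predictions, labels) if not p and not l)
    fp = sum(1 for p, l in zip(predictions, labels) if p and not l)
    fn = sum(1 for p, l in zip(predictions, labels) if not p and l)
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity

# Hypothetical example: an AI flag for "radiographic evidence of effusion"
# scored against ten clinician-reviewed cases.
ai_calls    = [True, True, False, True, False, False, True, False, True, False]
gold_labels = [True, False, False, True, False, True, True, False, True, False]

sens, spec = evaluate_binary_tool(ai_calls, gold_labels)
print(f"Sensitivity: {sens:.2f}, Specificity: {spec:.2f}")
```

Even something this modest, applied to a representative caseload, would give boards and practitioners a shared, objective vocabulary. The document offers nothing of the sort.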
False Equivalence with Extra-label Drug Use
This is perhaps my biggest problem with the document. In what feels like a desperate lunge for a veterinary analogy, the authors have compared AI use in veterinary medicine to extra-label drug use. Drugs have established pharmacological properties that are fixed, defined, and clearly measurable; AI systems do not. To wit, even the authors of this paper have not put forth such definitions or standards.
AI systems make probabilistic decisions and carry the biases of their training data. We do not understand the mechanisms of artificial intelligence the way we understand mechanisms of action in pharmacology. This inexact analogy has the potential to inspire fear and doubt rather than intellectual rigor and investigation.
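To illustrate the distinction rather than merely assert it, here is a toy contrast (mine, not the authors’): a labeled dose is a fixed function of known inputs, while a generative model’s output is sampled and can differ from run to run on the same input. The “model” below is a stand-in with made-up probabilities, not a real system, and none of it is clinical guidance.

```python
# Toy contrast: deterministic drug dosing vs. a probabilistic, sampled output.
# The dose formula and the output distribution are illustrative only.
import random

def carprofen_dose_mg(weight_kg, mg_per_kg=4.4):
    """Deterministic: the same patient weight in gives the same dose out, every time."""
    return weight_kg * mg_per_kg

def toy_ai_differential(seed=None):
    """Probabilistic stand-in for a generative model: sampled output varies run to run."""
    rng = random.Random(seed)
    differentials = ["pancreatitis", "dietary indiscretion", "foreign body", "IBD"]
    weights = [0.4, 0.3, 0.2, 0.1]  # made-up output distribution
    return rng.choices(differentials, weights=weights, k=1)[0]

print(carprofen_dose_mg(20.0))                     # always 88.0
print([toy_ai_differential() for _ in range(5)])   # varies between runs; that's the point
```

A board can check the first function against a label. There is no equivalent label to check the second against, which is why the extra-label analogy does not hold.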
Critically, and more than a little ironically, the document fails to propose any concrete regulatory framework that balances innovation with accountability. Instead, it merely shifts the entire burden to practitioners and licensing boards without providing them with the tools to evaluate or utilize these increasingly complex systems effectively.
On the one hand, that’s a moving target and darn near impossible to hit these days. On the other hand, if you can’t measure or evaluate something effectively, why on earth would you see fit to regulate it?
The subjective framework is particularly problematic given the rapid evolution of AI technologies. Without clear, objective standards, regulatory bodies have no reliable way to ensure compliance, creating an environment where enforcement can be arbitrary, selective, or influenced by factors beyond patient or public safety.
What To Do About It
I am, shall we say, strategically impatient when it comes to most things. That is, I would rather meet an issue actively and intentionally than wait for a solution to present itself or deftly avoid any involvement or risk of making a mistake. Sometimes I leave a veterinarian-shaped hole in the wall, sometimes I fall flat on my face in a presentation to the AAVSB, and sometimes I get a little wood on the ball. But I don’t like dodging. I can’t abide feeling like a coward.
I’ll put forth a seemingly radical idea: you don’t have to regulate artificial intelligence,3 but rather we can commit to educating the people using it.
That’s not to say that we never regulate it, but we don’t know enough to regulate it right now. The way to do the most good at this time is education, not regulation.
There’s a particular discomfort that settles in my chest when I consider avoiding something altogether. It’s a specific feeling of cowardice, the sort that seeks to justify itself through sophisticated excuses about complexity or concern.
What I’ve come to rely upon is that courage isn’t found in grand declarations or in absolute comprehension. It lives in the simple, often frustrating, process of learning. Step by faltering step. Each one flawed, but each one in a new way. Day by day, they become stronger, surer, more practiced.
In the face of this, I don’t expect mastery. I expect confusion and, in fact, I pursue it. I actively seek out my own weaknesses, my own ignorance, and my own faults. It is uncomfortable.
At first.
But those small revelations add up. Those bright flashes of comprehension are pure joy. And slowly, I’m able to move from a place of ignorance to one of, well, less ignorance. Misconceptions gradually replaced by understanding.
The alternatives include remaining willfully uninformed while AI reshapes my beloved profession, or claiming a position of prominence while avoiding any genuine and/or well-informed opinions. Both feel, to me, to lack integrity.
So I tinker. I fiddle. I experiment. I read explanations that make little sense. I hunt for gaps in my knowledge. I challenge premises. I make errors that occasionally embarrass, but would only be failures if I did not recognize and heed their instructive value. Each small act of engagement, of mental effort, builds not just comprehension but the confidence to engage further.
This learning isn’t separate from our professional responsibility; this learning is our professional responsibility. The systems will continue evolving whether we understand them or not. Our choice is simply whether we’ll participate in that evolution with diligence and courage or be carried along by currents we’ve refused to acknowledge and study.
There’s something profoundly and honorably human in a willingness to learn what challenges us. It connects us to the generations of professionals who faced technological change with initial reluctance, thoughtful skepticism, and determined adaptation. Courage, commitment, and competence, for many, aren’t in knowing everything immediately, but in refusing to be defined by what we don’t yet understand.
What to do about it? Keep learning.
About the Author: Dr. William Tancredi is a small animal veterinarian, hospital owner, and inveterate tinkerer at the intersection of clinical medicine and emerging technology. Known for his unapologetic candor, unflagging curiosity, and relentless pursuit of clarity, he writes about artificial intelligence, professional responsibility, and the messy, meaningful business of modern veterinary care. He believes in earning trust, learning out loud, and occasionally calling a whitepaper a book report. When he's not operating, writing, practicing, or presenting, he's likely chasing a question that doesn't yet have a good answer—or at least a satisfying one.
Go Birds.
2. Informed consent often leans too heavily on the word “informed.” Informed consent in the real world isn’t really about information, it’s about trust. But that’s a can of worms I’m saving for a special occasion.
3. “But we’re a regulatory body, regulating things is what we DO!” Not always, sometimes you regulate, sometimes you educate. This one feels like an opportunity for the latter.