Centaurs, Cyborgs, and How to Get Caught Up on AI in Vet Med
AI expert and researcher Ethan Mollick released his latest working paper and it's a terrific read.
Despite a lack of widespread adoption, generative AI is progressing quickly. There's more and more research coming out about what it is, what it can do, and how to use it.
The technology's been called "pluripotent," a word stolen from biology, where it refers to a cell that can grow into a variety of cell types. That lack of a fixed developmental path is exactly how we need to think about generative artificial intelligence in veterinary medicine.
But just as it's not enough to hand someone a sharper scalpel or a better imaging modality, it's not enough to say, "This is great! Use it!" It helps to offer some basic use cases, encourage trial (and error), and let adoption grow on its own.
Centaurs and Cyborgs
In the latest working paper, Mollick and his colleagues describe two types of AI users: Centaurs and Cyborgs.
My delight in the mythological and science fiction analogies aside, the terminology is cleverly applied.
Centaurs are users who keep a clearly defined boundary between AI functions and human functions. Like the mythological creature, there's a sharp line between where the AI's tasks end and the human's begin. In writing on Substack, I often use AI-generated images with no alteration or editing from me; in that circumstance, I fall into the centaur category.
Cyborgs, meanwhile, are users who weave artificial intelligence into a variety of smaller tasks throughout their work. Mollick and others have written about using AI in this way to help with writing or other creative tasks. The way I use AI to organize an article or evaluate (and inevitably note the absence of) segues in my writing is more of the cyborg style.
Dr. Mollick and his colleagues who produced this working paper are quite clear in noting that there are certain cases for which AI has less value and others for which it absolutely cannot be left to its own devices. I think most users of the technology would agree with that, and I suspect the percentage would be even higher among those advocating for its use in medicine. It is, as ever, an augmentation rather than a replacement. Albeit a powerful one.
I believe these are two easily consumed ideas for how we can use AI in our work, whatever that work ends up being. It's a way to consider the software in a palatable and easily understood fashion that I hope will lower the hesitations and barriers to use that it's faced so far.
![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7c837731-0ae7-4472-be10-8bfa039f3c5c_1024x1024.png)
The Jagged Frontier
Beyond the centaurs and cyborgs, Dr. Mollick seems to have a knack for wonderfully descriptive language, and his coining of the phrase "jagged frontier" is one of my favorites. It captures how hard it is to predict what generative AI will do well; its capabilities are less consistent than we might expect. The study emphasizes an expanding yet uneven "jagged technological frontier" within which AI can either complement or replace human tasks. On the frontier's fringes, however, AI output may be inaccurate and can even hamper human performance. Because AI's capabilities are still rapidly evolving, discerning this boundary can sometimes be challenging (and sometimes be painfully obvious), and it's not a static target.
The opportunity for exploration and trial and error is, frankly, fun. It's software with capabilities it was not specifically created to have (mind-blowing in its own right), it can be used to solve specific problems within a very narrowly defined set of circumstances, and it occasionally produces hallucinations. On the one hand, it's terrible by the standards of classic deterministic software. On the other, it's just so cool. It's like a teleportation device that usually gets you where you want to go; you just have to be paying sufficient attention to know when it hasn't.
The Best Part
A friend of mine, Digitail CEO Sebastian Gabor, introduced me to the newest generative AI earlier this year. One of the things that struck me, and that I joked about at the time, was how it could "make everybody as good as me."
I spend a lot of time reading about and studying veterinary medicine, beyond just the day-to-day practice of it. Clinician's Brief and dvm360 are always among my most-visited webpages (I skip VIN though, message boards are soooo 1900s). I work hard to keep up with medicine and research in order to bring a better quality of care to my patients. The immediately apparent advantage of generative AI is its potential to raise the quality of care provided by the lowest performers.
And research has borne out that instinctive conclusion. Consultants in the study with below-average performance improved by a shocking 43% when using AI, while above-average performers improved by 17%. Why should we use this technology in veterinary medicine? Because you can take the below-average performers and improve medical care by 43%. Because you can take the top performers and improve medical care by 17%.
If “better medicine” isn’t reason enough to explore something, you may be in the wrong line of work.
Now what?
So now we try it out. We test it. We fiddle and tinker and engineer. We interact. We play.
The landscape of veterinary care is ever-expanding. As we navigate this shifting terrain, tools like generative AI not only hold promise but also challenge our traditional approaches. Embracing this tech today doesn't just honor our oath to the lifelong improvement of our professional knowledge and competence; it could also shape a brighter, more efficient tomorrow populated by happier people.