Running an algorithm for the management of a well-typed tumor or a child with a septic joint is relatively easy. But managing the expectations of that cancer patient or nervous mother is something more involved. Expectations are how we imagine the future given what we currently know and understand. Patient expectations are part of the human experience of disease.
Framing a problem around experience is uniquely human
When I ask a young mother what she thinks is going on with her child, what I’m looking for is for her to put her child’s process into some context for me. It frames the problem from the perspective of the patient or the mother. With that perspective I can then deconstruct elements of what she perceives or build on realities that weren’t evident on the surface of the story. I help her and she helps me. She learns and I learn, and we come closer to a common understanding – a level place from which we can approach the problem together.
This idea of framing a problem around experience or perspective is a human phenomenon. And it’s one of the many problems with medicine by algorithm. The reductionist view of a cleanly defined lesion as nothing other than something to fix ignores the broader context of our experience with disease.
What can a human doctor do better than an AI?
I think a lot about how human healers are evolving with and around the exponential rise of medical technology. And one question that you will see reiterated here on this site is, ‘what is it that humans can do that an artificial intelligence won’t be able to do?’ I think about it a lot because it gets to the core question of what defines us. Helping a patient navigate around what they believe and understand is one of those human capacities.
Going forward, various forms of narrow intelligence (A.I. with the ability to accomplish a narrow, defined set of goals, like playing a board game or driving a car) will play key roles in patient management. But no matter how good artificial general intelligence (the ability to accomplish any cognitive task as well as a human) becomes, the management of expectations may be one thing better handled human-to-human.
So what kind of healer do we want?
Thinking through these scenarios gets us closer to understanding what makes us special as human healers. Or it starts the conversation about what we might prefer as patients despite the availability of remarkably good artificial general intelligence. Just asking what we want to keep for ourselves begins to move us from the deterministic question of what will happen to the question of what we want to happen (as originally posed by Max Tegmark in Life 3.0: Being Human in the Age of Artificial Intelligence).
Despite how smart our machines become, expectations and experiences around life with a medical problem will always be characteristically human.
If you liked this post you should check out our MD Future Archives. This tag marks all of the 33c writing that deals with the future of physicians. While I avoid calling myself a futurist, this is the closest thing to predictions and prognostications about doctors. And remember that every post on the site carries carefully curated tags that help you find related material. Just peek below the end of the post for the shaded grey tags. I hope you like what you find!