Stanford stepped in it just before Christmas when it distributed its COVID vaccine supply according to a homemade algorithm. When the algorithm was unleashed, it invited non-patient-facing personnel, and only one resident, into the vaccine's first phase. It apparently failed because residents, as institutional nomads, had no 'home base' or address identifying them as front-line doctors. Instead of a staged, media-ready celebration, Stanford faced a pre-Christmas, category 5 shiitake show.
From ProPublica:
An algorithm chose who would be the first 5,000 in line. The residents said they were told they were at a disadvantage because they did not have an assigned “location” to plug into the calculation and because they are young, according to an email sent by a chief resident to his peers. Residents are the lowest-ranking doctors in a hospital. Stanford Medicine has about 1,300 across all disciplines.
As a reminder, an algorithm is a series of logical instructions that allows us to complete a task. More specifically, from Hannah Fry in her excellent book, Hello World: Being Human in the Age of Algorithms:
They boil down to a list of step-by-step instructions, but these algorithms are almost always mathematical objects. They take a sequence of mathematical operations – using equations, arithmetic, algebra, calculus, logic and probability – and translate them into computer code.
This failure of the Stanford vaccine algorithm offers insight into algorithms and their limitations. Here are a few thoughts.
A vaccine algorithm is only as good as the humans that design it
So what happened here? In the simplest terms…
- Someone wrote it the wrong way. This was faulty human work masked in code.
- Someone else trusted it. Or they didn’t trust it but just gave it the benefit of the doubt.
From MIT Technology Review:
Irene Chen, an MIT doctoral candidate who studies the use of fair algorithms in health care, suspects this is what happened at Stanford: the formula’s designers chose variables that they believed would serve as good proxies for a given staffer’s level of covid risk. But they didn’t verify that these proxies led to sensible outcomes, or respond in a meaningful way to the community’s input when the vaccine plan came to light on Tuesday last week. “It’s not a bad thing that people had thoughts about it afterward,” says Chen. “It’s that there wasn’t a mechanism to fix it.”
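Chen's point about unverified proxies can be made concrete with a toy sketch. This is purely illustrative and not Stanford's actual formula; the location table, point values, and function are invented for the example. It shows how a reasonable-looking proxy (an assigned "location" with a known exposure level) silently zeroes out anyone who rotates between units, like a resident.

```python
from typing import Optional

# Hypothetical exposure weights for assigned locations (invented values)
LOCATION_EXPOSURE = {
    "covid_icu": 10,
    "emergency_dept": 8,
    "outpatient_clinic": 3,
    "admin_office": 1,
}

def priority_score(age: int, assigned_location: Optional[str]) -> int:
    """Higher score = earlier vaccination (toy logic only)."""
    # Proxy 1: older staff assumed to be at higher personal risk
    age_points = 5 if age >= 65 else (3 if age >= 50 else 1)
    # Proxy 2: COVID exposure inferred from an assigned location.
    # Residents rotate across units, so they have no entry here and
    # fall through to zero -- the failure mode described above.
    exposure_points = LOCATION_EXPOSURE.get(assigned_location, 0)
    return age_points + exposure_points

senior_admin = priority_score(age=66, assigned_location="admin_office")
young_resident = priority_score(age=29, assigned_location=None)
assert senior_admin > young_resident  # 6 vs 1: the code "worked" as written
```

Every line of this runs exactly as designed; the sensibility of the outcome is the part no one checked. That is Chen's mechanism-to-fix-it gap in miniature.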
Implied in the two steps above is that bias is built into the creation of any algorithm, and certainly into the minds of those interpreting its output.
Expectations are key to understanding the limits of technology
Building on the second point of failure are our expectations as the humans looking at this algorithm. Giving the tool the benefit of the doubt is one side of a paradox that researchers frame around algorithm aversion. Also from Hello World:
But there’s a paradox in our relationship with machines. While we have a tendency to over-trust anything we don’t understand, as soon as we know an algorithm can make mistakes, we also have a rather annoying habit of over-reacting and dismissing it completely, reverting instead to our own flawed judgement. It’s known to researchers as algorithm aversion. People are less tolerant of an algorithm’s mistakes than of their own — even if their own mistakes are bigger.
So beyond understanding how the algorithm is shaped we have to understand our own shortcomings.
Sherry Turkle in Alone Together: Why We Expect More From Technology and Less From Each Other describes our strange human tendency to 'cover' for technologies as complicity. We get attached to these things and we like to defend them. (Alone Together is a foundational read for understanding our relationship to technology.)
Algorithms won’t fix the dilemma of vaccine distribution
While we want to reduce things to a simple equation that takes humans out of it, the truth is that there is a bunch of human judgment behind who gets the vaccine when. Local standards, values, vaccine supplies, care setting and personnel are among the list of factors that may explain differences in roll-out.
The Stanford vaccine algorithm debacle offers a window into the future of medical decision making. And as we've seen, algorithms don't provide the sterile objectivity that many imagine.
As we saw here, AI failure will be the 21st century's 'the dog ate my homework' (h/t to David Armano on Twitter).
If you like this you might like the 33 charts COVID Archives. It’s everything on the site written about COVID. Check it out…
Links to Amazon are affiliate links. Photo by Markus Winkler on Unsplash