Is artificial intelligence coming for your job?
The risks of AI getting it wrong with a patient.
“Hey, Siri, will artificial intelligence take the place of the anesthesiologist?” That question was at the heart of Monday’s “Artificial Intelligence and Machine Learning for Anesthesiologists,” presented by Christopher W. Connor, MD, PhD, Anesthesiologist at Brigham & Women’s Hospital, Harvard Medical School in Boston.
Dr. Connor described how machine learning allows decision-making algorithms to emerge from simple mathematics. He also differentiated between current successful applications of AI and the requirements for clinical use.
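As a purely illustrative sketch of that idea (my example, not a slide from the talk), the snippet below shows how a decision rule can emerge from nothing more than repeated addition and comparison: a perceptron nudges two weights and a bias until it reproduces a target decision (logical OR).

```python
# Minimal sketch, for illustration only: a decision rule "emerging" from simple
# arithmetic. A perceptron adjusts two weights and a bias until its decisions
# match the target outputs for logical OR.

import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])   # inputs
y = np.array([0, 1, 1, 1])                       # target decision: OR

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(20):                              # a few passes over the data
    for xi, yi in zip(X, y):
        pred = int(xi @ w + b > 0)               # current decision
        w += lr * (yi - pred) * xi               # nudge weights toward the target
        b += lr * (yi - pred)

print([int(xi @ w + b > 0) for xi in X])         # [0, 1, 1, 1]
```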
There has been astonishing recent progress in language translation, image recognition, natural speech processing, textual analysis, and self-learning. “Are we on the upswing or have we plateaued?” he asked.
As an example, he offered the first line of the great American novel, were he to write it:
His car was his car, and her car was her car.
Predicting a global bestseller, he put his first line into Google Translate to show what it would look like in French:
Sa voiture était sa voiture et sa voiture était sa voiture.
With a simple translation into French, he said, “Google just killed the great first line of my bestseller,” because the literal translation did not respect the nuances of the language or the poetry of his prose. In French, the possessive agrees with the gender of the noun rather than the owner, so “his car” and “her car” both become “sa voiture,” and the line’s contrast vanishes. With continued learning, language translation might improve on this literal, word-for-word approach.
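A toy, word-for-word sketch (my illustration, not Google Translate’s actual method) makes the collapse concrete: a literal translation chooses the possessive from the gender of the noun and simply drops the owner’s gender carried by “his” and “her.”

```python
# Toy word-for-word translation sketch (illustrative only). In French the
# possessive agrees with the noun ("voiture" is feminine), so the owner's
# gender in "his"/"her" is discarded by a literal translation.

NOUNS = {"car": ("voiture", "feminine")}     # tiny bilingual lexicon

def translate_possessive_phrase(owner: str, noun: str) -> str:
    noun_fr, gender = NOUNS[noun]
    possessive = "sa" if gender == "feminine" else "son"   # owner is ignored
    return f"{possessive} {noun_fr}"

print(translate_possessive_phrase("his", "car"))   # sa voiture
print(translate_possessive_phrase("her", "car"))   # sa voiture (the contrast is gone)
```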
So, he asked, when does AI work best?
Classical AI works best when the problem is well defined and closed, as with games. In a game, he said, the range of possible actions can be enumerated reasonably easily and the outcomes can be valued reasonably easily. In chess, for example, there are 20 possible opening moves for White and 20 possible replies for Black.
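Purely as an illustration of that point (my example, not from the presentation), a minimal minimax search over a toy game shows what “closed and well defined” buys you: every legal action can be enumerated and every terminal position valued exactly. The game here is Nim, where players alternately take 1 to 3 sticks and whoever takes the last stick wins.

```python
# Minimal minimax sketch for a closed, well-defined game (Nim): enumerate every
# legal action, value every terminal outcome, and back the values up the tree.

from functools import lru_cache

@lru_cache(maxsize=None)
def best_outcome(sticks: int) -> int:
    """Return +1 if the player to move can force a win, -1 otherwise."""
    if sticks == 0:
        return -1   # the previous player took the last stick and already won
    # Enumerate every legal action (take 1, 2, or 3 sticks) and value each outcome.
    return max(-best_outcome(sticks - take) for take in (1, 2, 3) if take <= sticks)

print(best_outcome(20))   # -1: a multiple of 4 is a losing position for the mover
print(best_outcome(21))   # +1: the player to move can force a win
```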
This doesn’t describe anesthesiology. Anesthesiology is a pressured cycle of interpretation, physical action, and response rather than any single cognitive act, he pointed out. In a simple, straightforward patient situation, his knowledge base can inform his clinical judgment and successfully determine the most likely outcome. If the patient’s situation is complicated, however, he might base his clinical judgment on whether the patient resembles someone he took care of once in his 15-year career. In that instance, his clinical judgment disregards the majority of his knowledge base.
He went on to present questions about AI in anesthesiology, suggesting “machine-assisted discovery” may be more likely than classic machine learning.
In such cases, there is an outcome that should be attained or avoided, but it is not certain which factors lead to that outcome, and a clinical test that predicts it cannot be designed. The available data provide only circumstantial evidence of the outcome.
Dr. Connor said, “The signal is too diffuse across the dataset for it to be learned reliably from a small number of cases. Also, the clinical decision-making relies upon a subconscious judgment that the anesthesiologist can’t elucidate.”
He went on to explain how machine learning and neural networks evolve from principal component analysis, or dimensionality reduction.
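For readers who want to see the mechanics, a minimal principal component analysis can be written with a singular value decomposition; the sketch below is illustrative only, and the data and dimensions are made up rather than drawn from the talk.

```python
# Minimal PCA-by-SVD sketch (illustrative data): center the features, decompose,
# and project onto the top components to reduce the dimensionality.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))            # 200 "cases", 10 features
X[:, 1] = 0.9 * X[:, 0] + 0.1 * X[:, 1]   # inject a strong correlation

Xc = X - X.mean(axis=0)                   # center each feature
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 2                                     # keep the top-k principal components
scores = Xc @ Vt[:k].T                    # 200 x 2 projection of the data
explained = (S[:k] ** 2).sum() / (S ** 2).sum()
print(f"variance explained by {k} components: {explained:.1%}")
```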
He posed the question: If there is to be AI in the OR, how much? He used the idea of human decision-making and loops to illustrate his answer. “In the loop – human involvement is necessary in order for the process to occur – is where we are today,” he said. A midpoint might be on the loop, in which automated processes perform the majority of the work and humans verify the results. The final level is out of the loop, in which the machine is sufficiently accurate and self-correcting that the process can run unmonitored.
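As a conceptual sketch only (the names and structure below are mine, not Dr. Connor’s), the three levels can be thought of as three oversight modes wrapped around the same automated step.

```python
# Conceptual sketch of the three oversight levels: in the loop (a human must act
# for the step to occur), on the loop (the machine acts, a human can veto), and
# out of the loop (the process runs unmonitored).

from enum import Enum, auto

class Oversight(Enum):
    IN_THE_LOOP = auto()      # human approval required for anything to happen
    ON_THE_LOOP = auto()      # machine proceeds unless a supervising human vetoes
    OUT_OF_THE_LOOP = auto()  # machine runs unmonitored

def run_step(action: str, mode: Oversight, ask_human) -> bool:
    """Return True if the proposed action is carried out."""
    if mode is Oversight.IN_THE_LOOP:
        return ask_human(f"Perform {action}?")     # nothing happens without a yes
    if mode is Oversight.ON_THE_LOOP:
        return not ask_human(f"Veto {action}?")    # proceeds unless vetoed
    return True                                    # out of the loop

# Today's practice corresponds to IN_THE_LOOP: the anesthesiologist must act.
print(run_step("adjust the infusion rate", Oversight.IN_THE_LOOP, lambda prompt: True))
```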
He said he was hard pressed to think of anything in the OR that could truly be out of the loop, except perhaps line isolation.
“I don’t think robots are coming to take our jobs,” he said, in response to a question from the audience.
In anesthesiology, there are so many inputs and judgments. He also pointed to regulatory and legal hurdles that stand in the way of full AI implementation.
“What if AI is wrong with a patient? Whose fault would it be? Are we willing to roll the dice?”