Sometimes, treatments can make things worse – but it can be hard to know when that is (or isn't) so. Such is the case with sepsis, which kills nearly 270,000 people annually in the United States – and which can, sadly, sometimes be accelerated by the very treatments used to slow it. Now, researchers from Microsoft Research, the University of Toronto, Adobe India and MIT have teamed up to apply machine learning to predict when a treatment is helping – and when, instead, it's an impediment.
The machine learning model is specifically targeted at sepsis treatments. Previously, training a model on sepsis treatment decisions had been difficult because – given the nature of the problem at hand – many paths thought to be less advantageous for patients were, understandably, left unexplored during treatment, leaving a dearth of data about which treatments work best. So, instead, the researchers trained the model the other way around, helping it learn which treatments to avoid in order to keep patients out of a medical "dead end."
“When we think of dead ends in driving a car, we might think that is the end of the road, but you could probably classify every foot along that road toward the dead end as a dead end,” explained Taylor Killian, a graduate student working in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), in an interview with MIT’s Adam Zewe. “As soon as you turn away from another route, you are in a dead end. So, that is the way we define a medical dead end: Once you’ve gone on a path where whatever decision you make, the patient will progress toward death.”
The researchers call this approach "dead-end discovery," or "DeD." The negative-outcome-focused neural network is supplemented by a positive-outcome-focused network that checks the first network's conclusions. And, so far, the results are promising. "We see that our model is almost eight hours ahead of a doctor's recognition of a patient's deterioration," Killian said. "This is powerful because in these really sensitive situations, every minute counts, and being aware of how the patient is evolving, and the risk of administering certain treatment at any given time, is really important."
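To make the idea concrete, here is a minimal sketch of how such a flagging rule might look, assuming two pre-trained value estimators of the kind described above – one tuned to predict progression toward death, one toward recovery. The function names, thresholds, and alert logic below are hypothetical illustrations, not the authors' implementation:

    # Illustrative sketch only; q_death and q_recovery stand in for
    # pre-trained estimators (hypothetical interfaces, not the paper's code):
    #   q_death(state, treatment)    -> value learned from negative outcomes
    #   q_recovery(state, treatment) -> value learned from positive outcomes

    DEATH_THRESHOLD = -0.85     # hypothetical cutoff: treatment looks like a dead end
    RECOVERY_THRESHOLD = 0.15   # hypothetical cutoff: little estimated upside

    def flag_risky_treatments(state, candidate_treatments, q_death, q_recovery):
        """Return the subset of candidate treatments to avoid in this state.

        A treatment is flagged when the death-focused network scores it very
        negatively AND the recovery-focused network sees little benefit --
        the second network acting as a check on the first.
        """
        flagged = []
        for treatment in candidate_treatments:
            if (q_death(state, treatment) < DEATH_THRESHOLD
                    and q_recovery(state, treatment) < RECOVERY_THRESHOLD):
                flagged.append(treatment)
        return flagged

    def raise_alert(state, candidate_treatments, q_death, q_recovery):
        """If nearly all available treatments are flagged, the state itself is high risk."""
        flagged = flag_risky_treatments(state, candidate_treatments, q_death, q_recovery)
        return len(flagged) / max(len(candidate_treatments), 1) > 0.9

The point the sketch tries to capture is the one the researchers emphasize: the system does not choose treatments, it only rules some out, with the recovery-focused estimator double-checking the death-focused one before anything is flagged.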
The model was tested on a dataset of septic patients from the Beth Israel Deaconess Medical Center comprising some 19,300 admissions. Based on that dataset, the researchers came to the conclusion that "upward of 11 percent of suboptimal treatments could have potentially been avoided because there were better alternatives available to doctors at those times."
Of course, the researchers don’t think the model can – or should – replace human clinicians.
“Human clinicians are who we want making decisions about care, and advice about what treatment to avoid isn’t going to change that,” explained Marzyeh Ghassemi, an assistant professor at MIT and head of the Healthy ML group at CSAIL. “We can recognize risks and add relevant guardrails based on the outcomes of 19,000 patient treatments — that’s equivalent to a single caregiver seeing more than 50 septic patient outcomes every day for an entire year.”
To learn more, read the reporting from MIT's Adam Zewe and the researchers' accompanying paper.