The AI Will See You Now

New screening tools can predict which ER patients are most at risk for PTSD

CMRF Crumlin (Flickr/crmf_crumlin)

Every year, 30 million Americans visit hospital emergency rooms after experiencing traumatic injuries. Most of these patients do not suffer lasting psychological consequences, but perhaps 20 percent develop anxiety, depression, or post-traumatic stress disorder (PTSD) after leaving the ER. Timely and compassionate interventions would help re-center these patients’ lives, but the trick is predicting which ER patients will develop PTSD and need mental health support after discharge.

“In psychiatry, it’s complex to diagnose illness and to predict who will become sick,” says Columbia University psychiatrist and data scientist Katharina Schultebraucks. There are no clear signs of psychiatric challenges before their onset—unlike, say, diabetes, for which high blood sugar is a strong predictor of risk. That knowledge gap may be starting to close, because Schultebraucks and colleagues recently showed that artificial intelligence can predict which patients are most at risk of developing PTSD.

Schultebraucks’s team built an AI tool that combines two different sources of information: physical data, such as blood pressure and red blood cell count, routinely collected when people enter the ER for traumatic injuries; and patients’ responses to four statements about their injury, such as “I felt confused” or “I get upset when something reminds me of what happened.” Patients can say whether each statement is not true, sometimes true, or often true. The combined model predicted incidence of PTSD better than ER data or screening questions could alone.
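To make that combination concrete, here is a minimal sketch in Python of feeding both sources of information to a single classifier. The column names, patient values, and choice of a gradient-boosting model are illustrative assumptions, not the team’s actual pipeline.

# A minimal, illustrative sketch (not the study's actual model): combine routine
# ER measurements and the four screening responses into one feature table and
# train a single classifier on both. All column names and values are made up.

import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical physical data recorded at ER admission
vitals = pd.DataFrame({
    "systolic_bp": [128, 141, 110],
    "diastolic_bp": [82, 95, 70],
    "red_blood_cell_count": [4.7, 5.1, 4.2],
})

# Hypothetical answers to the four screening statements,
# coded 0 = not true, 1 = sometimes true, 2 = often true
screening = pd.DataFrame({
    "felt_confused": [0, 2, 1],
    "upset_by_reminders": [1, 2, 0],
    "felt_numb": [0, 1, 0],
    "on_guard": [2, 2, 1],
})

# Illustrative outcome: whether each patient later developed PTSD
developed_ptsd = [0, 1, 0]

# The combined model sees both sources of information at once
features = pd.concat([vitals, screening], axis=1)
model = GradientBoostingClassifier().fit(features, developed_ptsd)

# Predicted probability of developing PTSD for each patient
print(model.predict_proba(features)[:, 1])

In a sketch like this, the model never treats the two data sources separately; it simply learns from one table that happens to contain both routine measurements and screening answers.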

Schultebraucks and colleagues followed 377 traumatic injury patients in two emergency rooms, one in Atlanta and one in New York City, for several years. (They used most of the patient records in Atlanta to build the AI model and every patient record in New York to test it.) Their final model correctly flagged 90 percent of the patients who developed significant PTSD within a year of entering the hospital, and it correctly identified 83 percent of the patients who did not.
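That train-and-test split can be sketched in the same spirit, with synthetic records standing in for the Atlanta and New York patients; the point is only to show how fitting on one site and scoring the other yields sensitivity and specificity figures like those quoted above, not to reproduce them.

# A hedged sketch of the evaluation: build the model on one site's records and
# test it on the other's, then compute sensitivity (share of PTSD cases flagged)
# and specificity (share of non-cases correctly cleared). The synthetic data,
# feature count, and classifier are assumptions, not the study's setup.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)

# Stand-ins for the two sites: rows are patients, columns are combined features
atlanta_X = rng.normal(size=(300, 8))
atlanta_y = rng.integers(0, 2, size=300)   # 1 = developed PTSD within a year
nyc_X = rng.normal(size=(77, 8))
nyc_y = rng.integers(0, 2, size=77)

# Build on the Atlanta records, test on the New York records
model = GradientBoostingClassifier().fit(atlanta_X, atlanta_y)
predicted = model.predict(nyc_X)

tn, fp, fn, tp = confusion_matrix(nyc_y, predicted).ravel()
print("sensitivity:", tp / (tp + fn))   # the study reports roughly 90 percent
print("specificity:", tn / (tn + fp))   # the study reports roughly 83 percent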

The screening items are part of a longer tool called the Immediate Stress Reaction Checklist. Schultebraucks says that although the entire checklist cannot be completed during an ER admission, doctors are often able to ask patients four quick questions.

“Our results are very promising, but of course, before this can be used in clinical practice, we need to be sure that this algorithm works in all kinds of contexts and scenarios,” Schultebraucks says. Determining which factors in their model can most accurately predict PTSD development will also take further research.

One general concern with AI applications in health care, not just this PTSD screening tool, is that they could reflect, and even perpetuate, racial biases. Even if the designers of AI tools do not mean to amplify bias, the assumptions underlying their tools sometimes can.

New York University health anthropologist Kadija Ferryman cites one recent example of such an unintended consequence. An AI tool that a hospital system used to identify patients with the greatest health needs claimed that Black patients were healthier than white patients, when in fact the opposite was often true. The tool assumed that previous health spending correlated with risk; in other words, if one patient’s care cost less money than another patient’s, the former was deemed healthier. Although the care of Black patients did usually cost less than that of white patients, this was because the Black patients had insurance plans that led them to seek fewer health services, not because they were healthier. The AI wasn’t smart enough to make this distinction.

One possible risk with the PTSD screening tool is that it could over- or under-predict the risk of PTSD depending on someone’s race. To guard against this possibility, Ferryman suggests that its creators test and report the results of future iterations of the tool among patients of different racial groups. Not all AI tools are created equal—just as bias can be inadvertently baked into these tools, researchers can proactively stamp it out.
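Such an audit can be sketched in a few lines: compute the tool’s sensitivity and specificity separately for each group and report them side by side. The group labels, predictions, and outcomes below are invented for illustration.

# A minimal sketch of the audit Ferryman suggests: report the tool's sensitivity
# and specificity separately for each racial group. Group labels, predictions,
# and outcomes here are invented for illustration only.

import pandas as pd
from sklearn.metrics import confusion_matrix

audit = pd.DataFrame({
    "group":     ["A", "A", "A", "A", "B", "B", "B", "B"],
    "predicted": [1, 0, 1, 0, 0, 0, 1, 0],   # 1 = flagged as at risk for PTSD
    "actual":    [1, 0, 0, 0, 1, 0, 1, 0],   # 1 = later developed PTSD
})

for group, rows in audit.groupby("group"):
    tn, fp, fn, tp = confusion_matrix(rows["actual"], rows["predicted"],
                                      labels=[0, 1]).ravel()
    print(group, "sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))

Large gaps between the groups’ numbers would be the signal that the tool over- or under-predicts PTSD risk depending on race.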


Marcus A. Banks is a freelance journalist.
