AI Spots Hidden Signs of Consciousness in Comatose Patients before Doctors Do

A machine-learning algorithm spotted signs of “covert consciousness” in coma patients—in some cases, days before doctors could do so

Imagine lying in a hospital bed, awake but unable to move your body to communicate with the people around you. This experience of “covert consciousness” is a reality for many people who have sustained traumatic brain injuries. In a new study published in Communications Medicine, researchers found that they could detect signs of consciousness in comatose patients by using artificial intelligence to analyze facial movements that were too small to be noticed by clinicians.

Covert consciousness was first detected in 2006, when researchers asked an unresponsive woman and healthy volunteers to imagine doing specific tasks while in a brain scanner. The team found that the woman showed brain activity in the same regions as the volunteers. Just last year researchers using similar brain imaging methods found that one in four behaviorally unresponsive patients was covertly conscious. Such tests aren’t routinely performed on people in an unresponsive state because this type of neuroimaging is time-consuming and its operation requires specialized skills. Instead, doctors typically rely on more subjective visual examinations to gauge a person’s level of consciousness, testing whether they open their eyes, respond to commands or startle at a loud noise.

Sima Mofakham of Stony Brook University and her team recorded videos of 37 patients with recent brain injuries who outwardly appeared to be in a coma. They tracked the participants’ facial movements in extraordinary detail, down to the level of individual pores, after the patients were given commands such as “open your eyes” or “stick out your tongue.” The researchers’ AI tracking tool, which they call SeeMe, detected any facial movements and then analyzed whether they were specific to the command given. Among patients with analyzable videos, SeeMe documented eye-opening responses in 30 of 36 and mouth movements in 16 of 17. Five of these patients never went on to produce larger movements that were visible to doctors, though most of the others did. On average, SeeMe detected attempts to open the eyes 4.1 days, and mouth movements 8.3 days, before clinicians spotted these signs.
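The core idea, comparing motion after a command against a patient's baseline, can be illustrated with a minimal sketch. This is not the authors' SeeMe pipeline; it assumes a hypothetical per-frame motion signal (e.g., summed facial motion magnitude) and a simple z-score test, both chosen for illustration:

```python
import numpy as np

def command_specific_response(motion, command_frame, window=30, z_threshold=3.0):
    """Flag a response as command-specific when mean motion in the window
    after the command exceeds the baseline mean by z_threshold baseline
    standard deviations. `motion` is a 1-D array of per-frame motion
    magnitudes; `command_frame` is the frame index of the command."""
    baseline = motion[:command_frame]
    post = motion[command_frame:command_frame + window]
    mu, sigma = baseline.mean(), baseline.std()
    if sigma == 0:
        return False
    return (post.mean() - mu) / sigma > z_threshold

# Synthetic demonstration: quiet baseline jitter, then a motion burst.
rng = np.random.default_rng(0)
baseline = rng.normal(1.0, 0.1, 300)
burst = np.concatenate([baseline, baseline[:30] + 2.0])  # movement after command
flat = np.concatenate([baseline, baseline[:30]])         # no extra movement
print(command_specific_response(burst, 300), command_specific_response(flat, 300))
```

In practice, the hard part is extracting a reliable motion signal from video at the fine spatial scale the study describes; the statistical comparison above is the simpler final step.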

Next, Mofakham plans to examine whether patients can answer yes-or-no questions using specific facial movements. “This has a big ethical implication” because people who cannot communicate “cannot participate in their care,” she says. “This study opens a way to communicate with these patients.”

Andrew Chapman is a Truckee, Calif.–based freelance science writer who covers life sciences and the environment.
