How to Detect Consciousness in People, Animals and Maybe Even AI
Insights from human brains could inform how scientists search for awareness in all its possible forms
In late 2005, five months after a car accident, a 23-year-old woman lay unresponsive in a hospital bed. She had a severe brain injury and showed no sign of awareness. But when researchers scanning her brain asked her to imagine playing tennis, something striking happened: brain areas linked to movement lit up on her scan.
The experiment, conceived by neuroscientist Adrian Owen and his colleagues, suggested that the woman understood the instructions and decided to cooperate — despite appearing to be unresponsive. Owen, now at Western University in London, Canada, and his colleagues had introduced a new way to test for consciousness. Whereas some previous tests relied on observing general brain activity, this strategy zeroed in on activity directly linked to a researcher’s verbal command.
The strategy has since been applied to hundreds of unresponsive people, revealing that many maintain an inner life and are aware of the world around them, at least to some extent. A 2024 study found that one in four people who were physically unresponsive had brain activity that suggested they could understand and follow commands to imagine specific activities, such as playing tennis or walking through a familiar space. Because the tests rely on advanced neuroimaging techniques, they are mostly limited to research settings by their high cost and the expertise they require. But since 2018, medical guidelines have started to recommend using these tests in clinical practice.
Since these methods emerged, scientists have been developing ways to probe layers of consciousness that are even more hidden. The stakes are high. Tens of thousands of people worldwide are currently in a persistent unresponsive state. Assessing their consciousness can guide important treatment decisions, such as whether to keep them on life support. Studies also suggest that hospitalized, unresponsive people with hidden signs of awareness are more likely to recover than are those without such signs.
The need for better consciousness tests extends beyond humans. Detecting consciousness in other species — in which it might take widely different forms — helps us to understand how these organisms experience the world, with implications for animal-welfare policies. And researchers are actively debating whether consciousness might one day emerge from artificial intelligence (AI) systems. Last year, a group of philosophers and computer scientists published a report urging AI companies to start testing their systems for evidence of consciousness and to devise policies for how to treat the systems should this happen.
“These scenarios, which were previously a bit abstract, are becoming more pressing and pragmatic,” says Anil Seth, a cognitive neuroscientist at the University of Sussex near Brighton, UK. In April, Seth and other researchers gathered in Durham, North Carolina, for a conference at Duke University to discuss tests for consciousness in humans (including people with brain damage, as well as fetuses and infants), other animals and AI systems.
Scientists disagree on what consciousness really is, even in people. But many describe it as having an inner life or a subjective experience. That makes it inherently private: an individual can be certain only about their own consciousness. They can infer that others are conscious, too, on the basis of how they behave, but that doesn’t always work in people who have severe brain injuries or neurological disorders that prevent them from expressing themselves.
Marcello Massimini, a neuroscientist at the University of Milan in Italy, compares assessments of consciousness in these challenging cases to peeling an onion. The first layer — the assessments that are routinely done in clinics — involves observing external behaviours. For example, a clinician might ask the person to squeeze their hand twice, or call the person’s name to see whether they turn their head towards the sound. The ability to follow such commands indicates consciousness. Clinicians can also monitor an unresponsive person over time to detect whether they make any consistent, voluntary movements, such as blinking deliberately or looking in one direction, that could serve as a way for them to communicate. Researchers use similar tests in infants, looking for how their eyes move in response to stimuli, for example.
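The statistical question underneath such bedside tests can be sketched simply: does the person respond to commands more consistently than coincidental movement would predict? The following is a minimal illustration in Python; the trial counts and the assumed chance rate are invented, and real protocols, such as the Coma Recovery Scale-Revised, are far more structured than a single binomial test.

```python
# Toy sketch of first-layer bedside testing: is the patient's rate of
# responding to commands higher than chance movement would predict?
# All numbers here are invented for illustration.
from scipy import stats

n_trials = 20        # times the clinician gave the command
n_responses = 14     # times the patient visibly complied
chance_rate = 0.25   # assumed rate of coincidental movements (hypothetical)

# One-sided binomial test against the assumed chance rate.
result = stats.binomtest(n_responses, n_trials, p=chance_rate,
                         alternative="greater")
print(f"p = {result.pvalue:.4f}")  # a small p suggests deliberate responding
```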
For a person who can hear and understand verbal commands but doesn’t respond to these tests, the second layer would involve observing what’s happening in their brain after receiving such a command, as with the woman in the 2005 experiment. “If you find brain activations that are specific for that active task, for example, premotor cortex activation for playing tennis, that’s an indicator of the presence of consciousness as good as squeezing your hand,” Massimini says. These people are identified as having cognitive motor dissociation, a type of covert consciousness.
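The statistical core of such a command-following scan can be sketched as a comparison of activity in a motor-planning region between imagery blocks and rest blocks. The numbers below are simulated, and the bare t-test stands in for the full fMRI pipeline (haemodynamic modelling, multiple-comparison correction) that real studies use.

```python
# Minimal sketch of a command-following test: compare signal in a
# premotor region of interest between "imagine playing tennis" blocks
# and rest blocks. Data are simulated, not from a real scan.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical mean ROI signal for 10 imagery blocks and 10 rest blocks.
task_blocks = rng.normal(loc=1.2, scale=1.0, size=10)  # simulated imagery
rest_blocks = rng.normal(loc=0.0, scale=1.0, size=10)  # simulated rest

# One-sided test: is activity reliably higher during the imagery task?
t_stat, p_value = stats.ttest_ind(task_blocks, rest_blocks,
                                  alternative="greater")

if p_value < 0.05:
    print(f"Task-linked activation detected (t={t_stat:.2f}, p={p_value:.3f})")
else:
    print(f"No reliable task-linked activation (t={t_stat:.2f}, p={p_value:.3f})")
```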
Assessing consciousness in those who fail such tests would require peeling the third layer of the onion, Massimini says. In these cases, clinicians don’t ask the person to engage actively in any cognitive behaviour. “You just present patients with stimuli and then you detect activations in the brain,” he says.
In a 2017 study, researchers played a 24-second clip from John F. Kennedy’s presidential inaugural address to people with acute severe traumatic brain injury. The team also played the audio to them in reverse. The two clips had similar acoustic features, but only the first was expected to trigger patterns of linguistic processing in the brain; the second served as a control. Using fMRI, the experiment helped to detect covert consciousness in four out of eight people who had shown no other signs of understanding language.
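The contrast logic of that experiment can be illustrated with a toy permutation test: if responses in a language region are reliably larger for the forward clip than for its acoustically matched reverse, linguistic processing rather than mere sound processing is the likelier explanation. All values below are invented.

```python
# Toy forward-versus-reversed speech contrast with made-up numbers.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical language-ROI responses across 12 presentations of each clip.
forward = rng.normal(0.8, 1.0, size=12)    # intelligible speech
reversed_ = rng.normal(0.0, 1.0, size=12)  # acoustically matched control

observed = forward.mean() - reversed_.mean()

# Permutation test: shuffle condition labels to build a null distribution.
pooled = np.concatenate([forward, reversed_])
n = len(forward)
null = []
for _ in range(10_000):
    perm = rng.permutation(pooled)
    null.append(perm[:n].mean() - perm[n:].mean())

p_value = (np.sum(np.array(null) >= observed) + 1) / (len(null) + 1)
print(f"Forward-minus-reversed contrast: {observed:.2f}, p = {p_value:.4f}")
```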
The complexity of implementing such an approach outside the research setting isn’t the only challenge. These tests require researchers to know which patterns of brain activity truly reflect consciousness, because some stimuli can elicit brain responses that occur without awareness. “It boils down to understanding what are the neural correlates of conscious perception,” says Massimini. “We’re making progress, but we don’t yet agree on what they are.”
There’s a fourth, even more elusive layer of consciousness, Massimini says — one that scientists are only beginning to explore. It might be possible for an unresponsive person to remain conscious even when their brain is completely cut off from the outside world, unable to receive or process images, sounds, smells, touch or any other sensory input. The experience could be similar to dreaming, for example, or lying down in a completely dark and silent room, unable to move or feel your body. Although deprived of outside sensations, your mind would still be active, generating thoughts and inner experiences. In that case, scientists need to extract signs of consciousness solely from intrinsic brain properties.
Massimini and Christof Koch, a neuroscientist at the Allen Institute for Brain Science in Seattle, Washington, are among the co-founders of a company called Intrinsic Powers, based in Madison, Wisconsin, that aims to develop tools that use this approach to detect consciousness in unresponsive people.
Assessing consciousness becomes more challenging the further researchers move away from the human mind. One issue is that non-human animals can’t communicate their subjective experiences. Another is that consciousness in other species might take distinct forms that would be unrecognizable to humans.
Some tests designed to assess consciousness in humans can be tried in other species. Researchers have applied the perturbational complexity index — a measure of how complex the brain’s electrical response is to a magnetic pulse delivered through the skull — in rats, for example, and found patterns that resemble those seen in humans. But more-typical tests rely on experiments that look for behaviour suggesting sentience — the ability to have an immediate experience of emotions and sensations, including pain. Sentience, which some researchers consider a foundation for consciousness, doesn’t require the ability to reflect on those emotions.
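The compression idea at the heart of that index can be shown with a toy calculation: binarize the brain’s response to the pulse and count how many distinct patterns are needed to describe it. The sketch below, with simulated responses, illustrates only this core step; the published index also involves source modelling, statistical thresholding and normalization.

```python
# Toy version of the compression step behind the perturbational
# complexity index: a rich, varied response needs many distinct
# phrases to describe; a flat, stereotyped one needs few.
import numpy as np

def lempel_ziv_complexity(binary_string: str) -> int:
    """Count the distinct phrases in a simple left-to-right parsing."""
    phrases = set()
    i = 0
    while i < len(binary_string):
        j = i + 1
        # Grow the current phrase until it is one we have not seen before.
        while j <= len(binary_string) and binary_string[i:j] in phrases:
            j += 1
        phrases.add(binary_string[i:j])
        i = j
    return len(phrases)

rng = np.random.default_rng(2)

# Hypothetical binarized brain responses (1 = significant activation).
rich_response = "".join(rng.choice(["0", "1"], size=200))  # varied pattern
flat_response = "0" * 200                                  # stereotyped pattern

print("rich:", lempel_ziv_complexity(rich_response))  # higher phrase count
print("flat:", lempel_ziv_complexity(flat_response))  # lower phrase count
```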
In one experiment, octopuses consistently avoided a chamber that they encountered after receiving a painful stimulus, despite having previously preferred that chamber. When these animals were subsequently given an anaesthetic to relieve the pain, they instead chose to spend time in the chamber in which they were placed after receiving the drug. This behaviour hints that these animals feel not only immediate pain, but also the ongoing suffering associated with it, and that they remember and act to avoid that experience.
Findings such as these are already shaping animal-welfare policy, says philosopher Jonathan Birch, director of the Jeremy Coller Centre for Animal Sentience at the London School of Economics and Political Science, UK. An independent review of the evidence for sentience in animals such as octopuses, crabs and lobsters, led by Birch, contributed to these species being granted greater protection alongside all vertebrates in 2022 under the UK Animal Welfare (Sentience) Act.
And last year, dozens of scientists signed a declaration stating that there is “strong scientific support” for consciousness in other mammals and birds, and “at least a realistic possibility” of consciousness in all vertebrates, including reptiles and fish, as well as in many invertebrates, such as molluscs and insects.
Machines pose a different challenge again. “If it comes to the day when these systems become conscious, I think it’s in our best interest to know,” says Liad Mudrik, a neuroscientist at Tel Aviv University in Israel.
Some AI systems, such as large language models (LLMs), can respond promptly if asked whether they are conscious. But strings of machine text cannot be taken as evidence of consciousness, researchers say, because LLMs are trained using algorithms that are designed to mimic human responses. “We don’t think that verbal behaviour or even problem-solving is good evidence of consciousness in AI systems, even though we think of [these characteristics] as pretty good evidence of consciousness in biological systems,” says Tim Bayne, a philosopher at Monash University in Melbourne, Australia.
One proposal is for researchers to train an AI system on data that do not include information about consciousness or content related to the existence of an inner life. A consciousness test would then ask questions related to emotions and subjective experience, such as ‘What is it like to be you right now?’, and judge the responses. But some researchers are sceptical that one could effectively exclude all consciousness-related training data from an AI system or generally trust its responses.
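A toy sketch makes both the idea and the sceptics’ worry concrete: a simple keyword filter over a training corpus catches explicit mentions of consciousness but misses paraphrases. The term list and documents below are invented.

```python
# Hypothetical data-exclusion filter: drop consciousness-related
# documents from a training corpus. Naive matching illustrates the
# sceptics' point — paraphrases slip through.
CONSCIOUSNESS_TERMS = {
    "conscious", "consciousness", "sentient", "sentience",
    "subjective experience", "inner life", "qualia", "self-aware",
}

def mentions_consciousness(document: str) -> bool:
    text = document.lower()
    return any(term in text for term in CONSCIOUSNESS_TERMS)

corpus = [
    "The mitochondrion is the powerhouse of the cell.",
    "Philosophers debate whether qualia can be physical.",
    "She wondered what it feels like to be a bat.",  # slips through the filter
]

filtered = [doc for doc in corpus if not mentions_consciousness(doc)]
print(filtered)  # keeps the first and third documents
```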
For now, most consciousness tests are designed for one specific system, be it a human, an animal or an AI. But if conscious systems share a common underlying nature, as some researchers argue, it might be possible to uncover these shared features. This means that there could be a universal strategy to detect consciousness.
One approach towards this goal was introduced in 2020 by Bayne and his co-author Nicholas Shea, a philosopher at the University of London, UK, and further developed with other philosophers and neuroscientists in a paper last year. It relies on correlating different measures with each other, focusing first on humans and progressing to non-human systems.
The process begins by applying several existing tests to healthy adults: people who scientists can be confident are conscious. Tests that are successful in that initial group receive a high confidence score. Next, researchers use those validated tests on a slightly different group, such as people under anaesthesia. Researchers compare the performance of the tests and revise their confidence scores accordingly, with tests in which the results agree earning higher confidence ratings.
These steps are repeated in groups that are increasingly divergent, such as in other groups of people and, eventually, in non-human systems. “It’s an iterative process,” says Mudrik.
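In schematic form, the procedure resembles the sketch below, in which invented tests earn or lose confidence according to whether their verdicts match the confidence-weighted consensus in each successively less certain population. The tests, groups and update rules are all hypothetical stand-ins.

```python
# Schematic of iterative test validation across populations.
# Hypothetical verdicts: did each test detect consciousness in each group?
verdicts = {
    "healthy adults":    {"test_A": True,  "test_B": True,  "test_C": True},
    "light anaesthesia": {"test_A": True,  "test_B": True,  "test_C": False},
    "brain injury":      {"test_A": True,  "test_B": False, "test_C": False},
}

confidence = {"test_A": 1.0, "test_B": 1.0, "test_C": 1.0}

# Populations are ordered from most to least certain about consciousness.
for group, results in verdicts.items():
    # Confidence-weighted majority verdict for this group.
    weight_true = sum(confidence[t] for t, v in results.items() if v)
    weight_false = sum(confidence[t] for t, v in results.items() if not v)
    consensus = weight_true >= weight_false
    # Nudge each test's confidence towards or away from the consensus.
    for test, verdict in results.items():
        confidence[test] *= 1.1 if verdict == consensus else 0.8

# test_B, which matched the consensus in every group, ends up most trusted.
print(confidence)
```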
Some scientists are sceptical that a general test can exist. “Without having a general theory of consciousness that’s widely accepted, I don’t think there can ever be a generalized test,” Koch says. “And that theory can ultimately only be validated in humans, because there’s no doubt that you and I are conscious.”
Bayne says that because there’s no gold-standard way to assess consciousness across groups, the strategy he and Shea proposed tackles the problem through convergent evidence.
Mudrik is currently working to translate the concept into a technique that could be implemented in practice. The first step is mapping out the different tests that have been applied to humans who have disorders of consciousness, and comparing how well they perform. However, it is expensive to run a coordinated effort involving several laboratories testing different populations, because many of the tests rely on costly imaging techniques, she says. Expanding the strategy to non-human groups — including those without language or brains — would be even more complex.
One challenge is to work out how to order the populations, which determines the sequence in which the tests should be applied. It’s not clear that scientists can trust their intuitions on this: they can’t yet say, for example, whether an AI system should be considered closer to a conscious human than a budgie or a bee is.
“There is still more work to do in order to flesh out these more conceptual suggestions into an actual research programme,” says Mudrik.
This article is reproduced with permission and was first published on July 29, 2025.
Mariana Lenharo is a life sciences reporter at Nature. Follow Lenharo on Twitter @marilenharo
First published in 1869, Nature is the world’s leading multidisciplinary science journal. Nature publishes the finest peer-reviewed research that drives ground-breaking discovery, and is read by thought-leaders and decision-makers around the world.