Dr. Wanja Wiese of Ruhr University Bochum has studied the conditions necessary for consciousness and argues that at least one of them is missing in computers. In a new paper, he contends that differences in the causal structure of brains and computers may be relevant to consciousness, and that current AI systems are not conscious.
Would it be desirable for artificial intelligence to develop consciousness? No, and for a variety of reasons, according to Dr. Wanja Wiese of the Institute of Philosophy II at Ruhr University Bochum in Germany. In his essay, Dr. Wiese considers the conditions that must be met for consciousness to exist and compares brains to computers, finding notable differences, especially in the organization of brain regions, memory, and processing units. “Causal structure may be a difference that is relevant for consciousness,” Dr. Wiese argues. The essay was published in the journal Philosophical Studies on June 26, 2024.
Two different approaches
There are at least two different approaches to considering the possibility of consciousness in artificial systems. One approach asks how likely it is that current AI systems are conscious, and what needs to be added to existing systems to increase their chances of being conscious. The other approach asks what kinds of AI systems are unlikely to be conscious, and how we can eliminate the possibility of certain kinds of systems being conscious.
Wanja Wiese explores the differences between computers and the brain. Photo: RUB, Marquard
Wanja Wiese pursues the second approach in his research. “My aim is to contribute to two goals: first, to reduce the risk of inadvertently creating artificial consciousness; this is desirable, since it is currently unclear under what conditions creating artificial consciousness would be morally permissible. Secondly, this approach should help to rule out deception by ostensibly conscious AI systems that only appear to be conscious,” he explains. This is especially important since there are already indications that many people who frequently interact with chatbots attribute consciousness to these systems. At the same time, the expert consensus is that current AI systems are not conscious.
Free Energy Principle
In his essay, Wiese asks: “How do we know, for example, whether there are necessary conditions for consciousness that classical computers cannot satisfy? A common feature of all conscious animals is that they are alive. But being alive is such a demanding requirement that many don’t consider it a plausible candidate condition for consciousness. But perhaps some of the conditions necessary for life are also necessary for consciousness.”
In the article, Wanja Wiese refers to British neuroscientist Karl Friston’s free energy principle, which suggests that the processes that ensure the survival of self-organizing systems such as living organisms can be described as a type of information processing. In humans, this includes the processes that regulate vital parameters such as body temperature, blood oxygen content, and blood sugar levels. The same type of information processing can also be realized by a computer. However, the computer would not regulate body temperature or blood sugar levels; it would only simulate these processes.
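As a rough illustrative sketch (not taken from Wiese's paper): in one standard formulation of Friston's framework, a self-organizing system maintains itself by minimizing variational free energy, an upper bound on "surprise" (negative log evidence) under the system's generative model:

```latex
F(q, o)
  = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
  = D_{\mathrm{KL}}\!\left[\,q(s)\,\big\|\,p(s \mid o)\,\right] - \ln p(o)
  \;\ge\; -\ln p(o),
```

where \(o\) are observations, \(s\) are hidden states, \(p\) is the system's generative model, and \(q\) is an approximate posterior over hidden states. Because the Kullback-Leibler term is non-negative, minimizing \(F\) both improves the posterior approximation and keeps observations unsurprising, which is the sense in which survival-relevant regulation can be described as information processing.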
Most differences have nothing to do with consciousness
Wiese suggests that the same may be true for consciousness. If we assume that consciousness contributes to the survival of a conscious organism, then, according to the free energy principle, the physiological processes that maintain the organism should retain a trace left by conscious experience, a trace that can be described as an information-processing process. This can be called the “computational correlate of consciousness,” and it could in principle also be realized by a computer. However, a computer may have to meet additional conditions in order to reproduce, rather than merely simulate, conscious experience.
In his paper, Wanja Wiese analyzes the differences between how conscious organisms realize the computational correlate of consciousness and how computers realize it in simulations. He argues that most of these differences are irrelevant to consciousness. For example, unlike electronic computers, our brains are very energy-efficient, but this can hardly be a requirement for consciousness.
Another difference, however, lies in the causal structure of computers and brains. In a classical computer, data is always first loaded from memory, then processed by the central processing unit, and finally stored back into memory. The brain has no such separation, and so different areas of the brain are causally connected in different ways. Wanja Wiese argues that this may be a difference between brains and classical computers that is relevant to consciousness.
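The load-process-store cycle described above can be made concrete with a minimal sketch (a hypothetical illustration, not code from the paper) of the strict memory/processor separation in a classical von Neumann machine:

```python
# Hypothetical sketch: one cycle of a classical (von Neumann) machine,
# in which memory and processing are strictly separated. The brain,
# by contrast, has no analogous load/process/store partition.

def step(memory, addr_in, addr_out, process):
    """Run one cycle: load a value from memory, transform it in a
    separate 'processing' stage, then store the result back."""
    value = memory[addr_in]       # 1. load from memory
    result = process(value)      # 2. process in the CPU
    memory[addr_out] = result    # 3. store back into memory
    return memory

mem = [3, 0]
step(mem, addr_in=0, addr_out=1, process=lambda v: v * 2)
# mem is now [3, 6]: the datum was copied out, transformed, and written back
```

The point of the sketch is structural: every transformation passes through the same load-process-store bottleneck, whereas in the brain storage and processing are not physically separated in this way.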
“In my view, the perspective provided by the free energy principle is particularly interesting, because it allows us to describe properties of conscious organisms that are in principle realizable in artificial systems, but that are absent from many types of artificial systems, such as computer simulations,” explains Wanja Wiese. “This means that the prerequisites for consciousness in artificial systems can be captured in a more detailed and precise way.”
Reference: “Artificial Consciousness: A Free Energy Perspective” by Wanja Wiese, June 26, 2024, Philosophical Studies.
DOI: 10.1007/s11098-024-02182-y