Can a Machine Be Conscious?

By Daniel Oberhaus | November 1, 2017

The term ‘artificial intelligence’ gets thrown around so much these days that it can sometimes feel meaningless. Generally speaking, AI tends to refer to applications of neural networks, a type of computing architecture loosely modeled after the brain. In the past year, I’ve reported on neural networks that beat a human at Go for the first time, taught themselves how to play Go and then beat the computer that beat the human, learned how to break CAPTCHAs, and wrote clickbait.

The rapid pace of neural network development can make it seem like we’re on the cusp of creating a general AI, a computer that is not just better than humans at one specific task like Go, but at almost all tasks a human can do. Futurists like Ray Kurzweil think we could see an AI like this within a few decades, while others think it’ll never be possible. But if a general AI is ever created, it will raise a profoundly unsettling question: will that machine be conscious in the same way you and I are conscious?

According to an article written by an international team of neuroscientists and psychologists published last Thursday in Science, neural networks’ prowess at things like Go and breaking CAPTCHAs represents machine mastery of mostly unconscious mental processes in humans. Nevertheless, these researchers think that it may be possible to create artificial consciousness by “investigating the architectures that allow the human brain to generate consciousness, and then transferring those insights into computer algorithms.”

In other words, part of the reason there’s any debate about whether AI can achieve human-like consciousness is the notion that consciousness defies definition—it doesn’t seem like it can be reduced to a series of operations. These researchers argue otherwise. They think that the way neurons in the human brain interact to give rise to consciousness can be mapped, and that if a computer can emulate those neural structures algorithmically, it may give rise to artificial consciousness.

“Centuries of philosophical dualism have led us to consider consciousness as unreducible to physical interactions,” the researchers wrote in Science. But “the empirical evidence is compatible with the possibility that consciousness arises from nothing more than specific computations.”

It’s a pretty far-out concept, but before any progress can be made in that direction, the researchers concede, it will first be necessary to define what we’re talking about when we talk about ‘consciousness.’

The nature of consciousness has been debated ad infinitum by philosophers for millennia, but within the last century or so, computer scientists have also started weighing in on the conversation. Indeed, Alan Turing, the father of digital computing, first published his famous paper on machine intelligence, in which he outlined a test for differentiating a human from an artificial intelligence, in Mind, a leading journal of philosophy.

Turing considered the brain to be just a very powerful computer. It wasn’t inconceivable to him that a machine could be created that mimicked human mental processes so well that a real human would be unable to tell whether it was talking to a machine. But Turing was also aware that mimicking human behavior and intelligence is not the same as consciousness.

In 1949, one of Turing’s contemporaries, the British neurosurgeon Geoffrey Jefferson, delivered a famous lecture at the Royal College of Surgeons on the “Mind of Mechanical Man.” In this lecture, Jefferson claimed that “not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain—that is, not only write it but know that it had written it.”

Most of us, at some gut level, would probably feel inclined to agree with Jefferson. Sure, a machine recently beat a human at Go, but it didn’t feel the same sort of pleasure from the victory that its human competitor Lee Sedol might. Yet for Turing, Jefferson’s “argument from consciousness” is insufficient. After all, the only way to know for sure that a machine is or is not feeling something is to be the machine and feel oneself feeling. And even if you conveyed these feelings to the world, how would anyone know for sure that you weren’t just a machine?

Today, machines can write poems and songs, but it is unlikely that Jefferson would grant that they are conscious. So then, if we can’t ever feel what a machine may or may not be feeling, how are we to define consciousness in such a way that we can actually speak of conscious machines? According to the neuroscientists writing in the recent Science article, this conscious machine would possess at least two essential dimensions: global availability and self-monitoring.

Global availability refers to the relationship between a specific object of thought (for example, the mental representation of the cat sitting on my desk) and the rest of the brain. For a machine to be considered conscious, an object of thought must be globally available to the entire system, insofar as it can be acted upon, recalled at will, or spoken about. The second feature—self-monitoring—is reflexive: a conscious machine must be able to obtain information about itself.

Obviously these two traits are not sufficient for consciousness, but the researchers argue that they are necessary. The question is whether the neural mechanisms that underlie these aspects of consciousness can be located in the brain so that they can be replicated in transistors.

According to the neuroscientists, most of the human brain’s intelligence lies in unconscious processing. This observation is based on a collection of studies in which images and sounds were subliminally presented to research subjects while their brains were imaged, to see how the brain processed the information. For example, researchers found that the brain processes the word “four” more efficiently when it is preceded by the numeral “4,” even if the subject was unaware of seeing the number—a phenomenon known as priming.

“Neuroimaging methods reveal that the vast majority of brain areas can be activated nonconsciously,” the researchers wrote.

The researchers argue that the brain can be conceived as a collection of specialized modules that mostly operate unconsciously. Nevertheless, a conscious human, machine, or animal must be able to aggregate all of these different processes and decide on a single best course of action based on the information received from these various modules. The researchers argue that whatever thought comes out on top and guides the disparate modules as a unified whole (in other words, is globally available) can be considered a conscious thought.
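
To make the arbitration idea concrete, here is a toy Python sketch of a global-workspace-style selection step: a handful of specialized modules each propose a candidate thought with a salience score, and the winner becomes globally available. The modules, thoughts, and scores are all invented for illustration; this is the general shape of the idea, not the architecture the researchers describe.

```python
# Toy sketch of global-workspace-style arbitration. Module names,
# thoughts, and salience scores are hypothetical illustrations.

def vision_module(stimulus):
    return ("face detected", 0.7)   # (proposed thought, salience)

def audio_module(stimulus):
    return ("name heard", 0.9)

def memory_module(stimulus):
    return ("similar event recalled", 0.4)

MODULES = [vision_module, audio_module, memory_module]

def global_workspace(stimulus):
    # Each specialized module processes the stimulus unconsciously
    # and proposes a candidate thought with a salience score.
    proposals = [module(stimulus) for module in MODULES]
    # Only the winning proposal becomes "globally available":
    # it is broadcast to the whole system and can guide action.
    winner, _ = max(proposals, key=lambda p: p[1])
    return winner

print(global_workspace("someone calls your name"))  # -> "name heard"
```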

Colloquially, this is roughly what we mean when we say someone is “paying attention” to one particular thing, out of all the things they could be thinking about. The mechanisms of attention have been well studied in neuroscience, and like many other mental processes, they can operate unconsciously.

“What we call attention is a hierarchical system of sieves that operate unconsciously,” the researchers write. “Such unconscious systems compute with probability distributions, but only a single sample…becomes conscious at a given time.”

In other words, the human brain has created a sort of consciousness bottleneck: it can be conscious of only one thought at a given time. This bottleneck corresponds to a network of neurons located in the cortex, the outer layer of the brain, and dozens of neuroimaging studies have shown how this network functions when a subject consciously perceives something, like a person’s face.
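
The quoted description, in which unconscious systems compute probability distributions but only a single sample becomes conscious, maps naturally onto drawing one sample from a distribution. Here’s a toy sketch of that bottleneck, with made-up percepts and weights:

```python
import random

# Toy illustration of the "consciousness bottleneck": unconscious
# processing assigns probabilities to many candidate interpretations,
# but only one sample is drawn into awareness at a time. The percepts
# and weights here are invented for illustration.
candidates = ["a face", "a vase", "random noise"]
weights = [0.6, 0.3, 0.1]  # probability assigned to each interpretation

conscious_percept = random.choices(candidates, weights=weights, k=1)[0]
print(conscious_percept)  # one interpretation at a time, e.g. "a face"
```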

The second aspect of consciousness, the ability to think about yourself thinking, has also been well studied in neuroscience. For instance, when humans make a decision, they also estimate their degree of confidence that this decision is the correct one, whether or not they are conscious of this estimation. This confidence measurement has been associated with neural networks in the prefrontal cortex region of the brain in numerous studies.
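
In machine-learning terms, a crude stand-in for this kind of confidence estimate is the probability a system assigns to its own chosen answer. The sketch below is just that stand-in, with made-up scores; it is not the researchers’ account of how the brain computes confidence.

```python
import math

def softmax(scores):
    # Convert raw scores into a probability distribution.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores from a system deciding among three options.
scores = [2.0, 0.5, 0.1]
probs = softmax(scores)

decision = max(range(len(probs)), key=lambda i: probs[i])
confidence = probs[decision]  # the system's estimate that it is right
print(decision, round(confidence, 2))  # -> 0 0.73
```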

Error detection and correction is another hallmark of this reflexive aspect of consciousness. Based on studies using electroencephalography to monitor brainwave activity in the prefrontal cortex, the human brain is remarkably quick to recognize when a decision it has made was in error, even before it receives any feedback on that decision. According to the researchers, this quick error detection might be attributed to the presence of two parallel neural circuits in the brain, a high-level ‘intention’ circuit and a low-level sensory-motor circuit, which signal an error to the brain whenever the two fall out of sync.
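
Here is a toy sketch of that two-circuit idea: a high-level pathway records what was intended, a low-level pathway reports what actually happened, and any mismatch raises an internal error signal before outside feedback arrives. The key-press scenario is invented for illustration.

```python
# Toy sketch of error detection via two parallel circuits. A mismatch
# between intention and execution is caught internally, before any
# external feedback on the decision is received.

def intention_circuit(goal):
    return goal  # what the system meant to do

def sensorimotor_circuit(executed_action):
    return executed_action  # what the motor system actually did

def detect_error(goal, executed_action):
    intended = intention_circuit(goal)
    observed = sensorimotor_circuit(executed_action)
    return intended != observed  # mismatch = internal error signal

print(detect_error("press A", "press A"))  # False: circuits agree
print(detect_error("press A", "press B"))  # True: error caught pre-feedback
```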

Integrating these two aspects of consciousness in a machine is no small task. Although machines have proven to have superhuman abilities at narrowly defined tasks (like playing Go), integrating several tasks into one machine has been slow in coming. Applying the brain’s own mechanisms for implementing global availability may help solve this problem. For instance, a new computing architecture called PathNet uses an algorithm that allows the system to determine which path through a collection of specialized neural networks is best for a given task.
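
For flavor, here is a heavily simplified sketch of path selection through layered modules. The real PathNet evolves pathways with a genetic algorithm over trained networks; the version below just scores a few random candidate paths with a made-up fitness function and keeps the best, so every name and number here is an invented stand-in.

```python
import random

# Simplified flavor of selecting a path through layered modules.
# In PathNet the score would come from actually training and
# evaluating networks; here "fitness" is a toy placeholder.

random.seed(0)
LAYERS = 3
MODULES_PER_LAYER = 4

def fitness(path):
    # Made-up score standing in for "how well this path does the task".
    return -sum((m - 2) ** 2 for m in path) + random.random()

candidate_paths = [
    tuple(random.randrange(MODULES_PER_LAYER) for _ in range(LAYERS))
    for _ in range(10)
]
best_path = max(candidate_paths, key=fitness)
print(best_path)  # the module chosen in each layer for this task
```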

The researchers argue that most machine-learning systems today, such as those deployed in self-driving cars, lack self-monitoring. Endowing a computer system with an “integrated image of itself,” so that it knows, for instance, that it has a GPS map that can locate gas stations, in addition to knowing how much gas it has left and its current speed, would push the system toward consciousness.
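
As a sketch of what such an “integrated image of itself” might look like in code, here is a hypothetical self-model for a car: a plain data structure describing the system’s own fuel state, speed, and capabilities, plus a bit of reasoning over that self-knowledge. All fields and numbers are invented.

```python
from dataclasses import dataclass

# Hypothetical "integrated self-image" for a self-driving system:
# explicit, queryable knowledge about the machine's own state.

@dataclass
class SelfModel:
    fuel_liters: float
    consumption_l_per_km: float
    speed_kmh: float
    has_gps: bool

    def range_km(self):
        return self.fuel_liters / self.consumption_l_per_km

    def should_refuel(self, nearest_station_km):
        # Self-monitoring: compare knowledge of own fuel state
        # against knowledge of the world (station distance),
        # with a 1.5x safety margin.
        return self.range_km() < nearest_station_km * 1.5

car = SelfModel(fuel_liters=8.0, consumption_l_per_km=0.07,
                speed_kmh=100.0, has_gps=True)
print(car.range_km())          # ~114 km of range left
print(car.should_refuel(90.0)) # True: 114 < 135, plan a fuel stop
```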

According to the researchers, a machine endowed with global availability and self-monitoring would behave as though it were conscious. “For instance, it would know that it is seeing something, would express confidence in it, report it to others, could suffer hallucinations when its monitoring mechanisms break down, and may even experience the same perceptual illusions as humans,” the researchers wrote.

Although this doesn’t really address the experience of being conscious, the researchers note that in humans the loss of global availability and self-monitoring mechanisms “covaries with a loss of subjective experience.”

Neither neuroscience nor computer science is at a stage where it can completely describe, much less algorithmically replicate, the structures in the brain that give rise to the phenomenon of consciousness. But if these researchers are right, progress in these fields means that conscious machines are in the realm of possibility. And I, for one, welcome our future artificially conscious overlords.