Consciousness in mixed systems: merging artificial and biological minds via Brain-Machine Interface
Here is the poster I presented at the Toward a Science of Consciousness 2010 conference. I propose a new approach to building sensory prostheses. The neuroprostheses being developed today evoke sensory experiences by stimulating the brain: the brain produces visual, tactile, or auditory experiences in response to stimulation. In the proposed model, experiences are not generated by the brain – they are produced by an electronic device that is itself capable of conscious experiences, or "qualia". The conscious experiences of this device and of the biological brain are then merged through a special kind of brain-machine interface.
Motor prostheses and intelligence augmentation are mentioned too ;)
The rapidly developing field of Brain-Machine Interface (BMI) technology seeks to establish a direct communication-and-control channel between the human brain and machines. Practical applications of BMI include restoring lost vision and motor function, and even extending normal human capabilities. Unfortunately, current BMI systems fall far short of even the level of performance humans are normally capable of, let alone improving on it – and this situation has persisted for quite a while. A possible way out is to shift the research focus to those aspects of brain-machine interaction that usually do not receive much attention.
The study of consciousness is one such aspect, as this poster seeks to show – one that could eventually bring BMI technology to advanced stages, with capabilities closer to those of the BMI devices that appear in science fiction. Understanding consciousness and how it arises from the brain is crucial for achieving that goal.
And BMI technology itself raises many new questions and opportunities for consciousness research. BMI may progress far enough to allow such tight integration between artificial devices and biological neural networks that they work as a single system, not just as separate entities communicating with each other. But how can consciousness then be represented in this mixed system? Will consciousness remain the privilege of the living part only? Can the artificial part add something to conscious experience, or even expand it? Furthermore, it may become possible to integrate the nervous systems of different living organisms by interfacing them with a single artificial network. Will their consciousnesses then be integrated too? And what would such an integrated mind feel like?
This poster explores ways in which Brain-Machine Interfaces can contribute to consciousness research, and discusses how a better understanding of consciousness in the context of brain-machine interaction will allow us to build BMI systems with extended capabilities.
Below is some of the poster's content in text format (thanks to images.google.com for the illustrations ;)
Neuroprostheses for body enhancement: the limitations
Today almost everyone knows that motor prostheses are being developed that can be controlled by thought alone. How do they work? Different designs are employed, but a large portion of such devices rely on the fact that part of the brain's cortex – the primary motor cortex – contains a direct representation of our body, called the homunculus.
If we place sensors over this homunculus, then at the moment when, say, a hand moves, the movement-related activity can be captured by sensors located in the hand area. From this activity we can extract the direction of movement, its velocity, and so on. Sensors placed on the primary motor cortex can therefore be used to control an artificial body.
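The decoding step above can be sketched with the classic population-vector algorithm (Georgopoulos and colleagues): each motor-cortex neuron is assumed to fire most strongly for its own "preferred direction", and the intended movement direction is recovered by summing those preferred directions weighted by firing rate. A minimal sketch, where the neuron count, tuning model, and noise level are all made-up illustrative values, not parameters of any real system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population of motor-cortex neurons, each with a random
# "preferred direction" in the 2-D movement plane.
n_neurons = 200
preferred = rng.uniform(0, 2 * np.pi, n_neurons)

def firing_rates(movement_angle, baseline=10.0, modulation=8.0):
    """Cosine tuning: a neuron fires most when the movement
    matches its preferred direction."""
    return baseline + modulation * np.cos(movement_angle - preferred)

def decode_direction(rates, baseline=10.0):
    """Population vector: sum each neuron's preferred-direction unit
    vector, weighted by its baseline-subtracted firing rate."""
    w = rates - baseline
    x = np.sum(w * np.cos(preferred))
    y = np.sum(w * np.sin(preferred))
    return np.arctan2(y, x) % (2 * np.pi)

true_angle = np.deg2rad(60)
rates = firing_rates(true_angle) + rng.normal(0, 1.0, n_neurons)  # noisy recording
decoded = decode_direction(rates)  # close to the true 60-degree direction
```

Real decoders (Kalman filters and the like) are more elaborate, but the principle is the same: movement parameters are read out from the combined activity of many broadly tuned neurons, no single one of which is decisive.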
Seemingly, this approach can be useful for the disabled – paralysed people, or those who have lost their limbs – but has no potential for human enhancement. How are we going to add a third limb, for instance? It is not represented in the homunculus! The latter is nevertheless possible thanks to brain plasticity. Experiments have demonstrated that if we connect an artificial limb to a monkey's cortex and then train the animal to control it, some neurons change their function during training. These neurons become tuned specifically to the movement of the robotic limb: their activity predicts the movement parameters of the robotic hand, but not of the monkey's own hand.
So new body parts can be incorporated into the brain's body schema. Even so, this plasticity must have its limits – we can imagine learning to control one, two, or three extra hands, but what about one hundred? That may well be too much load for the brain.
Another concern is the extent to which a BMI will interfere with other brain functions. Some BMIs use mental tasks, such as imagination, for control. With this approach, even the auditory cortex can be used to control a prosthesis. However, it is not clear how many cognitive resources such BMIs occupy. If we need to concentrate on our body every time we make a movement, this can hardly be called an enhancement. Our bodies do not operate that way – once we have learned to walk, we no longer pay attention to the process.
And what if a totally different body is needed? Here is the vision of Gerwin Schalk:
…imagine a jet pilot who currently has to deal with many controls for the many degrees of freedom the airplane supports. Because the number of degrees of freedom of the airplane exceeds the degrees of freedom of our motor system (or at least is very inadequately matched to it), the jet pilot might have to operate specific functions in sequence rather than in parallel. Using direct communication from the brain, the degrees of freedom that the pilot can support could be matched to the degrees of freedom of the airplane, which would transform the airplane from an external tool to a direct extension of the pilot’s nervous system, in which different areas of the pilot’s motor system would be responsible for controlling movements of the airplane rather than movements of the pilot’s limbs. In addition, sensors in the plane could be connected to the brain’s sensory areas such that these measurements can provide the pilot with information about the current state of the plane, much in the same way that our bodily sensors provide us with comprehensive information about the state of our body.
In this case, the brain has to learn to use a new body totally different from the human one, while leaving the old body model (in neurological terms, the body schema) intact – to be used when not in the airplane. Here we will hit the limits of brain plasticity much sooner.
All these questions make us doubt that it is possible to build an advanced body prosthesis that relies only on the brain's intrinsic capabilities.
Motor prostheses that provide tactile and proprioceptive sensations are also under development. Here we again encounter a homunculus – but this time located in the somatosensory cortex. When areas of this homunculus are stimulated, sensations are felt. The straightforward way to add sensations would therefore be to stimulate the somatosensory cortex. But then we run into the same problem as with the motor homunculus – the inability to acquire sensations from additional body parts or from different bodies.
As we have seen, the brain may turn out to be too limited to manage a body – or several bodies – more advanced than the human one. For this task, a next-generation neuroprosthesis could itself contain additional artificial neural substrate (or some other kind of AI). This looks reasonable – even the brain has two separate hemispheres for managing the two sides of the body.
If one hemisphere is not working properly, control and sensation on the corresponding side may deteriorate. Each hemisphere contains neural assemblies that give rise to different sensations – touch, proprioception, temperature, and pain – as well as assemblies for planning and executing movements. The new neuroprosthesis would contain similar components to take care of movement control and sensation in the artificial limbs.
There is, however, one peculiarity in realizing these components, especially the "sensing" ones. Consider living organisms, including ourselves: we feel something only because neural activity in our brains somehow produces conscious, phenomenological experience, or qualia. The brain not only processes sensory input and computes appropriate behaviors based on it, but also enables us to experience what is going on. Not everything, though: we cannot feel all of the sensory input received and processed by the brain, even when that input drastically affects our behavior. Priming is a good example of this.
Fortunately, a rather large chunk of sensory information does finally enter consciousness. How this happens is not yet well understood, but it is reasonable to assume that some specific properties of the structure and activity of the CNS are responsible. Several theories aim to explain which properties can produce consciousness in biological or artificial systems, but so far they are all speculative.
What is interesting here is that the sensory components of our neuroprosthesis should themselves be capable of producing conscious experiences, so that we can feel the artificial body. The main difference from current neuroprostheses, which provide sensory feedback by stimulating different parts of the cortex, is that the latter rely on an already existing neural substrate that produces conscious experiences when stimulated. In the proposed model, we do not use up the precious resources of the existing substrate (because they may be limited), but rather add new resources in the form of our own components. These components, however, must then be built with support for generating qualia, like their biological counterparts.
A possible design for a conscious brain implant ;)
After that, a connection needs to be created between the brain and the "conscious" neuroprosthesis, so that their qualia are united into a single conscious experience. Fortunately, nature offers an example solution to this task – the corpus callosum, a bundle of nerve fibers connecting the two brain hemispheres. When this connection is severed, each hemisphere appears to have its own, separate mind. When bundled together, the hemispheres work as a single system, and their conscious experience also merges into a single, coherent one.
Modern sensory prostheses do not differ much in their capabilities from the motor prostheses discussed above. They also rely on stimulating neurons inside the CNS. This stimulation mostly targets peripheral parts, and therefore does not produce qualitative experience by itself. The brain simply uses it as a kind of sensory input, and this input goes through the same channels established for the senses we already have. Such an approach therefore again limits us to the senses we already possess. But with a conscious implant, it becomes possible to artificially create new qualia, totally different from anything we have experienced before.
Another thing made possible by the new type of neuroprosthesis is the broadening of existing senses. Imagine the following scenario: cameras for artificial vision are placed around the world, and we can see everything happening in front of all of them simultaneously. We could also simulate this by placing the images from all the cameras on one big screen, but then our brain's resources would clearly not be enough to perceive all of them at once.
A conscious neuroprosthesis brings its own resources into the game, which should make the scenario described above possible.
Of all the potential applications of BMI technology described in science fiction, one of the most impressive is the direct writing of new memories into the brain, which would eliminate the need for learning. However, the current state of BMI technology offers little hope of this dream coming true. The principal obstacle is that the brain stores memories as patterns of connections between neurons, and how information is represented by these patterns – the "neural code" – is not yet understood. It is already known, though, that these patterns are highly distributed, and altering connections everywhere throughout the brain would be hard from a technical point of view. Additionally, the same neural networks can store different memories, so it is hard to add new memories without damaging those already stored in the brain.
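The point about distributed, overlapping storage can be illustrated with a toy associative memory. The sketch below uses a classic Hopfield network – an illustration of the principle only, not a model of actual brain memory: each stored pattern is spread across all connection weights, recall from a corrupted cue works while the load is low, and writing far more patterns than the network's capacity (roughly 0.14 patterns per neuron) damages recall of the earlier memories.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100  # number of model neurons

def train(patterns):
    """Hebbian storage: every pattern nudges every connection weight,
    so each memory is distributed across the whole network."""
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W

def recall(W, cue, steps=10):
    """Repeatedly update all units; the state settles toward a stored pattern."""
    s = cue.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

# Store 3 patterns (well under capacity) and recall one from a corrupted cue.
patterns = [rng.choice([-1, 1], size=n) for _ in range(3)]
W = train(patterns)
cue = patterns[0].copy()
cue[rng.choice(n, size=10, replace=False)] *= -1  # flip 10% of the bits
overlap_ok = np.mean(recall(W, cue) == patterns[0])  # close to 1.0

# Now write 40 more "memories" into the same weights, far past capacity.
overloaded = train([rng.choice([-1, 1], size=n) for _ in range(40)] + patterns)
overlap_bad = np.mean(recall(overloaded, cue) == patterns[0])
```

With only three stored patterns the corrupted cue is cleaned up almost perfectly; after the extra 40 patterns are written into the same weights, recall of the original pattern typically degrades – which is exactly why naively writing new memories into a distributed network tends to be destructive.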
In the new model, the information is kept outside the brain, yet can still be perceived as part of our own memory. Adding a new memory requires much less interference with the biological brain, because the memory is stored separately.
I’ll talk more on this topic at the upcoming Humanity+ Summit 2010 – hope to see you there ;)