For decades, some of engineering’s best minds have focused their thinking skills on how to create thinking machines — computers capable of emulating human intelligence.
While some of these thinking machines have mastered specific narrow skills, such as playing chess, general-purpose artificial intelligence (AI) has remained elusive.
Part of the problem, some experts now believe, is that artificial brains have been designed without much attention to real ones. Pioneers of artificial intelligence approached thinking much as aeronautical engineers approached flying: without learning much from birds. It has turned out, though, that the secrets of how living brains work may offer the best guide to engineering the artificial variety. Discovering those secrets by reverse-engineering the brain promises enormous opportunities for reproducing intelligence the way assembly lines spit out cars or computers.
Figuring out how the brain works will offer rewards beyond building smarter computers. Advances gained from studying the brain may in return pay dividends for the brain itself. Understanding its methods will enable engineers to simulate its activities, leading to deeper insights about how and why the brain works and fails. Such simulations will offer more precise methods for testing potential biotechnology solutions to brain disorders, such as drugs or neural implants. Neurological disorders may someday be circumvented by technological innovations that allow wiring of new materials into our bodies to do the jobs of lost or damaged nerve cells. Implanted electronic devices could help victims of dementia to remember, blind people to see, and crippled people to walk.
Sophisticated computer simulations could also be used in many other applications. Simulating the interactions of proteins in cells would be a novel way of designing and testing drugs, for instance. And simulation capacity will be helpful beyond biology, perhaps in forecasting the impact of earthquakes in ways that would help guide evacuation and recovery plans.
Much of this power to simulate reality effectively will come from increased computing capability rooted in the reverse-engineering of the brain. By studying how the brain itself learns, researchers will likely improve the design of computing devices that process multiple streams of information in parallel, rather than taking the one-step-at-a-time approach of the basic PC. Another feature of real brains is the vast connectivity of nerve cells, the biological equivalent of computer signaling switches. While nerve cells typically form tens of thousands of connections with their neighbors, traditional computer switches possess only two or three. AI systems attempting to replicate human abilities, such as vision, are now being developed with more, and more complex, connections.
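To make that contrast concrete, the sketch below is a hypothetical Python/NumPy example, not any particular research system: it compares a strictly serial update of a small bank of artificial "neurons" with a vectorized one in which every connection contributes in a single step. The sizes, weights, and nonlinearity are invented, and the weight matrix is only a crude stand-in for the dense connectivity described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: 100 artificial "neurons," each receiving input
# from 10,000 upstream units -- a crude stand-in for the tens of thousands
# of connections a real nerve cell can form.
n_inputs, n_neurons = 10_000, 100
weights = rng.normal(scale=0.01, size=(n_neurons, n_inputs))
inputs = rng.random(n_inputs)

def update_serial(weights, inputs):
    """One-step-at-a-time: visit every connection of every neuron in turn."""
    outputs = np.zeros(len(weights))
    for i, row in enumerate(weights):
        total = 0.0
        for w, x in zip(row, inputs):
            total += w * x
        outputs[i] = max(total, 0.0)  # simple threshold-like nonlinearity
    return outputs

def update_vectorized(weights, inputs):
    """All connections contribute in one matrix operation, which optimized
    numerical libraries typically execute over many elements at once."""
    return np.maximum(weights @ inputs, 0.0)

# Both give the same answer; the second mirrors the "many inputs at once"
# style of processing the brain is thought to use.
assert np.allclose(update_serial(weights, inputs), update_vectorized(weights, inputs))
```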
Already, some applications using artificial intelligence have benefited from simulations based on brain reverse-engineering. Examples include AI algorithms used in speech recognition and in machine vision systems in automated factories. More advanced AI software should in the future be able to guide devices that can enter the body to perform medical diagnoses and treatments.
Of potentially even greater impact on human health and well-being is the use of new AI insights for repairing broken brains. Damage from injury or disease to the hippocampus, a brain structure important for learning and memory, can disrupt the proper electrical signaling between nerve cells that is needed for forming and recalling memories. With knowledge of the proper signaling patterns in healthy brains, engineers have begun to design computer chips that mimic the brain’s own communication skills. Such chips could be useful in cases where healthy brain tissue is starved for information because of the barrier imposed by damaged tissue. In principle, signals from the healthy tissue could be recorded by an implantable chip, which would then generate new signals to bypass the damage. Such an electronic alternate signaling route could help restore normal memory skills to an impaired brain that otherwise could not form them.
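The bypass idea can be sketched in a few lines. The toy Python example below is an illustration of the concept only, not the actual input-output models used in hippocampal prosthesis research: the electrode counts, weights, thresholds, and spike statistics are all invented. It "records" spike counts from a healthy upstream region, applies a fixed mapping that in reality would be fitted to healthy tissue beforehand, and emits stimulation pulses for the downstream region, skipping the damaged tissue entirely.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sketch of the bypass idea: record from healthy upstream tissue,
# translate that activity, and stimulate healthy downstream tissue directly.
n_upstream, n_downstream = 16, 8

# Stand-in for an input-output model that would be fitted to healthy brains
# beforehand; here it is just a random weight matrix.
transfer_weights = rng.uniform(0.0, 0.3, size=(n_downstream, n_upstream))

def record_upstream_spikes():
    """Pretend to read spike counts from electrodes in the healthy region."""
    return rng.poisson(lam=2.0, size=n_upstream)

def bypass_chip(spike_counts, threshold=4.0):
    """Map recorded activity to stimulation pulses for the downstream region."""
    drive = transfer_weights @ spike_counts       # predicted downstream drive
    return (drive > threshold).astype(int)        # 1 = deliver a pulse

for t in range(5):  # a few time steps of the simulated record-and-stimulate loop
    spikes = record_upstream_spikes()
    stim = bypass_chip(spikes)
    print(f"t={t}: recorded {spikes.sum()} spikes, stimulating electrodes {np.flatnonzero(stim)}")
```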
“Neural prostheses” have already been put to use in the form of cochlear implants to treat hearing loss and stimulating electrodes to treat Parkinson’s disease. Progress has also been made in developing “artificial retinas,” light-sensitive chips that could help restore vision.
Even more ambitious programs are underway for systems to control artificial limbs. Engineers envision computerized implants capable of receiving the signals from thousands of the brain’s nerve cells and then wirelessly transmitting that information to an interface device that would decode the brain’s intentions. The interface could then send signals to an artificial limb, or even directly to nerves and muscles, giving directions for implementing the desired movements.
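As a rough illustration of the decoding step, the hypothetical Python sketch below fits a simple linear map from the firing rates of a small simulated population of cells to a two-dimensional movement velocity, the kind of signal that could then be relayed to a robotic limb. The "tuning" of the cells is made up for the example; it is not any published decoder.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical sketch of the decoding step in a brain-machine interface.
n_cells, n_samples = 50, 500

# Invented "tuning": each cell's firing rate varies linearly with the
# intended two-dimensional movement velocity, plus noise.
true_tuning = rng.normal(size=(n_cells, 2))
intended_velocity = rng.normal(size=(n_samples, 2))
firing_rates = intended_velocity @ true_tuning.T + rng.normal(scale=0.5, size=(n_samples, n_cells))

# "Training": fit a linear decoder from firing rates back to velocity.
decoder, *_ = np.linalg.lstsq(firing_rates, intended_velocity, rcond=None)

# "Use": decode a new burst of activity into a command for an artificial limb.
new_intent = np.array([[0.8, -0.3]])
new_rates = new_intent @ true_tuning.T + rng.normal(scale=0.5, size=(1, n_cells))
decoded = new_rates @ decoder
print("intended velocity:", new_intent.ravel(), "decoded velocity:", decoded.ravel().round(2))
```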
Other research has explored, with some success, implants that could literally read the thoughts of immobilized patients and signal an external computer, giving people unable to speak or even move a way to communicate with the outside world.
The progress so far is impressive. But to fully realize the brain’s potential to teach us how to make machines learn and think, further advances are needed in the technology for understanding the brain in the first place. Modern noninvasive methods for simultaneously measuring the activity of many brain cells have provided a major boost in that direction, but details of the brain’s secret communication code remain to be deciphered. Nerve cells communicate by firing electrical pulses that release small molecules called neurotransmitters, chemical messengers that hop from one nerve cell to a neighbor, inducing the neighbor to fire a signal of its own (or, in some cases, inhibiting the neighbor from sending signals). Because each nerve cell receives messages from tens of thousands of others, and circuits of nerve cells link up in complex networks, it is extremely difficult to completely trace the signaling pathways.
Furthermore, the code itself is complex — nerve cells fire at different rates, depending on the sum of incoming messages. Sometimes the signaling is generated in rapid-fire bursts; sometimes it is more leisurely. And much of mental function seems based on the firing of multiple nerve cells around the brain in synchrony. Teasing out and analyzing all the complexities of nerve cell signals, their dynamics, pathways, and feedback loops, presents a major challenge.
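One common textbook abstraction of this "sum the incoming messages, then fire" behavior is the leaky integrate-and-fire neuron. The brief Python sketch below is a generic version of that model, with arbitrary parameters and random input chosen purely for illustration; it captures the idea of firing rates rising and falling with the balance of excitatory and inhibitory messages, not the full complexity described above.

```python
import numpy as np

rng = np.random.default_rng(3)

# Leaky integrate-and-fire neuron: a textbook caricature of a nerve cell that
# sums its incoming messages and fires a pulse when the total crosses a threshold.
dt, steps = 1.0, 200            # time step (ms) and number of steps, arbitrary
tau, v_rest = 20.0, 0.0         # membrane time constant (ms) and resting level
threshold, v_reset = 1.0, 0.0   # fire when v reaches threshold, then reset

v = v_rest
spike_times = []
# Net input per step: mostly excitatory, occasionally inhibitory (negative).
incoming = rng.normal(loc=0.06, scale=0.05, size=steps)

for t in range(steps):
    # The membrane potential leaks back toward rest while integrating input.
    v += dt * (-(v - v_rest) / tau) + incoming[t]
    if v >= threshold:          # enough messages arrived close together: fire
        spike_times.append(t)
        v = v_reset

print(f"{len(spike_times)} spikes in {steps} ms, at times {spike_times}")
```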
Today’s computers have electronic logic gates that are either on or off, but if engineers could replicate neurons’ ability to assume various levels of excitation, they could create much more powerful computing machines. Success in fully understanding brain activity will, in any case, open new avenues for deeper understanding of the basis for intelligence and even consciousness, no doubt providing engineers with insight into even grander accomplishments for enhancing the joy of living.
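That contrast can be made concrete with a toy example. The hypothetical Python snippet below compares an ordinary on/off logic gate with a "graded" unit of the kind used in artificial neural networks, whose output can take any level between fully quiet and fully excited; the weights and bias are arbitrary.

```python
import math

def and_gate(a: int, b: int) -> int:
    """Ordinary on/off logic: output is 1 only when both inputs are 1."""
    return 1 if (a == 1 and b == 1) else 0

def graded_unit(a: float, b: float, w_a=1.0, w_b=1.0, bias=-1.5) -> float:
    """A neuron-like unit whose 'excitation' can take any level between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-(w_a * a + w_b * b + bias)))

print(and_gate(1, 0), and_gate(1, 1))                                    # 0 1
print(round(graded_unit(1.0, 0.0), 2), round(graded_unit(1.0, 1.0), 2))  # ~0.38 ~0.62
```

Networks built from many such graded units can represent far subtler states than strings of 0s and 1s, which is one reason engineers expect brain-like designs to yield more powerful machines.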