The Brain is not a Computer.

| May 18, 2006 | 31 Replies

How often do you hear someone say that the brain is a computer?  This statement is not literally true. The brain is certainly not like a desktop computer. Brains don’t look like computers; there’s no CPU in the head.  Neurons aren’t all wired together to an executive control center.  Human brains have a massively parallel architecture. Cognitive scientists who have carefully thought through this issue arrive at this same conclusion:  the brain does not really resemble a computer, certainly not any sort of computer in general use today.

The brain as computer is a seductive metaphor. According to Edwin Hutchins, “The last 30 years of cognitive science can be seen as attempts to remake the person in the image of the computer.” See Cognition in the Wild (1996).

Metaphors are models, however, and models are imperfect versions of the reality they portray.  Metaphors accentuate certain parts of reality while downplaying other parts. 

Unfortunately, many people “reify” the brain-as-computer metaphor: they accept this metaphor as literal truth, leading to various misunderstandings about human cognition.

Here’s another big difference between brains and computers: human cognition is fault-tolerant and robust.  In other words, our minds continue to function even when the information is incomplete (e.g., while we’re driving in the rain) or when our purposes or options are unclear (e.g., navigating a cocktail party).  Computers, on the other hand, are always one line of code away from freezing up. 

In Bright Air, Brilliant Fire:  On the Matter of the Mind (1992), Gerald M. Edelman writes that “The world is not a piece of [computer] tape . . . and the brain is not a computer.”  The brain-as-computer metaphor invites rampant functionalism: the notion that any old hardware will do, that I could implement my own cognition on any other piece of hardware. 

Just because brains and desktop computers often arrive at similar results, though, doesn’t mean that the brain works like a computer. Edelman also points out that people often believe there are computer-like rules governing thought: that the brain thinks by manipulating context-free symbols according to some sort of “rules” that have yet to be specified.  To have any sort of “rules,” though, there must first be uncontested “facts.”  But there are no context-free facts.  Perhaps there could be if people used identical methods of categorizing the world. Contrary to what many people believe, however, human categorization does not occur by use of necessary and sufficient conditions.  See Cognitive Psychology:  An Overview for Cognitive Scientists, by Larry Barsalou (1992) and Women, Fire, and Dangerous Things, by George Lakoff (1987).  The world is unlabeled. Without pre-labeled “things,” computers flounder.  Human brains are different.  They thrive on pattern matching, something with which computers struggle.  See What Computers Still Can’t Do, by Hubert L. Dreyfus (1992).

Scott Kelso points out that the brain is not a computer that manipulates symbols. “The nervous system may act as if it were performing Boolean functions . . . People can be calculating, but the brain does not calculate.”  See Dynamic Patterns (1995). Even cognitive scientists who believe that the brain is an (extremely sophisticated) machine, such as Patricia Churchland, warn us to handle the computer metaphor with extreme caution. We are pattern matchers and pattern completers. See Neurophilosophy: Toward a Unified Science of the Mind/Brain, by Patricia Smith Churchland (1986).

As Andy Clark points out, we are great at Frisbee, but bad at math.  See Being There: Putting Brain, Body and World Together Again, by Andy Clark (1997).  Clark suggests that a better understanding of the brain is that it is a complex, ever-evolving control system connecting brain, body and world.  Paul Churchland also notes that we are horrible at logic and other types of systematic thinking.  We study math for years, yet we still struggle with it as adults!  If the brain were a computer, this would not be the case. See The Engine of Reason, the Seat of the Soul, by Paul M. Churchland (1995).  

The “frame problem” is more evidence that brains are not like computers: no one has explained how a computer could determine, in advance, which of its stored facts are relevant to a new situation. Humans bring relevant information to bear almost instantly.  No computer can do this the way human brains do.

Because of these many problems, William Bechtel concludes that the brain-as-computer metaphor is now dated: “[T]he inspiration for developing accounts of how cognition works is no longer the digital computer; instead, knowledge about how the brain works increasingly provides the foundation for theoretical modeling.” See A Companion to Cognitive Science, ed. by W. Bechtel and G. Graham (1998).

Why does it matter whether we ignore all of this evidence and insist that the brain is a computer?  Here are some reasons:

  1. The brain-as-computer metaphor sees the brain as hardware, insisting that all people who share meaning do so by manipulating the same symbols in their heads based on the same “rules.”  This view overlooks the tremendously complex and idiosyncratic wiring that makes your brain different from mine.  As though that lifetime of wiring and pruning of tens of billions of neural connections wasn’t integral to you being you!  As though there isn’t a critical connection between that three-pound wet “computer” in your head and your body!
  2. Insisting that brains are computers makes brains commodities, thereby denigrating the sanctity and idiosyncratic history of each individual.
  3. The brain as computer fails to explain how words can have meaning.  What do symbols in the head ultimately refer to?  More symbols?  That’s a non-starter.  An alternate approach to cognition, embodied cognition gives word meaning roots.  http://dangerousintersection.org/?p=177
  4. Because the brain-as-computer metaphor sees thinking as symbol-manipulation in the head, it fails to explain the connection between world, body and cognition.  It also ignores the well-established interplay between emotion and rationality.  http://dangerousintersection.org/?p=146
  5. The brain as computer metaphor can erroneously lead to a belief in disembodied thought, along with related mischief, such as the possibility of the fully-functioning disembodied soul.  
  6. None of the above is to deny that the brain can sometimes be seen, for limited purposes, to be like a computer.  This comparison can be fun and is sometimes useful, but we must be careful not to reify the brain-as-computer metaphor. Why? Because the brain is not a computer.

Signed,
A Head in a Jar

Category: Language, Psychology, Cognition, Science

About the Author

Erich Vieth is an attorney focusing on consumer law litigation and appellate practice. He is also a working musician and a writer, having founded Dangerous Intersection in 2006. Erich lives in the Shaw Neighborhood of St. Louis, Missouri, half-time with his two extraordinary daughters.

Comments (31)

  1. Dan Klarmann says:

    This latest list of distinctions is largely false unless one narrowly defines "computer" as "commercial, digital, serial computer" to distinguish it from the many analog and parallel computer systems that have been in use (Scratch #'s 1,3,4, & 5).

    Also, there is a distinction between hardware and software in brains, as any fMRI specialist can tell you (#6).

    And synapses are more complex than simple binary silicon logic gates, but not more than some other technologies under development (#7).

    "Brains have bodies"? As opposed to autopilots having planes, or self-navigating vehicles?

    I could quibble philosophically about "self-organizing," as I know how dependent computers are on computers to design each iteration of their evolution. But brains do literally rewire themselves as they learn, most markedly in the first couple of dozen years, for humans.

    The overlap in function of memory and calculation is also no longer unique, as Tilden demonstrated over a decade ago.

  2. Jim Razinha says:

    I guess a Jeopardy win isn't sufficient demonstration of massive parallelism. I got to play with a huge analog computer some 30 years ago – it was an odd incarnation; I know that modern analogs are far more sophisticated. Anyway, about the same time as that fun experiment, I came across a dictionary published in the 1930s. I remember looking up "computer": one who computes.

    {Update} – I didn't see until after posting my comment that the linked article was from 2007, so the Jeopardy win would be unknown, but massive parallelism was not – I even participated in SETI by downloading a screensaver (from their site) that used idle computer time to help process the huge streams of data.

  3. Erich Vieth says:

    From a Reddit post here: http://www.reddit.com/r/askscience/comments/mouyg/if_the_human_brain_were_a_computer_what_would_its/

    The architecture on which a modern PC is based is known as the Von Neumann architecture. Such systems have a stored set of instructions (the program), a memory (containing data), and a processor that fetches the instructions and applies them to the data.
    The instructions and the data are conceptually different, and the instructions are executed one at a time in sequence (with some exceptions in modern CPUs, see below), with the result of the operations written to memory too. Turing outlined the power of such a machine and Von Neumann invented the practical architecture.
    The human brain is so completely unlike this architecture it cannot be overstated. First of all consider practical differences. In a PC, programming languages are used to abstract from physical machine operations into a higher level that is easier for the human coders to work with. Then these instructions are compiled to the logical instructions, and then the whole program is run at once.
    The human brain has none of these steps. There is no separation between instructions and data, no need for a higher-level language (because there’s no programmer), no compilation process. Input and output are happening continuously, and any “modifications to the program” must be made on the fly.
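    The stored-program design the comment describes can be sketched in a few lines. This is a toy illustration of my own (the instruction names and the sample program are invented, not drawn from the comment): program and data share one memory, and a processor fetches and executes one instruction at a time.

```python
# A toy von Neumann machine: instructions and data live side by side
# in one memory, and a processor executes them one at a time.
def run(memory):
    pc, acc = 0, 0                # program counter and accumulator
    while True:
        op, arg = memory[pc]      # fetch the next instruction
        if op == "LOAD":
            acc = memory[arg][1]  # read a data cell into the accumulator
        elif op == "ADD":
            acc += memory[arg][1]
        elif op == "STORE":
            memory[arg] = ("DATA", acc)  # write the result back to memory
        elif op == "HALT":
            return acc
        pc += 1                   # advance to the next instruction in sequence

# One memory holds both the program (cells 0-3) and the data (cells 4-6):
program = [
    ("LOAD", 4), ("ADD", 5), ("STORE", 6), ("HALT", 0),
    ("DATA", 2), ("DATA", 3), ("DATA", 0),
]
print(run(program))  # 2 + 3 -> 5
```

    The brain, as the comment notes, has no counterpart to any of these steps: no program counter, no fetch-execute cycle, no separation of code from data.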
    Biology has shown us that the synaptic connections between our neurons allow each neuron to approximate a very simple function, summing inputs and transforming them to a different output. All of these simple functions working in parallel somehow allow our brains to represent extremely complex functions. The research in AI in this area began in the 50s, initially called Parallel Distributed Processing, later Artificial Neural Networks. Relevant wikipedia: pdp, ann
    These systems (although implemented on a traditional PC) attempted to emulate individual neurons in software and combine them to achieve parallel processing somewhat analogous to the human brain. They were only very crude approximations of the biological neuron, but it’s a huge breakthrough because it’s the first time we even approximated the computing paradigm of biological neural networks.
    Recently, more complex and biologically accurate neural network models have been created, and neurobiologists even use computational modelling in their research. These models share some of the features of human cognition, such as distributed representations (no single point of failure), learning, and plasticity.
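    The artificial neuron the comment describes — summing inputs and transforming them to an output — can be sketched in a few lines. This is a minimal illustration with hand-picked weights of my own, not a model from the research cited:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs passed through
    a squashing (sigmoid) function, yielding an output in (0, 1)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

def layer(inputs, weight_rows, biases):
    """Many simple neurons working in parallel on the same inputs."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Weights chosen by hand so this single neuron approximates logical AND:
w, b = [10.0, 10.0], -15.0
print(round(neuron([1, 1], w, b), 2))  # 0.99 -- near 1: both inputs on
print(round(neuron([0, 1], w, b), 2))  # 0.01 -- near 0: one input off
```

    Each unit is trivially simple; the interesting behavior comes only from many of them wired together in parallel, which is the comment's point.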

  4. Erich Vieth says:

    From Sam Harris’ interview of David Krakauer, who is President and William H. Miller Professor of Complex Systems at the Santa Fe Institute.

    They said perhaps what Alan Turing did in his paper on intelligent machinery has given us the mathematical machinery for understanding the brain itself. At that point, it became a metaphor. John Von Neumann himself realized it was a metaphor, but he thought it was very powerful. So that’s the history. Now, back into the present. As you point out, there is a tendency to be a bit, you know, epistemologically narcissistic. We tend to use whatever current model we use and project that onto the natural world as almost the best-fitting template for how it operates.

    Here is the value, or the utility and disutility, of the concept. The value of what Turing and Von Neumann did was to give us a framework for starting to understand how a problem-solving machine could operate. We didn’t really have in our mind’s eye an understanding of how that could work, and they gave us a model of how it could work. For many reasons, some of which you’ve mentioned, the model is highly imperfect. Computers are not robust. If I stick a pencil in your CPU, your machine will stop working. But I can sever the two hemispheres of the brain, and you can still function. You’re very efficient. Your brain consumes about 20% of the energy of your body, which is about 20 watts. It’s 20% of a lightbulb. Your laptop consumes about that, and has, you know, some tiny fraction of your power. And they’re highly connected. The neurons are densely wired, whereas that’s not true of computer circuits, which are only locally wired. Most important, the brain is constantly rewiring and adapting based on inputs, and your computer is not.

    So we know the ways in which it’s not the same. But as I say, it’s useful as a thought experiment for how the brain might operate. That’s the computer term. Now let’s take the information term. That magazine article you mentioned is criticizing the information concept, not the computer concept—which is limited, and we all agree, but the information concept is not, right? So we’ve already determined what information is mathematically. It’s the reduction of uncertainty. Think about your visual system: When you open your eyes in the morning and you don’t know what’s out there in the world, electromagnetic energy, which is transduced by photoreceptors in your retina and then transmitted through the visual cortex, allows you to know something about the world that you did not know before.

    It’s like going from the billiard balls all over the table to the billiard balls in a particular configuration. Very formally speaking, you have reduced the uncertainty about the world. You’ve increased information, and it turns out you can measure that mathematically. The extent to which that’s useful is proved by neuro-prosthetics. The information theory of the brain allows us to build cochlear implants. It allows us to control robotic limbs with our brains. So it’s not a metaphor. It’s a deep mathematical principle. It’s a principle that allows us to understand how brains operate and reengineer it. I think the article is so utterly confused that it’s almost not worth attending to.
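    Krakauer’s point that the reduction of uncertainty can be measured mathematically is the standard Shannon measure. A minimal sketch (the billiard-ball numbers below are my own illustration, not from the interview):

```python
import math

def entropy(probs):
    """Shannon entropy in bits: the average uncertainty of a distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Before looking: four equally likely arrangements -> 2 bits of uncertainty.
before = entropy([0.25, 0.25, 0.25, 0.25])
# After looking: one arrangement is certain -> 0 bits of uncertainty remain.
after = entropy([1.0])
print(before - after)  # information gained by observing: 2.0 bits
```

    Going from "any arrangement is possible" to "this particular arrangement" reduces entropy, and that reduction is the information gained.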

    Now, that’s information. Information processing: If that’s synonymous in your vocabulary with computing in the Turing sense, then you and I just agreed that it’s not right. But if information processing is what you do with Shannon information, for example, to transduce electromagnetic impulses into electrical firing patterns in the brain, then it’s absolutely applicable—and how you store it, and how you combine information sources. When I see an orange, it’s orange color, and it’s also a sphere. I have tactile, mechanical impulses, and I have visual electromagnetic impulses. In my brain, they’re combined into a coherent representation of an object in the world. The coherent representation is in the form of an informational language of spiking. It’s extraordinarily useful. It has allowed us to engineer biologically mimetic architectures, and it’s made a huge difference in the lives of many individuals who have been born with severe disabilities.
