How often do you hear someone say that the brain is a computer? This statement is not literally true. The brain is certainly not like a desktop computer. Brains don’t look like computers; there’s no CPU in the head. Neurons aren’t all wired together to an executive control center. Human brains have a massively parallel architecture. Cognitive scientists who have carefully thought through this issue arrive at this same conclusion: the brain does not really resemble a computer, certainly not any sort of computer in general use today.
The brain as computer is a seductive metaphor. According to Edwin Hutchins, “The last 30 years of cognitive science can be seen as attempts to remake the person in the image of the computer.” See Cognition in the Wild (1996).
Metaphors are models, however, and models are imperfect versions of the reality they portray. Metaphors accentuate certain parts of reality while downplaying other parts.
Unfortunately, many people “reify” the brain-as-computer metaphor: they accept this metaphor as literal truth, leading to various misunderstandings about human cognition.
Here’s another big difference between brains and computers: human cognition is fault-tolerant and robust. In other words, our minds continue to function even when the information is incomplete (e.g., while we’re driving in the rain) or when our purposes or options are unclear (e.g., navigating a cocktail party). Computers, on the other hand, are always one line of code away from freezing up.
In Bright Air, Brilliant Fire: On the Matter of the Mind (1992), Gerald M. Edelman writes that “The world is not a piece of [computer] tape . . . and the brain is not a computer.” The brain-as-computer metaphor invites rampant functionalism: the notion that any old hardware will do, that I could implement my own cognition on any other piece of hardware.
Just because brains and desktop computers often arrive at similar results, though, doesn’t mean that the brain works like a computer. Edelman also points out that people often believe there are computer-like rules governing thought: that the brain thinks by manipulating context-free symbols according to some sort of “rules” that have yet to be specified. To have any sort of “rules,” though, there must first be uncontested “facts.” But there is no such thing as a context-free fact. Perhaps there could be if people used identical methods of categorizing the world. Contrary to what many people believe, however, human categorization does not work by necessary and sufficient conditions. See Cognitive Psychology: An Overview for Cognitive Scientists, by Larry Barsalou (1992), and Women, Fire, and Dangerous Things, by George Lakoff (1987). The world is unlabeled. Without pre-labeled “things,” computers flounder. Human brains are different: they thrive on pattern matching, something with which computers struggle. See What Computers Still Can’t Do, by Hubert L. Dreyfus (1992).
Scott Kelso points out that the brain is not a computer that manipulates symbols. “The nervous system may act as if it were performing Boolean functions . . . People can be calculating, but the brain does not calculate.” See Dynamic Patterns (1995). Even cognitive scientists who believe that the brain is a machine (an extremely sophisticated one), such as Patricia Churchland, warn us to handle the computer metaphor with extreme caution. We are pattern matchers and pattern completers. See Neurophilosophy: Toward a Unified Science of the Mind/Brain, by Patricia Smith Churchland (1986).
As Andy Clark points out, we are great at Frisbee but bad at math. See Being There: Putting Brain, Body and World Together Again, by Andy Clark (1997). Clark suggests that a better understanding of the brain is as a complex, ever-evolving control system connecting brain, body and world. Paul Churchland also notes that we are horrible at logic and other types of systematic thinking. We study math for years, yet look how we still struggle with it as adults! If the brain were a computer, this would not be the case. See The Engine of Reason, the Seat of the Soul, by Paul M. Churchland (1995).
The “frame problem” is further proof that brains are not like computers. We can almost instantly bring relevant information to bear on a situation; no computer comes close to doing this the way human brains do.
Because of these many problems, William Bechtel concludes that the brain-as-computer metaphor is now dated: “[T]he inspiration for developing accounts of how cognition works is no longer the digital computer; instead, knowledge about how the brain works increasingly provides the foundation for theoretical modeling.” See A Companion to Cognitive Science, ed. W. Bechtel and G. Graham (1998).
—
Why does it matter if we ignore all of this evidence and insist that the brain is a computer? Here are some reasons:
- The brain-as-computer metaphor sees the brain as hardware, insisting that all people who share meaning do so by manipulating the same symbols in their heads according to the same “rules.” This view overlooks the tremendously complex and idiosyncratic wiring that makes your brain different from mine. As though that lifetime of wiring and pruning of tens of billions of neural connections weren’t integral to you being you! As though there weren’t a critical connection between that three-pound wet “computer” in your head and your body!
- Insisting that brains are computers makes brains commodities, thereby denigrating the sanctity and idiosyncratic history of each individual.
- The brain-as-computer metaphor fails to explain how words can have meaning. What do symbols in the head ultimately refer to? More symbols? That’s a non-starter. An alternate approach to cognition, embodied cognition, gives word meaning roots. http://dangerousintersection.org/?p=177
- Because the brain-as-computer metaphor sees thinking as symbol-manipulation in the head, it fails to explain the connection between world, body and cognition. It also ignores the well-established interplay between emotion and rationality. http://dangerousintersection.org/?p=146
- The brain as computer metaphor can erroneously lead to a belief in disembodied thought, along with related mischief, such as the possibility of the fully-functioning disembodied soul.
- None of the above is to deny that the brain can sometimes be seen, for limited purposes, to be like a computer. This comparison can be fun and sometimes useful, but we must be careful that we don’t reify the brain-as-computer metaphor. Why? Because the brain is not a computer.
Signed,
A Head in a Jar
I think you're using a strict interpretation of the word "computer". Obviously the brain is not very similar to that piece of silicon and plastic on your lap or under your desk; however, the comparison of the brain to an analog computer or a Field-Programmable Gate Array (FPGA), where the "software" is the structure of the computer itself, seems fairly accurate. Furthermore, the brain has a huge number of modules that have very different functions, like color correction, shape detection, and pattern matching. We've seen that a missing gene or a traumatic injury to the brain can knock out one of these modules, e.g. facial recognition. Granted, these aren't always, if ever, physically bound to a clump of neurons in a particular part of the brain, but they tend to be localized and function as units, and can be reused for different purposes. We use many of the same components for actual visual perception that we do when we imagine objects in the mind's eye (see Pinker's "How the Mind Works" for more examples). Lakoff also brings this to attention with his research on metaphor (up/down, greater/less, etc.), as you've noted in the entry you linked to in this one.
In such a model, one might compare the environment to software, where stimuli are responsible for the behavior of both the cognitive "hardware" modules and the actions of the person, just as a program is responsible for the behavior of the electronic components of a computer and produces output in the form of text on a monitor.
I hope there was at least a little sense in that rant and that I addressed some of the concerns you enumerated. I think if you broaden the definition, you needn't worry about thinking of the brain-as-computer model as pure symbol manipulation. Nor do I think one needs to separate emotion and rationality so cleanly. Obviously emotion (in that it serves as, and for all intents and purposes is, motivation) drives "rationality", whatever that is.
I'd spend more time on it, but I really ought to get back to programming before my boss asks me to come in on Saturday….
I might have to disagree here. He gives reasons for his argument such as that there is no central processing unit within the brain and that the neurons don't all connect to a central unit. Those statements I believe to be true, but consider how data gets from an input device to the processor: it has to travel through many similar channels to reach specific parts, much like in the brain.
Jim:
Your comment is good food for thought. You've compared the brain to analog computers and Field-Programmable Gate Arrays (FPGA), where the “software” is the structure of the computer itself. I admit that I don't know anything about these devices. If you have a basic reference as to how those machines work, I'd be interested.
I don't deny that a missing gene or a traumatic injury can knock out what have often been termed mental "modules." My point was that the human brain displays great plasticity in compensating for many defects. For example, there is evidence that what would have been areas of the brain dedicated to hearing actually change over to serve visual functioning in deaf people. To contrast, one tiny piece of code can cause a PC to freeze (I've actually seen this about 500 times this year!). My children also freeze up on occasion, especially when I tell them to clean their rooms, but I think that's a different issue altogether.
Pinker's book is a good one, indeed.
Thank you for your comment.
Responding to Erich's question about analog computers and field-programmable gate arrays (FPGA), Wikipedia has some good explanations:
http://en.wikipedia.org/wiki/Analog_computer http://en.wikipedia.org/wiki/FPGA
In a nutshell, an analog computer is an electronic circuit that mimics the behavior of some other physical system — e.g., a chemical reaction, a mechanical assembly, the trajectory of a ballistics projectile, etc. The analog computer works because electronic components (resistors, capacitors, inductors, etc.), chemicals (moderators, catalysts, flow rates, etc.), mechanical assemblies (springs, dampers, lever arms, etc.), falling objects (mass, air resistance, coefficients of friction, etc.), and many other physical systems, can all be modeled using the exact same mathematical equations. Thus, an analog computer can serve as a substitute — or analog — for building an actual prototype of a physical device. This is a huge benefit to engineers and designers, because an analog computer can be quickly and easily reprogrammed, enabling them to evaluate many different prototype designs without actually building them. Analog computers were popular in the days before digital computers, but were replaced by digital computers because the latter can be reprogrammed even more quickly and accurately. Reprogramming an analog computer involves physically rewiring it, using jumper wires, and this involves time and the risk of human error. Reprogramming a digital computer merely involves loading and running a different software program. Yes, the program takes time to create, but the process is still much faster, more efficient, more reliable, more robust, etc., than is rewiring.
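The "exact same mathematical equations" point can be made concrete with a small numerical sketch (the component values are purely illustrative): a discharging RC circuit and a mass coasting against friction both obey dx/dt = -x/tau, so matching the time constants makes one physical system compute the other's behavior.

```python
# Two physically different systems -- an RC circuit discharging, and a moving
# mass slowed by friction -- obey the same first-order equation dx/dt = -x/tau.
# Matching the time constants (tau = R*C for the circuit, tau = m/b for the
# mass) makes one system a working "analog" of the other.

def simulate_decay(x0, tau, dt=0.001, steps=2000):
    """Euler-integrate dx/dt = -x/tau and return the trajectory."""
    x, traj = x0, []
    for _ in range(steps):
        traj.append(x)
        x += dt * (-x / tau)
    return traj

# RC circuit: R = 1000 ohms, C = 0.0005 farads  -> tau = 0.5 s
circuit = simulate_decay(x0=5.0, tau=1000 * 0.0005)

# Mass with friction: m = 2 kg, b = 4 kg/s      -> tau = 0.5 s
mass = simulate_decay(x0=5.0, tau=2 / 4)

# Same equation, same time constant: the circuit "computes" the mechanics.
assert all(abs(a - b) < 1e-9 for a, b in zip(circuit, mass))
```

An engineer with an analog computer would dial in the electrical equivalents of m and b rather than building the mechanical assembly itself.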
An FPGA is an electronic device that comes from the factory as a huge grid ("array") of generic logic components ("gates") — think of it as a tiled floor, where each tile contains every possible logic gate (AND, OR, NOT, etc.). An electrical engineer then uses the FPGA to create some other electronic device (a controller for a microwave oven, for example) by setting the logic gate inside each 'tile' and connecting them together (the FPGA also has its own interconnective wiring). FPGAs are nice because they can be easily reprogrammed by the engineer ("field programmable") to change the circuit or fix a mistake. Thus, they are good for prototyping or when a design is undergoing frequent change.
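As a rough sketch (the class and truth tables below are invented purely for illustration), each FPGA 'tile' can be modeled as a small lookup table (LUT) that is configured, after manufacture, to behave as any logic gate; reprogramming the chip just means loading different tables and different wiring:

```python
# Toy model of an FPGA logic cell: a configurable 2-input lookup table.

class LUTCell:
    """A 2-input logic cell configured by a 4-entry truth table."""
    def __init__(self, truth_table):
        # truth_table maps (a, b) -> output; see the AND example below.
        self.table = truth_table

    def __call__(self, a, b):
        return self.table[(a, b)]

AND = LUTCell({(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1})
OR  = LUTCell({(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1})

# "Wire" two configured cells together to build the circuit (a AND b) OR c.
def circuit(a, b, c):
    return OR(AND(a, b), c)

assert circuit(1, 1, 0) == 1
assert circuit(0, 1, 0) == 0
```

Changing the truth tables or the wiring yields an entirely different device, which is the sense in which the structure is the "software."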
Now, going back to Jim's comment, the reason he chose the analog computer and the FPGA to make his point about 'structure equals function' is because the analog computer and the FPGA:
1) are both general-purpose devices; i.e., they come from the factory with the ability to be physically reconfigured into many, many different circuits, and;
2) function according to how they are physically reconfigured.
Thus, in a sense, they are like the human brain: a newborn brain is a 'general-purpose' device that becomes physically reconfigured (and develops specific function) as the child learns.
Of course, a far better analog to the human brain is the neural network — an electronic circuit that not only has the two traits mentioned above, but also has feedback circuitry that causes the device to physically reconfigure itself as it 'learns.' Indeed, some neural networks are built such that their creator cannot even ascertain their internal physical structure after they 'learn,' making them even more like a human brain.
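That feedback idea can be sketched in a few lines (a toy example, not any particular neural-network product): a perceptron whose "wiring" (its weights) is nudged by error feedback until the structure itself encodes what was learned, here the logical OR function.

```python
# Minimal perceptron: the "program" is the connection weights, and feedback
# from errors reconfigures them -- a crude analogy to learning-driven rewiring.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights for 2-input binary classification."""
    w = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
            err = target - out
            # Feedback loop: each error nudges the "wiring".
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            bias += lr * err
    return w, bias

# Truth table for logical OR.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
for (x1, x2), target in data:
    assert (1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == target
```

After training, the learned behavior lives nowhere but in the final values of `w` and `b`; with many layers and units, reading the "structure" back out becomes as opaque as the comment above describes.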
Let us say that it is 1972 and the 4004 microprocessor chip set has recently been invented by Intel and we have 1000 of these 4004 chip sets in front of us. Here we are at the forefront of microprocessor technology and we decide to make a machine which automatically recalculates spreadsheets for people. The 4004 is slow but we figure that we will support spreadsheets with up to one thousand cells and use one processor for each cell. Our logic is sound because we know that parallel processing is OBVIOUSLY the best way to calculate a spreadsheet in which each cell's contents depends on the contents of the other cells. It sure pays to be smart and recognize that parallel jobs should be implemented with parallel hardware. Once we have the spreadsheet computer working we start selling it for $120,000 per copy. We obtain a patent on our technique of using parallel processors to solve spreadsheet problems. We mortgage our homes and put all of the money into our company because we absolutely know that we have the very best way to calculate spreadsheets, and it is patented. We have a lock on the market because we KNOW that parallel jobs are most efficiently implemented with parallel hardware. If venture capitalists doubt our plan, we tell them to talk to Gerald M. Edelman, who knows the difference between jobs which are obviously serial and jobs which are obviously parallel.
The arguments being used regarding human brain simulation with a PC could be used to argue that:
There will never be a PC capable of doing spreadsheets, because the structure of a PC is “all wrong” for the problem being solved.
Anybody want to buy my parallel processor spreadsheet computer? Marked down to $10,000, which is a bargain considering that it has 1000 processors and weighs a ton.
don
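Don's underlying point, that a strictly serial machine handles an "obviously parallel" job like spreadsheet recalculation just fine, can be sketched in code: evaluate the interdependent cells one at a time, in dependency order. The cell names and formulas below are made up for illustration.

```python
# Serial recalculation of a spreadsheet with interdependent cells.

def recalculate(formulas):
    """formulas: cell -> (list of dependency cells, function of their values)."""
    values, visiting = {}, set()

    def evaluate(cell):
        if cell in values:
            return values[cell]
        if cell in visiting:
            raise ValueError("circular reference at " + cell)
        visiting.add(cell)
        deps, fn = formulas[cell]
        # Recursively evaluate dependencies first, then this cell -- one at a
        # time, on a single processor.
        values[cell] = fn(*[evaluate(d) for d in deps])
        visiting.discard(cell)
        return values[cell]

    for cell in formulas:
        evaluate(cell)
    return values

sheet = {
    "A1": ([], lambda: 10),
    "A2": ([], lambda: 32),
    "B1": (["A1", "A2"], lambda a, b: a + b),  # =A1+A2
    "C1": (["B1"], lambda b: b * 2),           # =B1*2
}
result = recalculate(sheet)
assert result["C1"] == 84
```

The dependency structure of the problem says nothing about whether the hardware must be parallel; a serial machine just visits the cells in a suitable order.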
A mind is a terrible thing to waste, but the hardware in which a mind resides may take many forms. The prefrontal cortex of our brain acts as a referee between all the other processor centers to select actions that are more sophisticated than those already pre-programmed (spinal reflexes, learned responses, etc).
A researcher (Mark Tilden) at Los Alamos has created insect-like robots with a cortex consisting of not megabytes or even kilobytes, but just 12 transistors. This unlikely candidate out-performs Pentium-based robot mice in solving mazes. These 12 transistors exhibit memory and learning. How does this happen? Because the information is stored in waveforms, not in bits.
Shut it down, and it loses everything. It has to be taught again how to walk. Now this seems to be a likely path toward understanding the difference between a mind, and just a computer.
Where is your reference to this device?
From above: "Paul Churchland also notes that we are horrible at logic and other types of systematic thinking. How many years do we study math, but look how we still struggle as adults! If the brain were a computer, this would not be the case. The Engine of Reason, the Seat of the Soul, Paul M. Churchland (1995)."
I love Paul and Patricia Churchland. If you haven't read the Feb 12, 2007 New Yorker article about them, it is a must read.
Nevertheless, note the recursive and reflexive qualities of Paul's argument. Humans cannot think logically and systematically so here is an illogical and unsystematic argument which you humans will believe simply because you aren't very logical and systematic.
The logical and systematic truth is that life evolves in a particular environment and must evolve those qualities which allow it to survive in that specific environment. In a jungle, an organic biped omnivore must evolve abilities of agility, dexterity and thinking for the purposes of hunting, gathering, eating and reproducing.
On a desktop, a computer finds itself in a totally different environment. The eating and reproducing are all taken care of by electrical outlets and computer factories. In that environment, a logical and systematic entity would notice that computers should evolve those different qualities which are lacking yet needed in its environment. Those specific qualities are obviously "math, logic and systematic thinking".
Now that human hunting has been converted to gathering by the McDonald's drive-thru, humans have a need for less hunting and more logic and systematic thinking. The problem is that organic entities evolve so slowly. So, the need for logic and math is being fulfilled by computers.
Nevertheless, that doesn’t mean that humans, given enough time to evolve, couldn’t eventually have brains which could be as logical and as systematic as computers. Maybe genetic engineers can speed up the process.
Bottom line is that brains are just like computers only they evolved to emphasize different qualities because they evolved in different environments.
don
Evolution is a widely-used and complex term. http://en.wikipedia.org/wiki/Evolution_%28disambi…
Computers are not sentient, do not have the desire to replicate, and cannot acquire raw materials from the physical environment. (However, robots which contain multiple computers do indeed seem to fit the criteria a BIT better.)
This topic is thought-provoking and has been featured in science fiction.
http://en.wikipedia.org/wiki/Sentient_computer
Brains and computers have some similarities in terms of application, but not so much in terms of function.
It took me half an hour of diligent Googling to find an exact reference to these 12-transistor robots: I first saw them mentioned briefly in Scientific American. A few months after that, it was the cover story of the Feb 2000 Smithsonian Magazine: a "Redefining Robots" article about Mark Tilden at Los Alamos, not Sandia! That was my mis-attribution.
The brain does freeze up. Consider stammering, laughter and orgasm. Can you count to three during an orgasm? The brain's software also has a ctrl-alt-del function for its user, via a yawn. When the brain computer's I/O map is illegal, a local reboot occurs. The biggest reboot is the orgasm, almost a total shutdown/restart. That's why laughter is so beneficial and why stammering can be affected by altering the I/O map. The brain is a computer in that it is a software-centered device. Can information be manipulated without software? Parkinson's disease can be eased by placing electrodes in the brain at exact positions, thus modifying an I/O map and reducing the repetitive rebooting/shaking. Now that's something to think about.
Alan writes: "Parkinson's disease can be eased by placing electrodes in the brain at exact positions, thus modifying an I/O map and reducing the repetitive rebooting/shaking."
Indeed, placing electrodes in the brain can trigger all sorts of things, from memories of past events, to sensory nerve sensations, to motor nerve responses. In some types of brain surgeries, the brain is mapped with electrodes in this manner, so surgeons can try to save regions of particular value, or destroy regions that are causing problems. Some patients with epileptic seizures, for example, have been successfully treated with focal resection to "modify the I/O map." Using an EEG, plus very sophisticated mapping software, neurosurgeons localize the seizure foci, then they destroy the brain cells at the foci — the hope being that by removing the site(s) where the seizure begins, the seizures might stop. This is an extreme measure, but for patients who suffer hundreds of seizures per day, it might be their only hope.
Notwithstanding the Bush Administration's cuts to stem cell research, the next twenty years are likely to bring many amazing developments in neuroscience. As Baby Boomers age, one of their greatest fears is mental incapacity, so they will want money to flow toward brain research. Medical science has already solved many of the problems of cardiac failure, cancer, and other causes of death in the elderly, so finding preventions of and treatments for brain failure is the next holy grail.
Not that I want to get involved in the discussion, but here is a concise comparison…
http://scienceblogs.com/cognitivedaily/2007/03/th…
Great discussion guys, and ladies.
I am writing a philosophy of cognitive science essay on the topic
"In what ways could the brain be identical to a computer?"
I am confused as to where to start my discussion from, and what the structure of my essay should be.
After a lot of Googling I stumbled upon this website.
Please, any pointers/help will be of great help to me, as I am having sleepless nights over this issue.
Thanks a million, and my sincere apologies to those unconcerned.
Hey there Sammy, maybe an interesting aspect to include is the fact that humans seem to "want" ourselves to be somehow "better" than the computer. Even though our "best brains" get trounced by computers in calculating power, there is always that "human" aspect which seems to get lost.
However, at least in the Chess world, it seems that times have changed. The human grandmasters had been able to beat the computers, based on inconceivable subtleties of reason which even the strongest computers could not seem to fathom. Presently the computers are actually taking the lead in terms of the most subtle variations and producing creative new strategies. It seems ironic, but now all of the best chess computer programmers are no longer trying to emulate the human mind (albeit fascinating and unique) and are instead more intrigued by the computer "mind".
Some have theorized that a "mind" need only be of finite complexity. Therefore computers could soon have "minds", at least in terms of chess.
(PS. Remember to use a spellchecker and have a *person* proofread the final paper.)
Samson: "Computer" describes the entire class of calculating machines. There are analog and digital computers. Within digital, there are binary, trinary, and (mostly obsolete) BCD machines. Computers have been built from gears, relays, vacuum tubes, semiconductors, superconductors, and protein chains (custom DNA). Quantum computers of several varieties are under development.
Some computers are designed to a task, and others are rigidly programmed. There are heuristic computers (those that learn and self-program) and rigidly programmed computers (like your home computer). There are now computers that are rigidly programmed to learn at a particular level of abstraction (like spam filters or speech recognition). Computers have been programmed as environments in which software can grow, reproduce, compete, and evolve to a best fit for a particular task.
A brain is a thickening at one or both ends of a spinal cord. Both? Ask a paleontologist. Larger brains are made up of several distinct and interconnected analog processors, many of which are present in all chordate animals (but at different sizes). The brain is the interface between the higher senses (sight, sound, taste/smell, magnetic sense, etc.) and the parts of the body that move, digest, and reproduce. Touch is a primitive sense that the brain receives, but the body can react before the brain is even alerted to the touch (reflexes).
The mind involves more than just the brain. The endocrine system, the neural network, and the myelinated nerves of the body all affect the mind.
To write about the topic, start with a simple example. A single case to support. Once you fully understand the one example, reexamine your general topic.
To learn more: Scientific American back issues in libraries are a good resource. Their co-publication Scientific American Mind is more focused on issues of brain, mind, and consciousness.
@ Ben & Dan Klarmann & "Nicksmithdesign(Moderator)"
Thank you all for your help, comments & contributions; I can't emphasise enough how much it has helped me in tackling the topic at hand.
More of your ideas please, thanks.
Here are 11 more ways in which brains are different from computers:
Difference # 1: Brains are analogue; computers are digital
Difference # 2: The brain uses content-addressable memory
Difference # 3: The brain is a massively parallel machine; computers are modular and serial
Difference # 4: Processing speed is not fixed in the brain; there is no system clock
Difference # 5: Short-term memory is not like RAM
Difference # 6: No hardware/software distinction can be made with respect to the brain or mind
Difference # 7: Synapses are far more complex than electrical logic gates
Difference # 8: Unlike computers, processing and memory are performed by the same components in the brain
Difference # 9: The brain is a self-organizing system
Difference # 10: Brains have bodies
Bonus Difference: The brain is much, much bigger than any [current] computer
These differences were posted at "Developing Intelligence." The post includes a link to a detailed lecture on this topic.
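As an aside, Difference # 2 (content-addressable memory) is easy to sketch in code. The snippet below is a deliberately crude stand-in: instead of fetching by address, it retrieves whichever stored bit-pattern is closest to a partial or noisy cue, a toy version of recall by content. The patterns are made up for illustration.

```python
# Content-addressable recall: retrieve the stored pattern nearest the cue.

def hamming(a, b):
    """Count the positions where two equal-length patterns differ."""
    return sum(x != y for x, y in zip(a, b))

def recall(memories, cue):
    """Return the stored pattern most similar to the cue (pattern completion)."""
    return min(memories, key=lambda m: hamming(m, cue))

memories = [
    (1, 0, 1, 1, 0, 1),
    (0, 1, 0, 0, 1, 0),
    (1, 1, 1, 0, 0, 0),
]

# A corrupted cue (one bit flipped from the first memory) still retrieves it.
assert recall(memories, (1, 0, 1, 1, 0, 0)) == (1, 0, 1, 1, 0, 1)
```

A conventional RAM, by contrast, would return garbage if even one bit of the *address* were wrong.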
This brain-computer dichotomy list is misleading in many places. It is less wrong if "computer" means the common desktop one in front of you. But there are analog computers. There are parallel computers with asynchronous processing. There are computers with as many gates as there are synapses in the brain. There are even computers that store memory much like synapses do (in which 12 transistors have outperformed million-transistor Intel devices, see above). There are experimental computers that use biological parts to perform certain functions. There actually are some hardware/software distinctions in the brain; that's why brain surgery is possible.
The articles to which Erich linked elaborate on and soften the points. But the simple list looks like an argument from CDesign Proponentsists explaining how something that different must have had an omniscient designer.
From "Why Minds are not like computers," by Ari Schulman:
Comment by A.C.Grayling, in a book review at Salon.com:
"Minds are not brains. Please note that I do not intend anything non-materialistic by this remark; minds are not some ethereal spiritual stuff a la Descartes. What I mean is that while each of us has his own brain, the mind that each of us has is the product of more than that brain; it is in important part the result of the social interaction with other brains. As essentially social animals, humans are nodes in complex networks from which their mental lives derive most of their content. A single mind is, accordingly, the result of interaction between many brains, and this is not something that shows up on a fMRI scan. The historical, social, educational, and philosophical dimensions of the constitution of individual character and sensibility are vastly more than the electrochemistry of brain matter by itself."
http://www.salon.com/books/feature/2010/04/26/wis…
"The brain as computer metaphor can erroneously lead to a belief in disembodied thought, along with related mischief, such as the possibility of the fully-functioning disembodied soul."
I don't see how that would work — where are the hardwareless computers that would be required for that reasoning to make sense?
The opposite seems much more plausible to me: If you didn't have the example of computers to show that matter can be arranged to store memories and then manipulated to arrive at conclusions, it could erroneously lead you to believe that we need an immaterial soul.
Developing Intelligence has set forth "Ten Reasons Why the Brain is not a Computer" http://scienceblogs.com/developingintelligence/20…
Difference # 1: Brains are analogue; computers are digital
Difference # 2: The brain uses content-addressable memory
Difference # 3: The brain is a massively parallel machine; computers are modular and serial
Difference # 4: Processing speed is not fixed in the brain; there is no system clock
Difference # 5: Short-term memory is not like RAM
Difference # 6: No hardware/software distinction can be made with respect to the brain or mind
Difference # 7: Synapses are far more complex than electrical logic gates
Difference # 8: Unlike computers, processing and memory are performed by the same components in the brain
Difference # 9: The brain is a self-organizing system
Difference # 10: Brains have bodies
Bonus Difference: The brain is much, much bigger than any [current] computer
This latest list of distinctions is largely false unless one narrowly defines "computer" as "commercial, digital, serial computer" to distinguish it from the many analog and parallel computer systems that have been in use (scratch #s 1, 3, 4, and 5).
Also, there is a distinction between hardware and software in brains, as any fMRI specialist can tell you (#6).
And synapses are more complex than simple binary silicon logic gates, but not more than some other technologies under development (#7).
"Brains have bodies"? As opposed to autopilots having planes, or self-navigating vehicles?
I could quibble philosophically about "self-organizing," as I know how dependent computers are on computers to design each iteration of their evolution. But brains do literally rewire themselves as they learn, most markedly in the first couple of dozen years, for humans.
The overlap in function of memory and calculation is also no longer unique, as Tilden demonstrated over a decade ago.
I guess a Jeopardy win isn't sufficient demonstration of massive parallelism. I got to play with a huge analog computer some 30 years ago – it was an odd incarnation; I know that modern analogs are far more sophisticated. Anyway, about the same time as that fun experiment, I came across a dictionary published in the 1930s. I remember looking up "computer": one who computes.
{Update} – I didn't see until after my comment that the linked article was from 2007, so the Jeopardy win would have been unknown, but massive parallelism was not – I even participated in SETI by downloading a screensaver (from their site) that used idle computer time to aid in processing the huge streams of data.