The August 6, 2009 edition of Nature (available online only to subscribers) includes a fascinating letter by Ajit Varki, a Professor of Medicine and Cellular & Molecular Medicine at the University of California San Diego, La Jolla.
Varki begins his letter by recognizing some of the unique features of human animals, such as theory of mind, “which enables inter-subjectivity.” These impressive human cognitive abilities might have been positively selected by evolution “because of their benefits to interpersonal communication, cooperative breeding, language and other critical human activities.”
Varki then describes his conversations with a geneticist named Danny Brower (now deceased), who was fascinated with the question of why theory of mind emerged only recently, despite millions of years of apparent opportunity. Brower offered Varki a tantalizing explanation for this delay:
[Brower] explained that with full self-awareness and inter-subjectivity would also come awareness of death and mortality. Thus, far from being useful, the resulting overwhelming fear would be a dead-end evolutionary barrier, curbing activities and cognitive functions necessary for survival and reproductive fitness. . . . in his view, the only way these properties could become positively selected was if they emerged simultaneously with neural mechanisms for denying mortality.
In other words, self-awareness is a double-edged sword that tends to kill off (through terror-induced paralysis) those who become too readily self-aware. Therefore, self-awareness evolved together with denial of death: Brower was suggesting that those who became too clearly self-aware would be incapacitated by something of which chimpanzees, dolphins and elephants remain blissfully ignorant: the fact that they will inevitably die.
Varki suggests that Brower’s idea would not only add to ongoing discussions of the origins of human uniqueness, but it could shed light on many puzzling aspects of human psychology and culture:
[I]t could also steer discussions of other uniquely human “universals,” such as the ability to hold false beliefs, existential context, theories of afterlife, religiosity, severity of grieving, importance of death rituals, risk-taking behavior, panic attacks, suicide and martyrdom.
Perhaps we are simply incapable of viewing life “objectively,” in that evolution has rigged us up with equipment that protects us by deluding us. It seems, then, that the co-evolution of delusion and awareness (if this is the case) dovetails quite well with Terror Management Theory (TMT), which I summarized in a post entitled “We are gods with anuses: another look at terror management theory”:
The problem is that the evolution of our powerful ability to be conscious made us aware that we are mortal beings and that all of us are heading toward inevitable death. The “solution” is also offered by our highly developed cognitive abilities: we have developed the ability to wall off our cognitively toxic fear of death by “objectifying” our existences and living idealized lives free from fear of death.
Brower and Varki thus suggest that the ability of humans to be extraordinarily aware and curious is too dangerous to be dispensed by evolution in its pure form. Too much knowledge might simply be too dangerous. To safely allow the continuation of the species, human awareness might need to be deluded and distorted in ways that account for some of the most baffling “cultural” aspects of what it means to be human.
This approach sounds promising to me, though it also raises many other questions, such as this one: Why are some of us apparently immune to these delusions? Why are some of us much more able to disbelieve claims of gods and afterlives?
According to Sharon Begley’s article at Newsweek, “Lies of Mass Destruction,” people are susceptible to upside-down reasoning. She cites a large team of researchers who studied people who believe the falsehood that Saddam Hussein caused 9/11. The researchers concluded that these people believed it because the U.S. invaded Iraq. They refer to this upside-down process as “inferred justification.” Begley sums it up:
Inferred justification is a sort of backward chain of reasoning. You start with something you believe strongly (the invasion of Iraq was the right move) and work backward to find support for it (Saddam was behind 9/11). “For these voters,” says Hoffman, “the sheer fact that we were engaged in war led to a post-hoc search for a justification for that war.”
The researchers published their findings in Sociological Inquiry, in a paper entitled “‘There Must Be a Reason’: Osama, Saddam, and Inferred Justification.” Here’s an excerpt:
The primary causal agent for misperception is not the presence or absence of correct information . . . Our explanation draws on a psychological model of information processing that scholars have labeled motivated reasoning. This model envisions respondents as processing and responding to information defensively, accepting and seeking out confirming information, while ignoring, discrediting the source of, or arguing against the
substance of contrary information. Motivated reasoning is a descendant of the social psychological theory of cognitive dissonance, which posits an unconscious impulse to relieve cognitive tension when a respondent is presented with information that contradicts preexisting beliefs or preferences. Recent literature on motivated reasoning builds on cognitive dissonance theory to explain how citizens relieve cognitive dissonance: they avoid inconsistency, ignore challenging information altogether, discredit the information source, or argue substantively against the challenge. The process of substantive counterarguing is especially consequential, as the cognitive exercise of generating counterarguments often has the ironic effect of solidifying and strengthening the original opinion, leading to entrenched, polarized attitudes. This confirmation bias means that people value evidence that confirms their previously held beliefs more highly than evidence that contradicts them, regardless of the source.
In her article, Begley suggests that the current health care debate stems from the same cognitive vulnerabilities.
There are legitimate, fact-based reasons to oppose health-care reform. But some of the loudest opposition is the result of confirmatory bias, cognitive dissonance, and other examples of mental processes that have gone off the rails.
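The asymmetric weighting the researchers describe can be illustrated with a toy simulation (entirely my own construction, not a model from the paper): an agent that down-weights disconfirming evidence grows more confident even when the evidence it sees is perfectly balanced.

```python
def update_belief(belief, evidence, discount=0.2):
    """Toy motivated-reasoning update (illustrative only).

    belief: probability in (0, 1) that the claim is true.
    evidence: +1 supports the claim, -1 contradicts it.
    Contradicting evidence is down-weighted by `discount`,
    mimicking the confirmation bias described above.
    """
    confirming = evidence * (belief - 0.5) > 0
    weight = 1.0 if confirming else discount
    return min(max(belief + 0.05 * weight * evidence, 0.01), 0.99)

belief = 0.6  # starts mildly convinced the claim is true
for e in [+1, -1] * 50:  # perfectly balanced stream of evidence
    belief = update_belief(belief, e)
print(belief)  # ends far more confident than it began
```

The numbers (step size, discount factor) are arbitrary; the point is only that asymmetric updating turns neutral evidence into entrenchment.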
Quick! Name a small and numerous component in the brain that allows us to think.
If you said “neuron,” you would be only partially correct. According to Carl Zimmer’s blog at Discover, “The Loom,” evidence is accumulating that thinking is also accomplished by astrocytes:
[Astrocytes]—named for their starlike rays, which reach out in all directions—are the most abundant of all glial cells and therefore the most abundant of all the cells in the brain. They are also the most mysterious. A single astrocyte can wrap its rays around more than a million synapses. Astrocytes also fuse to each other, building channels through which molecules can shuttle from cell to cell.
To put glia into a broader perspective, consider Zimmer’s introduction to his post on glia:
I’ve asked around for a good estimate of how many neurons are in the human brain. Ten billion–100 billion–something like that, is the typical answer I get. But there are actually a trillion other cells in the brain. They’re known as glia, which is Latin for glue–which gives you an idea of how little scientists have thought of them.
It has now been shown that astrocytes can sense incoming signals, respond with calcium waves, and produce outputs of their own.
In other words, they have at least some of the requirements for processing information the way neurons do. Alfonso Araque, a neuroscientist at the Cajal Institute in Spain . . . [found] that two different stimulus signals can produce two different patterns of calcium waves (that is, two different responses) in an astrocyte. When they gave astrocytes both signals at once, the waves they produced in the cells were not just the sum of the two patterns. Instead, the astrocytes produced an entirely new pattern in response. That’s what neurons—and computers, for that matter—do. If astrocytes really do process information, that would be a major addition to the brain’s computing power . . . neuroscientist Andrew Koob suggests that conversations among astrocytes may be responsible for “our creative and imaginative existence as human beings.”
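The key observation, that the response to combined stimuli is not the sum of the responses to each stimulus alone, is the signature of nonlinear processing. A toy numerical sketch (my own construction, not a model from Araque’s experiments) makes the point:

```python
def astrocyte_response(stimulus):
    """Toy saturating response curve, a stand-in for a calcium-wave
    pattern. Purely illustrative, not a published model."""
    return stimulus ** 2 / (1.0 + stimulus ** 2)

signal_a, signal_b = 1.0, 2.0
combined = astrocyte_response(signal_a + signal_b)
linear_sum = astrocyte_response(signal_a) + astrocyte_response(signal_b)
print(combined, linear_sum)  # the combined response is not the sum of the parts
```

Because the curve is nonlinear, the combined output is a genuinely new value rather than a superposition, which is the minimal property a cell needs to do computation rather than mere relaying.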
Psychiatrist Randolph Nesse is a gifted writer whose work I have followed for many years. I first learned of it when I read Why We Get Sick: The New Science of Darwinian Medicine. Nesse is one of the many respondents to this year’s annual question by Edge.org: “What will change everything?”
Nesse’s answer: RECOGNIZING THAT THE BODY IS NOT A MACHINE
As our knowledge of bodies improves, they fit less and less well within our venerable metaphor of the body as a “machine.” One of Nesse’s points is that we can describe machines, whereas a satisfying description of bodies remains elusive. The complexity of the body is, indeed, humbling:
We have yet to acknowledge that some evolved systems may be indescribably complex. Indescribable complexity implies nothing supernatural. Bodies and their origins are purely physical. It also has nothing to do with so-called irreducible complexity, that last bastion of creationists desperate to avoid the reality of unintelligent design. Indescribable complexity does, however, confront us with the inadequacy of models built to suit our human preferences for discrete categories, specific functions, and one directional causal arrows. Worse than merely inadequate, attempts to describe the body as a machine foster inaccurate oversimplifications. Some bodily systems cannot be described in terms simple enough to be satisfying; others may not be described adequately even by the most complex models we can imagine.
[Related DI post: The Brain is not a Computer]