Does moral action depend on “reasoning”? The Templeton Foundation has assembled various prominent thinkers and sought their answers.
Neuroscientist Michael Gazzaniga’s essay is devoid of any ghost in the machine:
What if most humans, regardless of their culture or religious beliefs or age or sex, chose the same option when faced with a moral conflict? What if those same people gave wildly different reasons for why they made their particular choices? This, in fact, is the state of affairs for much of our moral behavior. Recent research in human brain science and ancillary fields has shown that multiple factors feed into the largely automatic and deterministic processes that drive our moral decisions.
Gazzaniga cautions us that his mechanistic view of human decision-making does not make obsolete “the value of holding people in a society accountable for their actions,” though it does suggest that the “endless historical discussion” of “free will and the like” has little or no meaning.
What evidence substantiates Gazzaniga’s view?
First, most scientific research shows that morality is largely universal, which is to say, cross-cultural. It is also easily revealed to be present in young infants. It has a fixed sequence of development and is not flexible or subject to exceptions like social rules. Indeed, recent brain-imaging studies have found that a host of moral judgments seem to be more or less universally held and reflect identifiable underlying brain networks. From deciding on fairness in a monetary exchange to rendering levels of punishment to wrongdoers, the repertoire of common responses for all members of our species is growing into a rich list. [Further,] all decision processes resulting in behaviors, no matter what their category, are carried out before one becomes consciously aware of them.
Psychologist Joshua D. Greene compares morality to cameras:
My camera has a set of handy, point-and-shoot settings (“portrait,” “action,” “landscape”) that enable a bumbler like me to take decent pictures most of the time. It also has a manual mode that allows me to adjust everything myself, which is great for those rare occasions when I want to try something fancy. A camera with both automatic settings and a manual mode exemplifies an elegant solution to a ubiquitous design problem, namely the trade-off between efficiency and flexibility. The automatic settings are highly efficient, but not very flexible, and the reverse is true of the manual mode. Put them together, however, and you get the best of both worlds, provided that you know when to manually adjust your settings and when to point and shoot.
The human brain employs a similar hybrid design. Our brains have “automatic settings” known as emotions. . . . Our brains also have a “manual mode,” an integrated set of neural systems that support conscious reasoning, enabling us to respond to life’s challenges in a more flexible way, drawing on situation-specific knowledge . . . Recent research has shown that moral judgment depends critically on both automatic settings and manual mode.
Not one of the responses is an unmitigated “yes.” Clearly, there is something more going on in moral beings than “reasoning” (I am assuming that “reasoning” refers to some sort of conscious “application” of “moral rules”). Hence, the participants spend quite a bit of their energy exploring what else is going on in moral action, above and beyond “moral reasoning.” Most of them focus on the emotions, which is something that attracted David Hume’s attention centuries ago:
“Reason is and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them.”
[A Treatise of Human Nature (2nd ed.), Book II, Part III, Section III (“Of the influencing motives of the will”) (1739)].
I would agree that there is much more to moral action than “reasoning.” If moral action were based on “reasoning,” then well-informed people would tend to share the same values, but they obviously do not.
Like all vague questions, Templeton’s invites the participants to sketch out their views, which include many definitional excursions. Reading these short essays was enjoyable and thought-provoking, but fully engaging with some of the arguments requires assuming that it is meaningful to speak of “reasoning” as distinct from emotion, an assumption I am increasingly unwilling to make. Some of the essays also speak of reasoning as though it were a solitary endeavor, an assumption I am equally unwilling to make.
Many of the essays gravitate toward the question of “free will” (e.g., John Kihlstrom’s response), a question that seems to me a prerequisite to discussing Templeton’s.
Again, this is a good collection of thoughtful answers that I found enjoyable, even though I didn’t agree with some of the points made.