Assume that Frans de Waal is correct when he writes that empathy is the foundation of morality, in that it wells up from deep in our bones and that it evolved over countless generations of our ancestors. What, then, are the functions of the moral rules and moral maxims (and yes, Commandments) that we hear every hour of every day? If these rules aren’t the wellspring of our inclinations to be kind and decent (and sometimes violent), what function do they serve? After all, it certainly seems that we are oftentimes guided by our moral rules, even if those rules don’t account for the deep empathy that fuels our conduct.
Philosopher of cognitive science Andy Clark considered this issue in a chapter titled “Connectionism, Moral Cognition, and Collaborative Problem Solving,” found in an excellent anthology titled Mind and Morals (edited by Larry May, Marilyn Friedman, and Andy Clark, 1996). This anthology, based on a conference held at Washington University, explores the interconnections between moral philosophy and cognitive science.
Clark sees moral rules and maxims “as the guides and signposts that enable collaborative moral exploration rather than as failed attempts to capture the rich structure of our individual moral knowledge.” He recognizes that many people see moral rules as establishing “necessary and sufficient conditions” for acting in certain ways. These people would argue that one acts morally by knowing the moral concepts and applying those concepts to real-world facts.
Clark disagrees. He refers to the work of Eleanor Rosch in asserting that “most (perhaps all) human concepts do not possess ‘classical’ structure. Thus, where the classical model predicts that instances should fall squarely within or outside the scope of a given concept (according to whether the necessary and sufficient conditions are or are not met), robust experimental results reveal strong so-called typicality effects. Instances are classified as more or less falling under a concept or category according, it seems, to the perceived distance of the instance from prototypical cases.”
Clark cautions that, as used in his discussion, a “prototype” is not a real-life concrete exemplar. “Rather, it is the notion of the statistical central tendency of a body of concrete exemplars.” He explains that a prototypical “pet,” for example, may include features of both dogs and cats. Most of us, most of the time, are repulsed by images of one person harming an innocent person. Although Clark does not use this example in his work, perhaps we carry within us a prototype representing that type of situation; when we encounter a real-life situation that resembles it, the match sends a negative signal and we are repulsed by the real-life scenario.
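To make the idea of a prototype as a statistical central tendency concrete, here is a minimal sketch of my own (it is not Clark’s, and the “pet” features and numbers are invented): the prototype is simply the mean of a handful of exemplar feature vectors, and new instances receive a graded typicality score based on their distance from that mean, rather than an all-or-nothing classification.

```python
# A minimal sketch (my illustration, not Clark's) of prototype-based
# categorization: the "prototype" is the statistical central tendency (here,
# the mean) of a body of concrete exemplars, and new instances are judged by
# their distance from it, yielding graded typicality rather than strict
# in-or-out membership. All features and numbers are invented.

import math

def prototype(exemplars):
    """Compute the central tendency (mean vector) of a set of exemplar feature vectors."""
    n = len(exemplars)
    return [sum(values) / n for values in zip(*exemplars)]

def typicality(instance, proto):
    """Graded typicality: the closer to the prototype, the higher the score (0..1]."""
    return 1.0 / (1.0 + math.dist(instance, proto))

# Hypothetical exemplars of "pet", described by two made-up features:
# (furriness, companionability), each on a 0..1 scale.
pet_exemplars = [(0.9, 0.9), (0.8, 0.95), (0.7, 0.85)]
pet_prototype = prototype(pet_exemplars)

print(typicality((0.85, 0.9), pet_prototype))  # a dog: highly typical
print(typicality((0.1, 0.3), pet_prototype))   # a goldfish: falls less squarely under "pet"
```

Nothing hangs on the particular distance measure; the point is only that category membership comes in degrees, which is exactly what the classical necessary-and-sufficient-conditions picture cannot accommodate.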
Prototypes mesh well with the connectionist approach to information storage in the brain. Consider, for example, the work of Paul Churchland, who has argued that information in the brain takes the form of a state space representation of a “broadly connectionist type.” In short, prototypes are forged from real-world experience (and will be somewhat different for each of us) and serve as navigational beacons, even though we aren’t actually aware of the particular prototypes themselves, since they exist in the form of what seems to be a multilevel connectionist system occupying our extraordinarily complex three-pound brains.
“Connectionist networks” constitute one way of both implementing and acquiring representational space [of the type described by Paul Churchland’s work]. Such networks consist of a complex of units (simple processing elements) and connections. The connections may be positive or negative valued (excitatory or inhibitory). . . . Several layers of units may intervene between input and output. Clark describes each unit of these “hidden” layers as “one dimension of an acquired representational state space.” The great achievement of connectionism “is to have discovered a set of learning rules that enables such systems to find their own assignments of weights to connections.”
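For readers who, like me, find this easier to grasp with a toy example, here is a minimal sketch of such a network (my own illustration, not anything from the anthology): a handful of simple units, weighted excitatory and inhibitory connections, one hidden layer between input and output, and a standard learning rule (backpropagation) by which the system finds its own assignment of weights. The task, XOR, and all of the sizes are chosen purely for illustration.

```python
# Toy sketch of the kind of network Clark describes: simple processing units
# joined by weighted connections (positive = excitatory, negative = inhibitory),
# a layer of "hidden" units between input and output, and a learning rule
# (backpropagation here) that lets the system find its own weights.
# The task (XOR) and all sizes and constants are illustrative assumptions.

import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# 2 input units -> 3 hidden units -> 1 output unit, weights started at random.
w_hid = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
b_hid = [random.uniform(-1, 1) for _ in range(3)]
w_out = [random.uniform(-1, 1) for _ in range(3)]
b_out = random.uniform(-1, 1)

def forward(x):
    hidden = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
              for ws, b in zip(w_hid, b_hid)]
    output = sigmoid(sum(w * h for w, h in zip(w_out, hidden)) + b_out)
    return hidden, output

# XOR: a mapping that a network with no hidden layer cannot learn.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

rate = 0.5
for _ in range(10000):
    for x, target in examples:
        hidden, output = forward(x)
        # Error signal at the output unit, then propagated back to the hidden layer.
        d_out = (output - target) * output * (1 - output)
        d_hid = [d_out * w_out[j] * hidden[j] * (1 - hidden[j]) for j in range(3)]
        for j in range(3):
            w_out[j] -= rate * d_out * hidden[j]
            b_hid[j] -= rate * d_hid[j]
            for i in range(2):
                w_hid[j][i] -= rate * d_hid[j] * x[i]
        b_out -= rate * d_out

for x, target in examples:
    _, output = forward(x)
    print(x, "->", round(output, 2), "(target", target, ")")
```

The interesting point for Clark’s argument is that the “knowledge” the trained network ends up with lives in the pattern of connection weights, not in anything resembling a stored rule or sentence.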
As I read this work by Clark, I couldn’t help but think of all the superhero comic books I have read in my life, and I wondered whether those stories might have embedded themselves in the form of prototypes that serve as some of my personal moral beacons. But still, even assuming that we are generally guided by a prototype-driven pattern recognition system, what are the functions of moral rules?
Clark has several suggestions. First of all, Clark sees language as a “manipulative tool.” He is convinced that moral debate is not a matter of simply applying moral rules to the facts: moral debate does not work by attempting to trace out nomological-deductive arguments predicated on “linguaform axioms.”
But summary moral rules and linguistic exchanges may nonetheless serve as context-fixing descriptions that prompt others to activate certain stored prototypes in preference to others. [Thus], a moral debate may consist in the exchange of context fixers, some of which push us toward activation of an ‘invasion of privacy’ prototype while others prompt us to conceptualize the very same situation in terms of a ‘prevention of espionage’ prototype.
(Page 118) Thus, moral debate often takes the form of war, where our weapons are attempts to trigger certain prototypes residing in the other person while dampening others. “Moral rules and principles on this account are nothing more than one possible kind of context-fixing input among many. . . . Others could include well-chosen images or non-rule-invoking discourse. Thus understood, language simply provides one with a fast and flexible means of manipulating activity within already developed prototype spaces [within the mind of one’s opponent in a debate about morality].” Clark also writes (page 119) that the role of moral rules (including high-level policies) appears to be to “alter the focus of attention for subsequent inputs.” This idea of the function of moral rules very much reminds me of Jonathan Haidt’s description of a person as a lawyer riding on top of an elephant.
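Although Clark offers no algorithm, I find it helpful to picture the exchange of context fixers in a deliberately crude sketch of my own (every prototype, feature, and number below is invented): a context-fixing description changes how the same episode is represented, and that representation is then matched against stored prototypes by simple distance.

```python
# Illustrative sketch (not from the anthology): a "context fixer" changes how
# the very same episode is represented, and the representation is then matched
# against stored prototypes by nearest distance. Every prototype, feature, and
# number is invented for the example.

def closest_prototype(representation, prototypes):
    """Return the name of the stored prototype nearest to the current representation."""
    def distance(proto):
        return sum(abs(representation[f] - proto[f]) for f in proto)
    return min(prototypes, key=lambda name: distance(prototypes[name]))

prototypes = {
    "invasion of privacy":     {"surveillance": 1.0, "consent": 0.0, "threat_to_others": 0.1},
    "prevention of espionage": {"surveillance": 1.0, "consent": 0.0, "threat_to_others": 0.9},
}

# The same episode, conceptualized two different ways.
plain_description   = {"surveillance": 0.9, "consent": 0.1, "threat_to_others": 0.2}
after_context_fixer = {"surveillance": 0.9, "consent": 0.1, "threat_to_others": 0.9}

print(closest_prototype(plain_description, prototypes))    # -> invasion of privacy
print(closest_prototype(after_context_fixer, prototypes))  # -> prevention of espionage
```

The rule or maxim, on this picture, doesn’t decide the case; it merely nudges which stored prototype the hearer’s pattern-matching system settles on.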
Therefore, moral rules
may help us monitor the outputs of our online, morally reactive agencies. When such outputs depart from those demanded by such policies, we may be led to focus attention on such aspects of input vectors as might help us bring outputs back into line. Suppose we explicitly commit ourselves to an ideal of acting compassionately in all circumstances. We then see ourselves reacting with anger and frustration at the apparent ingratitude of a sick friend. By spotting the local divergence between our ideal and our current practice, we may be able to bias our own way of taking the person’s behavior–in effect, canceling out our representation of those aspects of the behavior rooted in their feelings of pain and impotence. To do so is to allow the natural operation of our onboard reactive agencies to conform more nearly to our guiding policy of compassion. The summary linguistic formulation, on this account, is a rough marker that we use to help monitor the behavior of our trained-up networks. (Page 119)
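Here is how I picture that monitoring role, in a deliberately simple sketch of my own (the “reactive network” is just a stand-in rule, and all features and thresholds are invented): an explicit policy of compassion is used to check the output of a fast reactive system, and when the two diverge, the portion of the perceived ingratitude attributable to the friend’s pain is discounted before the reactive system is consulted again.

```python
# A rough, invented sketch of the monitoring role Clark describes: an explicit
# policy ("compassion") checks the output of a fast reactive system; when they
# diverge, aspects of the input attributed to the friend's pain are discounted
# and the reactive system is run again. The "network" here is a stand-in rule,
# not a model of any real mechanism.

def reactive_response(features):
    """Stand-in for the trained-up network: reacts with anger to perceived ingratitude."""
    return "anger" if features["ingratitude"] > 0.5 else "compassion"

def monitored_response(features, policy="compassion"):
    response = reactive_response(features)
    if response != policy:
        # Re-take the situation: cancel the part of the "ingratitude" signal
        # that is really rooted in the friend's pain and impotence.
        adjusted = dict(features)
        adjusted["ingratitude"] = max(0.0, features["ingratitude"] - features["pain"])
        response = reactive_response(adjusted)
    return response

sick_friend = {"ingratitude": 0.8, "pain": 0.7}
print(reactive_response(sick_friend))   # -> anger (the unmonitored reaction)
print(monitored_response(sick_friend))  # -> compassion (after refocusing attention)
```

The summary rule does no moral work on its own; it functions as a checkpoint that redirects attention so the trained-up reactive system can do better.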
Clark also holds that moral language serves as a collaborative medium: “A procedure of multiple, cooperative perspective taking often allows groups of agents to solve problems that would otherwise defeat them.” Expressing ourselves in the form of moral rules allows competing perspectives to be explored as part of the interaction in an attempt to reach a consensus. Through our moral rules, the (prototype-driven) thinking and perspectives of individuals can serve as “objects” of group attention and hence of discussion. How else would we navigate the complex social state spaces we encounter as groups of individuals? Clark asserts that the role of linguistic exchange is “paramount.”
The attempts by each party to articulate the basic principles and moral maxims that inform their perspective provide the only real hope of a negotiated solution. Such principles and maxims have their home precisely there: in the attempt to lay out some rough guides and posts that constrain the space to be explored in the search for a cooperative solution. Of course, such summary rules and principles are themselves negotiable, but they provide the essential starting point of informed moral debate. Their role is to bootstrap us into a kind of simulation of the other’s perspectives, which is . . . the essential fodder of genuine collaborative problem-solving activity.
Clark adds, “It is perhaps unsurprising to learn that collaborative learning emerges at about the same developmental moment (age six or seven) as does so-called second-order mental state talk–talk about other people’s perspectives on your own and others’ mental states.” (Page 121)
Clark concludes by reminding us that moral expertise “cannot (for moral reasons) afford to be mute.” Further, moral rules and maxims are not merely tools “for the moral novice.” (Page 124). They enable us to refocus our precious limited attention and they invite public collaboration. Attempted collaboration is the best hope we have in a world of limited resources. It is certainly better than throwing rocks at each other.
The rules and maxims articulated along the way are not themselves the determinants of any solution, nor need we pretend that they reveal the rich structure and nuances of the moral visions of those who articulate them. What they do reveal is, at best, an expertise in constructing the kinds of guides and posts needed to orchestrate a practical solution sensitive to multiple needs and perspectives.