OPR, Ch. 3.7: The Evolution of Rule-Following Punishers

Social cooperation is good—we do better with it than without. But social cooperation depends upon trust—we need to be able to count on others being cooperative and disinclined to cheat, break the rules, take advantage of us, and so on. In the kinds of game-theoretic situations that best model society, cooperation and conformity to useful social rules will form a stable equilibrium provided people possess a strong enough conditional preference for following such rules, i.e., provided they prefer to cooperate with cooperators for its own sake, and provided they prefer for its own sake to follow rules when others follow rules.
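To make “strong enough” concrete, here is a minimal sketch, with toy payoff numbers of my own choosing rather than anything from Gaus: an agent who gets an extra payoff w from following a rule that others are also following will find compliance a best response once w outweighs the material temptation to defect.

```python
# A toy illustration (my own numbers, not Gaus's) of when a conditional preference
# for rule-following makes general compliance a stable equilibrium.

T = 5.0  # material payoff from defecting while everyone else follows the rules
R = 3.0  # material payoff from following the rules along with everyone else

def utility(follows_rule: bool, others_follow: bool, w: float) -> float:
    """Material payoff plus a bonus w for following a rule that others also follow.
    w is the strength of the agent's conditional preference for rule-following."""
    material = R if follows_rule else (T if others_follow else 1.0)  # 1.0: payoff when no one cooperates
    bonus = w if (follows_rule and others_follow) else 0.0
    return material + bonus

# Compliance is a best response to general compliance once w >= T - R.
for w in (0.0, 1.0, 3.0):
    comply, defect = utility(True, True, w), utility(False, True, w)
    print(f"w={w}: comply={comply}, defect={defect}, best response: {comply >= defect}")
```

On these made-up numbers, compliance becomes a best response to general compliance exactly when the conditional preference w is at least T - R, which is the sense in which the preference has to be “strong enough.”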

Gaus asks, “But how could rational individuals develop an independent ‘preference’ or reason to follow a rule?” (103)  He claims to have shown that individuals cannot reason themselves into being devoted to such rules, because such devotion might cause them to follow rules even when doing so does not best promote their values. (I am not convinced by Gaus’s arguments; I’ll say more on this below.)  We could just posit that people have a preference for following generally-followed rules, but this is unsatisfying, even if it turns out to be true. (Cf. some economists explain voter turnout—which seems irrational—by positing that voters just have a preference for voting, much like some people have a preference for playing golf. This is unsatisfying, even if true.)  The preference for conditional rule-following is widespread, so a satisfying account would explain why this is so, rather than leave it as a happy accident of human psychology. To explain this preference, Gaus turns to sociobiology, evolutionary psychology, and related fields.

People do not simply have a preference to cooperate and follow generally-followed social rules. They also have a preference for punishing defectors, even at their personal expense. For instance, consider the ultimatum game (see here: http://en.wikipedia.org/wiki/Ultimatum_game), in which the first player proposes how to divide a sum of money and the second player either accepts the proposed split or rejects it, in which case both players get nothing. If the second player had entirely non-tuistic preferences and were indifferent to social rules, we’d expect her to accept any positive offer. But, in fact, the second player tends to reject low offers from the first player, thus losing a potential monetary gain. One common explanation for this behavior, and for similar behaviors in related games, is that players prefer to punish bad behavior from other players, even at personal expense. (Some economists might be inclined to say that if a player prefers to punish defectors, then by definition punishing defectors is part of that player’s self-interest. I am assuming everyone here understands why that’s a mistake.) When Gaus turns to evolution to explain our preferences for cooperation, he will also explain why the preference to punish is widespread.
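To see how a taste for punishing can rationalize rejection, here is a small, hedged illustration; the utility function and numbers are my own toy assumptions (loosely in the spirit of inequity-aversion models), not anything Gaus commits to.

```python
# Toy model of the ultimatum-game responder; the penalty term and the numbers are
# illustrative assumptions of mine, not a model from OPR.

PIE = 10.0  # total amount the proposer is dividing

def accept_utility(offer: float, punitiveness: float) -> float:
    """Utility of accepting `offer`: the money received, minus a penalty that grows with
    how far the offer falls short of an even split. `punitiveness` scales how much the
    responder minds letting an unfair proposer get away with it."""
    shortfall = max(0.0, PIE / 2 - offer)
    return offer - punitiveness * shortfall

def accepts(offer: float, punitiveness: float) -> bool:
    # Rejecting leaves both players with nothing, i.e., utility 0 for the responder.
    return accept_utility(offer, punitiveness) >= 0.0

for offer in (1.0, 2.0, 4.0, 5.0):
    print(f"offer={offer}: money-only responder accepts: {accepts(offer, 0.0)}; "
          f"punishing responder accepts: {accepts(offer, 1.0)}")
```

A purely non-tuistic responder (punitiveness of zero) accepts every positive offer; a responder with even a modest taste for punishing unfairness turns down the low ones, at real monetary cost to herself.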

Gaus proceeds to summarize a wide range of work on the evolution of cooperation. In particular, he explains why evolution will tend to converge on 1) agents who follow rules and punish defectors, rather than 2) agents who follow rules but never punish defectors, or 3) agents who always defect. I won’t summarize these findings here.
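Without reproducing those findings, a crude sketch can convey the basic logic. The payoffs below are invented for illustration and are not drawn from any model Gaus discusses: a population of non-punishing cooperators is easily invaded by defectors, while a population of rule-following punishers is not, because a rare defector among punishers does worse than the punishers themselves.

```python
# A toy invasion check with made-up payoffs, not the specific models Gaus surveys.
# Strategies: 'P' = rule-following punisher, 'C' = cooperator who never punishes,
# 'D' = defector.

b, c = 3.0, 1.0        # benefit of interacting with a cooperator; cost of cooperating
fine, cost = 4.0, 0.5  # fine a punisher imposes on a defector; cost of punishing

def payoff(strategy: str, resident: str, eps: float = 0.01) -> float:
    """Expected payoff of `strategy` in a population of `resident` types plus a small
    share eps of defector mutants."""
    coop_share = (1.0 - eps) if resident in ('P', 'C') else 0.0
    punisher_share = (1.0 - eps) if resident == 'P' else 0.0
    base = b * coop_share                    # everyone benefits from meeting cooperators
    if strategy == 'D':
        return base - fine * punisher_share  # defectors skip the cost c but get fined by punishers
    pay = base - c                           # cooperators and punishers pay to cooperate
    if strategy == 'P':
        pay -= cost * eps                    # punishers also pay to punish the rare defectors
    return pay

for resident in ('C', 'P'):
    gain = payoff('D', resident) - payoff(resident, resident)
    print(f"resident population of {resident}: defector mutant's advantage = {gain:+.2f} "
          f"-> invasion {'succeeds' if gain > 0 else 'fails'}")
```

Of course, this leaves out the second-order problem (why be a punisher rather than a cooperator who free-rides on others’ punishing?), which is exactly where the richer models Gaus summarizes do their work; the sketch gives the gist, not the argument.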

Gaus stresses that rules are important to this evolutionary story. We evolved as rule-following punishers, not just “cooperating punishers” or “altruistic punishers”, as Gaus puts it. (113)  In the evolution of cooperation, it’s important to minimize the cost of punishment. “To punish ill-defined ‘uncooperative’ behavior would result in a great deal of mistaken punishment,” which would make both punishment and cooperation more costly. (113) Rules have utility. I’ll invoke here a familiar analogy to speed limits. There’s no bright red line between too fast and too slow, but it’s useful for the law to create an artificial bright red line. Signs that say, “Speed Limit: 35 mph” help us live together more efficiently and easily than signs that say, “Speed Limit: Take proper care given your car, your skill, the driving conditions, the skill of other drivers, and so on,” even though the latter signs are in a sense closer to the truth. This holds for moral rules, not just the law. For morality to serve its function—and recall that Gaus does think morality has a function—it needs to draw some bright red lines.

Commentary and questions:

1.  What kind of justificatory force do these evolutionary stories have, if any?  On one hand, they help to show that morality does tend to have a certain function, and they make it clear that some of our moral preferences are not mere accidents.  (Keep in mind that it’s important for Gaus that morality serve a function.) We can see that these preferences have a point, serve ends that we approve of, and help solve human problems. So, perhaps such stories help morality pass what Christine Korsgaard calls the test of “reflective endorsement.” We reflectively endorse our moral judgments when, upon learning the causal grounds of these judgments, we continue to hold them and are in some sense glad to do so.  In contrast, morality fails the test when learning the source of these judgments alienates us from them, either by making us no longer have them or by making us wish we didn’t have them.

2.  Still, showing that a system of moral rules is useful is not quite enough to reconcile morality with our self-interest. My self-interest and goals are better served in a world where everyone is a rule-following punisher than in a world where everyone is a defector. However, even if, thanks to evolution, I have preferences for cooperating and punishing, I might still wonder whether it would be better for me if I, and only I, lacked such preferences but were really good at faking it, making it seem as if I had them.

David Schmidtz says, “A satisfying account of our reasons for altruism will not take our other-regarding preferences as given. Neither is it enough to offer a purely descriptive account of concern and respect—a biological or psychological or sociological account of what causes us to develop concern and respect for others. Biology and psychology are relevant, but they are not enough. We want an account according to which it is rational for us to have other-regarding preferences in the first place.” (Schmidtz, “Reasons for Altruism,” Person, Polis, Planet (New York: OUP, 2008), 64.)

I think Gaus’s argument here would be greatly supplemented by borrowing from his colleague David Schmidtz (full disclosure: also my mentor and one-time co-author). Schmidtz argues in Rational Choice and Moral Agency and elsewhere that developing altruistic preferences, developing moral preferences, and learning to devote oneself to projects beyond oneself all tend to be good bets. One way of putting Schmidtz’s point: Suppose you were a purely selfish being concerned only with your own happiness and survival. Now suppose you are given the option of taking a pill that will imbue you with altruistic and moral preferences. Given your current preferences, should you take the pill? Let’s put aside Gauthierian worries about punishment and reputation (i.e., let’s forget about arguments to the effect that you should take the pill so people will trust you and cooperate with you more).  Schmidtz points out that self-regard is fragile. People can and do stop caring about themselves.  People who care about others, who care about morality, and who can devote themselves to projects and causes tend to be happier, to have more to live for, and to have more stable self-regard. And so, Schmidtz argues, for most people, taking the pill would be a good choice—it would have high expected utility.

(Note that Schmidtz isn’t talking about developing a disposition toward constrained maximization. Rather, he’s talking about why acquiring altruistic and self-transcendent preferences can be in a selfish person’s self-interest. I.e., it can be in your self-interest to transform yourself into a being with different preferences.)

I agree with Schmidtz that evolutionary stories are not enough, and so I think this kind of argument would augment Gaus’s story.  Gaus might be skeptical of this kind of argument (see p. 103, and chapter II), but I didn’t see anything in chapter II that undermined Schmidtz’s argument.


3 Responses to OPR, Ch. 3.7: The Evolution of Rule-Following Punishers

  1. Mats Volberg says:

    I would like to raise a question similar to your first question. It was left unclear to me whether these evolutionary stories and this evidence are supposed to be descriptive or normative. Gaus seems to acknowledge that communities do have this norm of rule following (page 104); thus one could ask how it happened that rule following became the norm, but one could also ask why we should keep this norm.

    It seems as if Gaus is offering a descriptive account (“although rationality cannot explain this uniquely human characteristic, an evolutionary account can do so,” page 104). But that would not be very interesting or helpful, since I assume that he eventually wants to offer a normative claim about why we ought to follow rules.

    Have I not read carefully enough and missed some passage where he explains all this? Or what?

  2. Jerry will speak to the is-ought worry, but just to say a bit about it here:

    Those of us on the reading group list who are at Arizona have found that Jerry is skeptical of the strong distinction philosophers draw between “oughts” and “is’s”. He’s basically got a quasi-Hegelian theory of normativity on which history gives us our starting point for normative evaluation, and only then can we develop a critical method of evaluation for the practices we have. We simply cannot get normative traction on our system of norms as a whole, but only piecemeal, taking our present practices as the point of departure. It’s a bit like the idea of critical justification in Rex Martin’s A System of Rights. Jerry’s Hayekianism also shines through here because he thinks that we cannot rationally reconstruct the rationale for all of our practices and that, as a result, the public justification principle is a means of testing certain parts of our practices and amending or strengthening them.

  3. I have found Jerry’s scepticism of contemporary distinctions like those mentioned above refreshing. And I agree with him that rules or norms are what we need to be looking at. But at the end of the day — in the next chs. — we’ll be expecting a story as to why we have reasons for action, and reasons of the right kind, with respect to these rules. In his criticisms of Gauthier et al., Jerry has ruled out making use of certain strategies we might use to make sense of the reasons rules give us.
