On Gaus section 19
I should preface these remarks with the proviso that I am simply a guest blogger for this section, filling in for someone who dropped out, and have been unable to follow the earlier discussion in the online reading group. For that reason, I am not as intimately familiar with the rest of the book as most of the other participants, so I fear that my remarks will reflect my poor grasp of the overall architecture of this most intriguing, but often forbidding, book. I also apologize in advance if I raise any issues that have already been thoroughly hashed out in earlier discussion.
As I read it, section 19 attempts to lay down one of the foundation stones for GG’s larger effort to reconcile two apparently opposed ways of thinking about the authority of moral rules:
(1) the ‘instrumentalist’ (Hobbesian, Humean, Gauthierian) view that ‘social morality is necessary for human cooperation and social life’ and
(2) the ‘deontological’ (Rousseau, Kant, Strawson, Rawls, Darwall) view that moral requirements are irreducibly constituted by relations among agents who recognize their mutual standing as free and equal persons.
Earlier in the book, GG has said that ‘both are correct’ (193): social morality is both a ‘device’ of social coordination and a body of rules deriving its authority from its consistency with respect for the freedom and equality of all agents. In this section, GG starts to explain how they are integrated and, moreover, why both are necessary. According to GG, taking (1) more seriously than contemporary Kantians often do is the key to overcoming the threat of ‘indeterminacy’ that hangs over the public reason idea.
The ‘indeterminacy’ involved arises because there are, in principle, many alternative sets of moral rules that are consistent with the ‘rights of agency’ and the ‘abstract’ idea of ‘jurisdictional rights’ GG has defended in earlier chapters. Even if these general entitlements can be publicly justified, agents must still settle on a scheme of rules that all can regard as having the requisite moral authority. Without such a settlement, it will be impossible to reach agreement on how exactly the more general entitlements of ‘free and equal’ persons should be interpreted in particular cases. Each of the more specific schemes of moral expectations that could play this role is publicly justified, yet so far no one has sufficient reason to accept any of them as uniquely publicly justified.
One is tempted to suggest here that a uniquely publicly justified scheme can be identified only if it is selected by a collective decision rule that is itself publicly justified. The main point of this section is to deny that this ‘Procedural Justification Requirement’ (392) is necessary. This is good news, according to GG, because he appears to believe that that requirement is impossible to satisfy without resorting to highly artificial – and hence reasonably rejectable – redescriptions of the choice situation (as with Rawls’s Original Position).
In the body of the section, GG attempts to explain how it is possible for a uniquely justified set of social/moral rules to emerge automatically through interaction between agents who are at all times acting only on reasons that reflect their own commitments. To establish the possibility of such a solution, GG relies on a series of game-theoretic coordination models. These are intended to illustrate how the bare, even random, fact of convergence (within iterated interaction) on one of a pair of alternative moral schemes can (1) be an equilibrium solution and (2), in large N-person cases, generate a bandwagon effect. As a result of iterated interaction, players in these games find themselves in situations in which they acquire sufficient reason to accept schemes of rules just because others have already opted for them; as more and more do so, we reach a point at which everyone has sufficient reason to go along with the option around which convergence is occurring.
Three features of the models deserve stress: first, the effect they illustrate holds even if players’ initial preferences, before interaction occurs, were for schemes of rules other than those that eventually triumph. While I may have initially preferred some scheme of rules x, an interactive convergence around y can become so powerful a consideration that holding out for x becomes self-evidently unreasonable (from within my own evaluative point of view).
Second, at no point is any collective decision rule mobilized: the solution is an emergent property of independent interactive activity. This is important because GG is at pains to deny that these games are in any way formally analogous to the Procedural Justification Requirement. Thus it is not the case, for GG, that the role played by the elements of randomness that enter into his games reflects any collective decision to let the outcome be determined by a random procedure. The trajectory of the interaction within the games is not itself ‘publicly justified’, yet the outcome, because it derives from the interaction of the players themselves, is one that they have sufficient reason to support given their own evaluative standpoints. The process thus selects from within the ‘eligible’ set of publicly justifiable sets of rules without relying on any collective choice rule.
Third, the models are neither predictive nor descriptive — they do not purport to capture some actual process of moral evolution (the effort to map the analysis more closely onto cases of actual moral evolution will presumably come later). Rather, they illustrate the logical possibility of a solution that obviates the Procedural Justification Requirement.
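For readers who find a toy model helpful, here is a minimal sketch of the bandwagon mechanism. It is my own illustration, not one of GG's models: agents repeatedly play best responses in a two-rule coordination setting, where each agent's payoff combines her own evaluative valuation of a rule with a conformity term that grows with the share of others already acting on it. The agent count, the conformity weight `W`, and the evenly spaced evaluative standpoints are all assumptions of mine chosen to make the dynamics easy to follow.

```python
# Toy bandwagon model (my illustration, not GG's own formalism).
# intrinsic[i] > 0 means agent i's own standards favour rule x over rule y;
# the conformity term rewards acting on the rule others are already acting on.

N = 101   # odd, so the initial split cannot be a perfect tie (assumed value)
W = 10.0  # weight on coordinating with others (assumed value)

# Evenly spaced evaluative standpoints from -1 (strongly pro-y) to +1 (pro-x)
intrinsic = [2 * i / (N - 1) - 1 for i in range(N)]

# Round 0: everyone acts on the rule her own standards favour (ties go to x),
# so x begins with a bare one-agent edge: 51 agents to 50.
choice = ['x' if v >= 0 else 'y' for v in intrinsic]

def payoff(v, rule, share_x):
    """Own-standards term plus a conformity term for matching the population."""
    if rule == 'x':
        return v + W * share_x
    return -v + W * (1 - share_x)

history = []
for _ in range(20):  # synchronous best-response rounds
    share_x = choice.count('x') / N
    history.append(share_x)
    choice = ['x' if payoff(v, 'x', share_x) >= payoff(v, 'y', share_x) else 'y'
              for v in intrinsic]

# Agents who intrinsically preferred y but end up acting on x
converts = sum(1 for v, c in zip(intrinsic, choice) if v < 0 and c == 'x')
print(history[:5])   # the one-agent edge snowballs round by round
print(choice.count('x'), converts)
```

With these (assumed) parameters the initial one-agent edge cascades to unanimity on x within a few rounds, and all fifty agents whose own standards favoured y go along, which is just the pair of points GG's models are meant to illustrate: convergence is an equilibrium, and in large groups it feeds on itself.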
On the strength of these examples, GG derives a striking (and doubtless controversial) conclusion. He says that they exemplify cases in which, contrary to many criticisms of classical social contract theory, the ‘consent of an overwhelming majority does have the power to bind all the rest’ (403). This immediately raises worries about fairness, to which GG responds in the final part of the section.
The way in which GG uses these examples also illuminates the way in which he proposes to integrate resources from the instrumentalist and deontological views I mentioned at the outset. GG mobilizes the sorts of considerations we would often associate with the Humean tradition (taking seriously the idea that rules and social conventions derive some of their normative force from the way in which they promote cooperation and coordination and come to be understood as such over time) to overcome the indeterminacy he claims more strictly Kantian approaches leave unaddressed.
1. How far does establishing the logical possibility of such a solution on the basis of the sort of game-theoretic models mobilized here really get us? I am not sure that it is newsworthy that groups of agents can interact such that the indeterminacy of public reason is overcome by means of independent coordination: has anyone denied this? Surely most of the controversy is about whether (and under what actual conditions) we can count on it happening. Moreover, the coordination games are highly artificial. I worry, in particular, about the fact that they all involve pairwise comparisons of only two alternative resolutions. Can models that are so restricted adequately capture the conditions likely to arise once public reasoners start to ‘interact’ in the ‘area of social life’ they were deliberating about when they abandon the effort to resolve the issue collectively (393)? Wouldn’t ‘evaluative diversity’ produce a much wider range of alternatives, and wouldn’t more realistic models in which players express preferences over more than two alternatives be less effective in overcoming indeterminacy?
2. A related concern emerges when we recall the central comparison around which this section pivots: that between GG’s game-theoretic interactions and the sorts of ‘collective’ choice modeled by such thought-experiments as Rawls’s Original Position. GG is right, I think, to see a formal difference between these two arguments in that the latter requires a genuine collective decision whereas the former does not. But while (in this respect) the players in GG’s games are doing something different from (say) individuals in the Original Position, the relation between us, the readers of GG’s and John Rawls’s books, and the idealized persons populating both sets of thought-experiments does not seem very different. In both cases, a philosopher is offering us a highly stylized set of thought-experiments to illustrate how indeterminacies that might otherwise plague us can be resolved. GG complains that Rawls’s procedure resolves them by ‘bracketing’ or ‘eliminating’ evaluative diversity (392, 393), so that ‘the reasoning of Members of the Public is identical’ (392). But while it may not go quite that far, GG’s procedure is illustrated by means of a thought-experiment in which agents choose between only two alternatives; I am not sure why this significantly improves on Rawls’s effort to take ‘evaluative diversity’ seriously (particularly when Rawls, of course, offers several arguments purporting to show that we can meet the Procedural Justification Requirement by means of an Original Position argument). Rawls may be wrong here, but when all this is viewed in context, the intended contrast between GG’s proposal and its Rawlsian competitor becomes, to my mind, less than decisive.
3. There is a bothersome ambiguity, worth probing, that runs through much of the analysis, between:
‘P has sufficient reasons to adopt moral rule x when selecting among alternative possible moral rules by whose authority a society might be ordered’ and
‘Q has sufficient reasons to act on rule x once it has been selected as a uniquely justified authoritative moral rule’
(The slippage is particularly clear in the last para on the bottom of p. 392, but it crops up elsewhere as well, e.g. 402-3)
The difference between these may seem slight, but it is important. In adopting a moral rule from a pool of candidate rules each of which could become authoritative for everyone, but is not yet, I endorse that rule, rather than its competitors, as worthy to serve as morally authoritative. In acting on a rule purporting to have authority over a person’s actions (or criticizing, resenting, or being indignant at someone who is failing to act on it when they should), I presuppose that it has that authority (otherwise I wouldn’t be acting on the rule, or criticizing those who violate it, but merely acting in conformity with it, perhaps for some other set of reasons – convenience, prudence, etc.). GG’s coordination games seem mainly to be modeling the former; they show that I can have sufficient reasons to endorse a rule as authoritative when a sufficient level of social convergence naturally develops around that rule. But often he seems to infer from this that I therefore always have sufficient reasons to act on that rule when I am subsequently faced with a decision falling under it. I am not sure this follows.
I suspect GG thinks it follows because he assumes that the ‘social convergence’ phenomenon includes the ‘internalization’ of the selected moral rules. By endorsing the rule as authoritative, I (as it were) automatically will that I internalize that rule, such that I accept that I always have sufficient reason to act on it and can anticipate that any dissonance between my ‘evaluative standpoint’ and the demands of the internalized rule will be within tolerable limits. But this assumption strikes me as illicit and potentially question-begging: what there is convergence upon in the interactive games GG describes is the idea that rule X is morally authoritative within a society. There is not yet, or necessarily, convergence upon the idea that members of that society always have sufficient reasons to act on X. And surely it is exactly at this point that the problem of reconciling the authority of moral rules with diverse ‘evaluative standpoints’ emerges. For, knowing that some alternative set of moral rules might have been selected from the eligible set, and that some of those alternatives better represent my original ‘evaluative standpoint’, I might well conclude that I do not have sufficient reasons to act on a rule demanding that I act contrary to my evaluative standpoint in some particular case. This remains true, I think, even if that rule is one that I would have had sufficient reasons to adopt as authoritative under the conditions represented in GG’s coordination games.
4. (Probably the same point put differently) I am not sure it makes sense to speak of having ‘reasons to internalize’ (400) a rule, as if the decision to internalize or not internalize is some sort of action one is free to perform or not perform. As GG says on 409, ‘Normative ethical theory is not, as it were, constructed from scratch. … We have been formed in a moral order, our standards reflect such a moral order’. ‘Internalization’ seems to involve something like this, but it seems very odd to say that this is the sort of thing that one can have reason to ‘do’; rather it is something that happens to one in the course of socialization. To be sure, we can have (a) sufficient reasons to select certain alternative schemes of rules in the sorts of coordination games GG asks us to imagine in this section, knowing that we will subsequently internalize them through being ‘formed in a moral order’; and (b) sufficient reasons to act on or choose not to act on those rules in specific cases. But again, I am skeptical that (a)-type reasons can be equated with (b)-type reasons. I worry that speaking of ‘reasons to internalize’ smudges the contrast between them, and allows GG to jump too quickly from the claim that ‘I have sufficient reason to select X as a rule all should internalize’ to the conclusion that hence ‘I always have sufficient reason to act on X’.
5. The text leaves me confused about the exact mechanisms within the analysis that make it the case that I can be bound by authoritative moral rules just because an overwhelming majority has committed to such rules. In particular, there is, I think, no very clear account of the basis for GG’s ‘increasing returns’ argument (398ff). Often, this is because it is buried within the technical details of the coordination models (e.g. pp. 396-7). But it is also because it seems to turn on several assumptions that are introduced in an ad hoc and unmotivated way. For example, GG stipulates (pp. 398-9) that each Member of the Public ‘has’ ‘two distinct morality-related desiderata: (i) to act on the moral requirement that best satisfies her evaluative standards and (ii) to act on moral requirements that are embraced by others, so that in her interactions she can make moral demands that respect their equality and moral freedom’. Apart from the fact that I don’t know what it means for a person to ‘have’ a ‘desideratum’ to ‘act’ (is it here equivalent to ‘having a desire to act on …’? ‘Expecting that agents act on…’? ‘Being moved to act on…’?), I don’t know where (ii) comes from. It is plainly doing quite a bit of work in the ‘increasing returns’ argument, but it is not explained or motivated. Perhaps GG defends this feature of Members of the Public earlier in the book?
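To make my puzzlement about (ii) concrete, here is a toy rendering of the two desiderata. This is my reconstruction, not GG's formalism: I read (i) as a fixed personal valuation of a rule and (ii) as a term increasing in the share of others already embracing it, with a weight `w` (my assumption, not the text's) balancing the two. Once (ii) is granted in this form, the increasing-returns logic follows straightforwardly; what the sketch does not supply, and what I am asking after, is the motivation for granting it.

```python
# Toy rendering of the two 'morality-related desiderata' (pp. 398-9).
# My reconstruction, not GG's own model: desideratum (i) is a fixed personal
# valuation; desideratum (ii) grows with the share of others embracing the
# rule; 'w' (an assumed parameter) weights (ii) against (i).

def utility(own_value, share_embracing, w=1.0):
    """Desideratum (i) plus w times desideratum (ii)."""
    return own_value + w * share_embracing

def switch_point(gap, w):
    """Share of others acting on rule x at which an agent whose own standards
    favour y by 'gap' nevertheless does better acting on x than on y.

    Acting on x beats acting on y when
        -gap + w * s > 0 + w * (1 - s)   =>   s > 1/2 + gap / (2 * w).
    """
    return 0.5 + gap / (2 * w)

# The heavier desideratum (ii) weighs, the smaller the majority needed to
# pull a dissenter along -- the 'increasing returns' to early convergence.
for w in (1.0, 2.0, 4.0):
    print(w, switch_point(gap=0.5, w=w))
```

On this (assumed) reading, an agent whose standards favour y by 0.5 is pulled to x by a 75% majority when w = 1, but by a bare 56% majority when w = 4; everything therefore hangs on why Members of the Public should be credited with desideratum (ii), and with what weight, which is just the point I find unexplained.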