A Model of Cooperation and the Production of Social Norms

Suppose that a firm seeks an employee for a one-year term, and anticipates that the employee will receive training for the first six months and use his skills to generate profits for the firm during the last six months. During the period of training, the firm will suffer a net loss that it hopes to recover during the period of work. The firm’s problem is that if it pays the worker a monthly wage, the worker might quit just when the training is completed, and take himself and his skills to another firm that is willing to pay the higher wage that can be demanded by a skilled worker. The first firm would then either have to hire another, already trained, worker at that higher wage, or pay the existing worker a premium not to leave, in either case losing the return on its investment. Anticipating this problem, the firm does not hire the worker.

Alternatively, the firm might offer to hire the worker on the condition that he receive his entire compensation at the end of the term. Under this arrangement, the worker would not quit after receiving training but before doing work, because then the firm would refuse to compensate him. But the worker might not agree to this arrangement, because he has daily expenses that must be paid from his salary and, moreover, he does not trust the firm to keep its end of the bargain. Instead, the worker might simply promise not to quit prior to the end of his term. But the firm has no reason to believe that the worker will keep his promise. To be sure, many people are honest and will keep their word, but many people are dishonest or adept at rationalizing their actions. Because the firm does not know whether the worker is honest or not, it declines to hire him. A third alternative is for the worker to sign a contract in which he promises not to quit. But the firm knows that courts usually refuse to force people to work by threatening them with jail or other punishments and that, although the firm might be entitled to damages from the worker if the worker quits, it is likely that the worker will be judgment-proof and that the costs of legal proceedings will exceed the damages. The offer to sign a contract therefore would not persuade the firm to hire the worker.

The worker might make the following argument. If he quits at the end of six months, he will do so only in order to obtain a better paying job from another firm. The second firm, however, is likely to know that the worker breached his contract with the first firm. The second firm thus may believe that the worker is unreliable, a person who does not keep his promises. Because the second firm does not want to entrust its valuable assets to a worker who is unreliable, it would not employ such a worker. But if the worker cannot obtain a higher-paying job from a second firm, then he will not breach his contract with the first firm. If the first firm is persuaded by this logic, which depends on, among other things, a network of communication among firms, it will hire the worker—and they may even forgo the formalities of a contract, since they do not depend on the courts to enforce the agreement.

This story contains several familiar lessons. First, it illustrates an old lesson about the problem of cooperation. Even though two people would obtain mutual gains by cooperating, they will not be able to cooperate if they must act simultaneously. This problem is called the prisoner’s dilemma or the collective action problem. The story actually illustrates a one-sided version of this problem: the firm and the worker would obtain gains if they cooperate, but because the worker does even better by cheating after the firm commits itself, the firm will not commit itself in the first place.

Second, the story suggests but also casts doubt on a possible solution to the problem of collective action. This solution is the government. If Person 1 and Person 2 enter a contract and expect that the state will sufficiently punish anyone who breaks the contract, then the expected cost of cheating will be higher than the expected cost of cooperation. The parties will cooperate; the prisoner’s dilemma is solved. But, as the story shows, legal proceedings are costly and clumsy, so people cannot rely on the law to solve day-to-day cooperative problems. Indeed, that people do not rely on the law to solve day-to-day cooperative problems is clear from both formal research (for example, Macaulay 1963, Ellickson 1991, Bernstein 1992) and casual empiricism. But if the government does not solve these problems, what does?

Third, the story suggests a tentative answer to this question. Sometimes people keep their promises not because they fear being sued, but because they fear developing a bad reputation. If they develop a bad reputation, it will be harder for them to find work and to obtain other good things in the future. So if they value future gains sufficiently, they will be deterred from cheating in the present.

Fourth, the story brings out the problem of character and private information. If the worker is honest, the firm may be able to trust him, but the firm has no way of confirming the worker’s claim that he is honest, and the worker has no way of reliably signaling his honesty to the firm. Other kinds of private information, including the extent to which the worker cares about his well-being in the future, as opposed to the present, raise similar problems of verification and signaling. Although the firm and the worker in our story end up working around these problems, the pressure to signal that one has a desirable character leads in other contexts to conformist behavior and the generation of social norms.

Consider some ways in which the worker can reduce the chances that he will be hired by the firm. If he shows up at the interview wearing dirty or inappropriate clothes; if he speaks in an impolite or disrespectful or insufficiently deferential way; if he includes in his resume falsehoods, even trivial falsehoods, that are detected; if he expresses weird opinions even if unrelated to the position, or if he expresses any opinions that are not germane to the position; if he discloses his membership in a group that espouses idiosyncratic beliefs—if he does any of these things, he will reduce his chances of success despite the tenuousness of the connection between these actions and the requirements of the position for which he applies. Failure to conform to relevant social norms raises suspicions about one’s character and reliability in relationships of trust, even when there is no direct relationship between the deviant behavior and the requirements of the job.

The strategic difficulty faced by the firm and the worker is an instance of the general problem of cooperation. The problem is that although cooperation produces mutual gains for those who participate, in the absence of an enforcement mechanism cooperation will not be possible. I use the term “enforcement mechanism” in a special sense. A person who is rational in the economist’s most stripped-down version of that term is a person who has complete, consistent, and transitive preferences over a range of options. In certain circumstances that person cannot cooperate with other people. When one relaxes some of the assumptions about human motivation, or one fills in some institutional detail, cooperation becomes possible. When it does, the new characteristics of motivation or details of institution that enable this cooperation constitute the “enforcement mechanism.”

In order to be perfectly clear about this point, let me return to the prisoner’s dilemma. The following table provides the payoffs of the game, played by Row and Column, each of whom can choose between two strategies, Cooperate and Defect.

Figure 1

                Cooperate      Defect
    Cooperate     2, 2          0, 3
    Defect        3, 0          1, 1

Row thinks in the following way: “Suppose Column plans to cooperate. Then if I cooperate I obtain 2, whereas if I defect, I obtain 3. Suppose Column plans to defect. Then if I cooperate I obtain 0, whereas if I defect, I obtain 1. So whether or not Column cooperates, I always do better if I defect. Therefore, I will defect.” Column reasons similarly, and both players defect. Each would do better if both did not defect, but the optimal outcome does not prevail. The conclusion can be extended to cases involving an indefinitely large number of people who try to cooperate for the purpose of obtaining mutual gains—for example, a family or a union or a political party.
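The dominance reasoning can be restated in a few lines of code. The following sketch is offered only as an illustration of the table above; it encodes the payoffs of Figure 1 and checks that defection is Row’s best response to either of Column’s moves.

```python
# A minimal sketch (not from the original text) of the dominance argument,
# using the payoffs from Figure 1: each cell maps (Row move, Column move)
# to Row's payoff.
PAYOFFS = {
    ("C", "C"): 2, ("C", "D"): 0,
    ("D", "C"): 3, ("D", "D"): 1,
}

def best_response_for_row(column_move):
    """Row's payoff-maximizing move, holding Column's move fixed."""
    return max(("C", "D"), key=lambda row_move: PAYOFFS[(row_move, column_move)])

# Defection is Row's best response whether Column cooperates or defects,
# so it is a dominant strategy; by symmetry, Column defects too.
assert best_response_for_row("C") == "D"   # 3 > 2
assert best_response_for_row("D") == "D"   # 1 > 0
print("Both defect; each receives", PAYOFFS[("D", "D")], "instead of", PAYOFFS[("C", "C")])
```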

The significance of the prisoner’s dilemma has been obscured by some quarrels about the extent of its applicability. First, experiments and intuition suggest that people do not invariably jointly defect in prisoner’s dilemmas (Kagel and Roth 1995). But experiments and intuition also suggest that people do not invariably cooperate in prisoner’s dilemmas. The point of this thought experiment is to lay bare the incentives that interfere with the perfect cooperation that would turn our flawed world into a utopian one. Having laid bare those incentives, one can proceed to investigate the mechanisms that enable cooperation where it would not otherwise exist.

Second, it is sometimes argued that the highly stylized conditions of the prisoner’s dilemma rarely exist in the world. Notice that the terms of the thought experiment forbid the players to communicate in advance or to interact after the game ends, but in reality these conditions are generally violated. Some commentators argue that coordination games (more on this later) are more common than prisoner’s dilemmas. But every time one wonders whether one can get away with not reporting income for tax purposes, littering on the street, smoking in a crowded room, driving after a few glasses of wine, gossiping behind the back of a friend, betraying an institution, or breaking a promise, one feels the pull of the logic behind the prisoner’s dilemma. That one may not do some or all of these things is due to the existence of enforcement mechanisms, to which I now turn my attention.

Some enforcement mechanisms are institutional and others are psychological or physiological. The latter will be discussed in the next chapter. Institutional enforcement mechanisms can be divided into legal and nonlegal mechanisms. The law encourages cooperation in many ways. For example, it allows people to enter contracts and obtain damages from anyone who breaches a contract. Suppose Row and Column enter a contract. If Row subsequently defects, the state will penalize him with a sanction of 2. Now Row does better if he cooperates than if he defects, regardless of whether Column cooperates. If Column faces the same set of incentives, both players cooperate and the optimal outcome is achieved.
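As an illustrative sketch (the sanction of 2 and the payoffs come from the text; the code itself does not), one can check that the legal penalty makes cooperation Row’s dominant strategy.

```python
# Illustrative sketch: the payoff table from Figure 1, but the state now
# subtracts a sanction of 2 from any player who defects.
PAYOFFS = {("C", "C"): 2, ("C", "D"): 0, ("D", "C"): 3, ("D", "D"): 1}
SANCTION = 2

def row_payoff(row_move, column_move):
    """Row's payoff once defection triggers the legal sanction."""
    base = PAYOFFS[(row_move, column_move)]
    return base - SANCTION if row_move == "D" else base

# Cooperation now dominates: against a cooperator, 2 > 3 - 2 = 1;
# against a defector, 0 > 1 - 2 = -1.
for column_move in ("C", "D"):
    assert row_payoff("C", column_move) > row_payoff("D", column_move)
```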

This story is too easy for the reasons suggested earlier. The government is a clumsy tool. Police officers, prosecutors, judges, and juries generally can obtain only a crude, third-hand account of events. Lawsuits are expensive. If the court system cannot distinguish cooperation from defection with any accuracy, and it is costly to use, people will not rely on it for ensuring cooperation. Indeed, most people do not know much about the law, do not allow what they do know about it to influence much in their relations with other people, and do not sue each other when they have disputes.

An explanation for nonlegal cooperation begins with the observation that people who defect suffer injury to their reputations. If a person develops a bad reputation, then people will not cooperate with him in the future. Since cooperation is valuable, if a person cares enough about the future he will not defect in the present. But what is a reputation, and how does it get created?

The Repeated Game

Suppose that Row and Column expect to play the game in Figure 1 for an indefinitely long period of time. By this, I mean that there is a low probability at any round that an exogenous shift in circumstances would cause Row or Column to prefer an outside opportunity to the payoff from continued cooperation. Row and Column might, for example, be a buyer and a seller who have jointly invested in assets specific to the relationship, and expect that in the normal run of things the buyer will not stop making orders. Suppose that each player expects that if he cheats in one round, the other player will respond by cheating in the following round. Then as long as each player cares enough about his payoffs in future rounds—that is, he has a low discount rate—he will cooperate rather than defect in each round. As a quick example, suppose that Row’s discount rate is 0. If he cheats in round 1, then Column will defect in round 2. If Row cheats in round 2, Column will defect in round 3 as well. Suppose, then, Row cheats in three rounds: he will obtain a total payoff of 5 (3 + 1 + 1). If he cheats in the first round, cooperates in the second, and cooperates in the third, he obtains a total payoff of 5 (3 + 0 + 2). If he cooperates in all three rounds, he obtains a total payoff of 6 (2 + 2 + 2), so cooperation is the best move. To be sure, if he cooperates in the first two rounds and cheats in the third, he will obtain a payoff of 7, but then one must extend the analysis to later rounds, and continuing with the same logic shows that the optimal move is always to cooperate. The result can be seen most easily by comparing the stream of payoffs that Row or Column would expect. If he cheats on the first round and thereafter, he can expect to receive {3, 1, 1, 1, . . . }; if he cooperates on the first round, then at best he can expect to receive {2, 2, 2, 2, . . . }.
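The arithmetic in the example can be reproduced with a short sketch. The assumptions follow the text: the payoffs of Figure 1, a discount rate of 0, and an opponent who punishes a cheat by defecting in the following round; the function name is of course only illustrative.

```python
# Illustrative sketch of the three-round comparison: Column cooperates until
# Row cheats, then defects in the next round (Figure 1 payoffs, no discounting).
PAYOFFS = {("C", "C"): 2, ("C", "D"): 0, ("D", "C"): 3, ("D", "D"): 1}

def total_payoff(row_moves):
    """Row's undiscounted total payoff against a one-round punisher."""
    total, column_move = 0, "C"          # Column starts by cooperating
    for row_move in row_moves:
        total += PAYOFFS[(row_move, column_move)]
        column_move = "D" if row_move == "D" else "C"   # punish next round
    return total

print(total_payoff(["D", "D", "D"]))  # 3 + 1 + 1 = 5
print(total_payoff(["D", "C", "C"]))  # 3 + 0 + 2 = 5
print(total_payoff(["C", "C", "C"]))  # 2 + 2 + 2 = 6
```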

The logic extends to games involving more than two players. One should distinguish n-person games in which players randomly meet in pairwise encounters, either cooperate or defect, then move on to other encounters with different people within the group; and games in which everyone must cooperate for the production of some collective good. An example of the first is a sales contract between two merchants who belong to a trade association: each merchant expects the next contract to be with some other merchant. An example of the second is the cleanliness of the community in which no one litters or the safety of a community in which everyone contributes to defense.

But the logic of the two examples is the same. As long as everyone has a low discount rate, has sufficient information about the past activities of everyone else, and adopts a sufficiently cooperative strategy, cooperation is possible.

This argument is well known because it shows that the prisoner’s dilemma does not necessarily defeat cooperation. It shows that a few reasonable assumptions about the context in which cooperation occurs convert a prisoner’s dilemma into a game in which cooperation is a natural outcome. I say that these assumptions are reasonable because they accord with intuition and experience: people are more likely to cooperate when they expect to have repeated dealings with each other than when they expect never to see each other again. And we will see that many institutions, both government-created and privately created, enhance cooperation by replacing the conditions of the prisoner’s dilemma with conditions that promote cooperation, in particular, conditions that facilitate the development of reputation. In this model, reputation refers to people’s beliefs about one’s history as a cooperator or defector.

This argument, however, suffers from several problems. First, it assumes that the parties correctly interpret each other’s actions. This is not always the case. In two-person games Row and Column might agree on a certain quality level for Column’s products, then disagree about the quality of the products that Column delivers. As the number of players rises, it becomes less and less likely that informal observation, learning, and gossip will be sufficient to keep track of the defectors. Although perfect information is not necessary for partial and even complete cooperation—outcomes might be reliable proxies for moves, and error may cause only partial defection rather than complete unraveling—the demands on information are high.¹

Second, the argument assumes that players will choose the proper strategies, and believe that others will choose the proper strategies. If Row believes (correctly or not) that Column will always defect, then Row will always defect. Even if Column chooses a relatively cooperative strategy that requires defection only for one round after Row defects, Row’s strategy will result in both parties always defecting. As P. T. Barnum said of a country store that he managed, “The customers cheated us in their fabrics, we cheated the customers with our goods. Each party expected to be cheated, if it was possible” (quoted in Greenberg 1996, p. 11). Computer studies suggest that the optimal strategy might be to defect only after a pair of defections, as a way of reducing the variance caused by noise (Wu and Axelrod 1995), but in any context there will be an indefinitely large set of strategies that might seem reasonable to a player. In n-person games it is necessary that (1) parties either communicate among themselves (with little error) the outcomes of rounds or observe them directly, (2) parties remember a person’s history as a cooperator and defector, and (3) parties adopt extreme strategies, such as defecting perpetually after observing a single defection (Kandori 1992). It is not clear that one should expect players to realize that these strategies are appropriate; indeed, they do not appear to correspond to real world behavior. The general point is that the models provide a case for the possibility of cooperation (in the face of the prisoner’s dilemma’s case for the impossibility of cooperation), but do not guarantee that cooperation will occur or even that the likelihood of cooperation is high.
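A small simulation illustrates the point about mismatched strategies. In this sketch (an illustration, not one of the cited models), Row is committed to defecting while Column plays the forgiving strategy just described, cooperating first and defecting only in the round after Row defects; the result is mutual defection from the second round on.

```python
# Illustrative sketch: a committed defector against a forgiving, one-round punisher.
def always_defect(opponent_history):
    return "D"

def tit_for_tat(opponent_history):
    # Cooperate first; afterwards repeat the opponent's previous move.
    return "C" if not opponent_history else opponent_history[-1]

def play(row_strategy, column_strategy, rounds=5):
    row_history, column_history, outcomes = [], [], []
    for _ in range(rounds):
        row_move = row_strategy(column_history)       # Row sees Column's past moves
        column_move = column_strategy(row_history)    # Column sees Row's past moves
        outcomes.append((row_move, column_move))
        row_history.append(row_move)
        column_history.append(column_move)
    return outcomes

print(play(always_defect, tit_for_tat))
# [('D', 'C'), ('D', 'D'), ('D', 'D'), ('D', 'D'), ('D', 'D')]
```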

Third, as mentioned, the argument assumes that the parties have sufficiently low discount rates, when, in fact, some people may have high discount rates. People with high discount rates will not cooperate or at best will achieve a low level of cooperation, depending on the situation. People with low discount rates might not be able to cooperate even with each other, if they
