Explicit and Implicit Incentives for Multiple Agents (Preliminary version)
Jonathan Glover
Tepper School of Business, Carnegie Mellon University
Acknowledgments. I thank Stefan Reichelstein for the invitation to write this paper and Xu Jiang for
helpful comments. This paper is based largely on my previous research. I thank my co-authors of the
related papers: Tim Baldenius, Joel Demski, John Fellingham, Jack Hughes, Yuji Ijiri, Carolyn Levine,
Pierre Jinghong Liang, Uday Rajan, K. Sivaramakrishnan, Shyam Sunder, Hao Xue, Richard Young, and particularly Anil Arya. I also thank David Schmeidler for his excellent course on mechanism design and subsequent discussions on “simpler” mechanisms, and Matthew Jackson for his encouragement with that work.
February 2012

Abstract. This paper presents research on three themes related to multiagent incentives.
The goal of all three approaches is to find theories that better explain observed institutions (mechanisms) than the standard approach, which relies entirely on explicit incentives and produces optimal mechanisms that seem overly fine-tuned to the environment.
1 Introduction Mechanism theory treats institutions as games designed to evoke particular play (honest reporting, income smoothing, audit effort, cooperation in teams, etc.). In accounting, research on mechanism design has focused largely on principal-agent models. In the standard principal-single agent model, the principal offers the agent an incentive contract and then leaves the agent with what is essentially a decision problem. Yet, even simple principal-agent models can produce complicated optimal contracts. For example, an important result from moral hazard models with a risk averse agent is that all informative variables will be incorporated into the optimal contract (Holmstrom, 1979). Holmstrom’s result provided a theory of relative performance evaluation (Antle and Smith, 1986) and refined our traditional notion of controllability to “conditional controllability” (Antle and Demski, 1988). The broader information content school of accounting theory has developed a better (more nuanced) understanding of a wide variety of managerial and financial accounting practices (Demski, 2010; Christensen and Demski, 2003).
With a large number of informative variables, which seem inevitable in practice, the optimal contract Holmstrom predicts would be overwhelmingly complex. Even when there is only a single variable to contract on, the optimal contract can be extremely sensitive to the underlying details of the environment. For example, when a risk-neutral agent is subject to moral hazard, the optimal contract can have the principal making a very large payment to the agent with a very small probability. If the probabilities differ from those assumed, the principal may end up paying the agent much more than is required or fail to motivate the agent to take the action she intends. Real-world incentive contracts seem less fine-tuned to the environment.
When a principal contracts with multiple agents, even more extreme results emerge. For example, in multiagent models of capital budgeting with risk-neutral agents who operate in correlated cost environments, the optimal contract prescribes some payments that are arbitrarily large and others that are arbitrarily small (negative) as the correlation becomes small. These arbitrarily large and small payments allow the principal to extract all of the agents’ information rents and obtain the first-best solution as long as there is any correlation in project returns. The ease of achieving first-best payoffs and the knife-edged nature of the optimal contract make it suspect as an explanation of anything we see in practice (Cremer and McLean, 1988).
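The scale of these payments is easy to see in a small numerical sketch. The construction below is my own two-type illustration of the idea (not the Cremer-McLean mechanism itself): the side bets that separate the types must be inversely proportional to the wedge between the types' conditional beliefs about each other, and that wedge vanishes with the correlation.

```python
def side_bet(rho, rent=1.0):
    """Payments to one agent as a function of the OTHER agent's report.
    Binary symmetric types: P(other low | own type low) = (1 + rho) / 2 and
    P(other low | own type high) = (1 - rho) / 2, so the belief wedge is rho.
    Solve for (z_low, z_high) so that a truthful low type expects zero and a
    high type mimicking a low type expects -rent (the rent is extracted)."""
    p_truth = (1 + rho) / 2   # low type's belief that the other reports low
    p_mimic = (1 - rho) / 2   # high type's belief that the other reports low
    wedge = p_truth - p_mimic  # equals rho
    z_low = rent * (1 - p_truth) / wedge
    z_high = -rent * p_truth / wedge
    return z_low, z_high

# rho = 0.5 needs only modest bets: (0.5, -1.5)
# rho = 0.01 needs bets of magnitude roughly 50 per unit of rent extracted
```

As rho shrinks, both payments blow up, which is the knife edge referred to above: the slightest misspecification of the correlation leaves the bets wildly miscalibrated.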
With risk-averse agents, the contract is less knife-edged, because of the risk premium associated with imposing risk on the agents, but presents another problem. The optimal Bayes-Nash incentive compatible contract (the second-best solution) typically creates multiple equilibria in the agents’ subgame, and the agents may find it appealing to tacitly collude on an equilibrium other than the one the principal intends them to play (Demski and Sappington, 1984). That is, the second-best solution may induce excessive (from the principal’s perspective) coordination.
The mechanism design literature tells us we can often augment the second-best solution with additional messages in a way that eliminates the unwanted equilibria without creating new equilibria or changing the equilibrium payoffs (e.g., Ma, Moore, and Turnbull, 1988; Mookherjee and Reichelstein, 1990). These augmented mechanisms are typically quite complex, for example, employing infinite message spaces when the underlying type space is binary. The mechanisms often exploit a weakness of the Nash equilibrium concept—that best responses are not always well defined. Arguably, these (tail-chasing) mechanisms without well-defined best responses are of limited use in understanding actual institutions (Jackson, 1992). The focus of that literature is on what can and cannot be implemented, not on the form of the implementing mechanisms.
This paper presents research on three themes related to multiagent incentives.
The goal is to find theories that better explain/help us understand observed institutions than the standard approach. The paper surveys existing research and presents a few new results. Rather than presenting each of the models employed in the existing papers, two basic models are used—one for adverse selection and one for moral hazard. The goal is to present the results as simply as possible.
The paper starts with the role of explicit incentives designed by a principal who wants to ensure the equilibrium she desires the agents to play is unique. The paper presents simpler mechanisms than those usually used to eliminate unwanted equilibria (Glover, 1994; Arya, Glover, and Young, 1995; Arya, Glover, and Rajan, 2000). These simpler mechanisms rely on less demanding behavioral assumptions—two rounds of iteratively removing strictly dominated strategies—and employ smaller message spaces than the standard ones, but achieve only approximate implementation of the second-best solution. Hence, the approach represents an arbitrarily small deviation from the standard approach of searching for optimal institutions. As examples of practices that resemble the mechanisms, budgeting (and budget padding in particular) under adverse selection (Arya and Glover, 1996) and management forecasts under moral hazard (Arya and Glover, 1995) can be viewed as providing opportunities for off-equilibrium confessions.
If the agents have complete information, an extremely permissive result is obtained. Even the first-best solution can be exactly implemented via two rounds of iteratively removing strictly dominated strategies in a general principal-multiagent model of adverse selection (Arya, Glover, and Rajan, 2000). The approach relies on the reports of other agents in determining any one agent’s equilibrium allocation, while providing each agent with the opportunity to challenge what others say about him. A challenge is appealing to the agent if and only if other agents are lying about him. Put in terms familiar to accountants, when multiple managers or a manager and an auditor can both verify that something is true (e.g., the historical cost of an asset), that information should be relatively easy to contract on.
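For readers less familiar with the solution concept, the elimination procedure can be sketched in a few lines. The game below is a toy example of my own, not a mechanism from the cited papers; the point is only that the logic is mechanical and requires no equilibrium conjectures from the agents.

```python
def iesds(A, B):
    """Iterated elimination of strictly dominated pure strategies in a
    two-player game. A[i][j] is the row player's payoff and B[i][j] the
    column player's when row plays i and column plays j. Returns the
    indices of the surviving strategies for each player."""
    rows, cols = set(range(len(A))), set(range(len(A[0])))
    changed = True
    while changed:
        changed = False
        for i in sorted(rows):  # remove strictly dominated row strategies
            if any(all(A[k][j] > A[i][j] for j in cols) for k in rows - {i}):
                rows.remove(i)
                changed = True
        for j in sorted(cols):  # then column strategies, given the survivors
            if any(all(B[i][k] > B[i][j] for i in rows) for k in cols - {j}):
                cols.remove(j)
                changed = True
    return sorted(rows), sorted(cols)

# Toy game: the column player's third strategy is strictly dominated;
# once it is gone, the row player's second strategy becomes dominated,
# and then the column player's first. A unique outcome survives.
A = [[1, 1, 0], [0, 0, 2]]
B = [[2, 3, 0], [4, 3, 1]]
```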
Returning to the incomplete information case, an interesting endogenous source of correlation in the agents’ environments is a potential bailout by a principal that is more likely when preliminary signals indicate both agents’ projects are likely to fail (Arya and Glover, 2006). Allan Meltzer and others have warned for many years about the moral hazard bailouts create in a variety of settings, including banking and monetary policy.
The point here is subtler. Even if the principal takes the bailout possibility and the resultant individual moral hazard into account in designing incentives, the bailout opportunity can create an opportunity for unwanted coordinated moral hazard. Agents may purposefully coordinate their actions because it increases their chances of being bailed out. I also speculate on implications for financial reporting (e.g., the impact of fair value measurements).
The second approach the paper takes is to focus on the simplicity of the second-best solution itself by requiring that the mechanism be robust to the agents knowing more about the environment than the principal does. That is, the mechanism must be efficient for a “wide class of environments” (Wilson, 1987), formulated as an expectation across that wide class of environments without allowing the mechanism to be fine-tuned to a particular environment. These robust mechanisms help rationalize, for example, second-price auctions (Arya, Demski, Glover, and Liang, 2009) and the mixed empirical evidence on relative performance evaluation at the executive level. The auction model can be interpreted as a capital budgeting setting with mutually exclusive projects in which relative project ranking emerges as a solution to the robustness problem. Second-price auctions provide dominant strategy incentives. Hence, the result provides a Bayes-Nash foundation for dominant strategy mechanisms. Other approaches to the robustness problem (e.g., Bergemann and Morris, 2005a,b) take larger departures from the Bayesian framework. The important point in Arya, Demski, Glover, and Liang (2009) is that the optimal contracts are qualitatively similar to the standard ones when robustness is a small concern but are qualitatively different from the standard ones when robustness is a large concern, although even a small robustness consideration can convert a non-unique solution into a unique one.
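The dominant-strategy property of the second-price auction can be verified by brute force on a small grid of discrete values and bids. The check below is my own illustration (not the capital budgeting model in Arya, Demski, Glover, and Liang, 2009): for every value, every rival bid, and every deviation, bidding one's true value is never strictly improved upon.

```python
def spa_payoff(value, my_bid, rival_bid):
    """Sealed-bid second-price auction against a single rival; ties lose.
    The winner pays the rival's (i.e., the second-highest) bid."""
    return value - rival_bid if my_bid > rival_bid else 0

def truthful_bidding_dominant(grid=range(6)):
    """Check that no deviation from truthful bidding ever does strictly
    better than truth, against any rival bid, for any value on the grid."""
    for value in grid:
        for rival_bid in grid:
            truthful = spa_payoff(value, value, rival_bid)
            for bid in grid:
                if spa_payoff(value, bid, rival_bid) > truthful:
                    return False
    return True
```

Overbidding only matters when it turns a loss into a win at a price above the bidder's value; underbidding only matters when it forfeits a profitable win. Neither deviation ever helps, which is what the exhaustive check confirms.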
Casting the firm as a principal-multiple agent model in which the principal provides all incentives via explicit contracts is, at best, an abstraction of a broader relationship. As Sunder (1997) writes, the firm is “an arena in which self-motivated economic agents play by mutually agreed upon or implied rules to achieve their respective objectives.” The comparative advantage of a firm over other institutional arrangements is in enforcing implicit contracts, since the courts can enforce explicit contracts. Multiple equilibria (in the continuation game) are an essential part of implicit contracts, since self-enforcing punishments agents can impose on each other are needed to make their implicit promises credible. That is, instead of viewing multiple equilibria as undesirable, the principal can use multiple equilibria to her advantage.
The third approach this paper studies is to incorporate repeated play and implicit (relational) contracting among the agents and between the agents and principal.
Relational contracting among the agents (mutual monitoring) creates a role for joint performance evaluation, even when the joint performance measure is an aggregation of individual performance measures that could be contracted on individually (Arya, Fellingham, and Glover, 1997).
When the agents jointly produce a team output, whether the agents’ actions are strategic complements or strategic substitutes is important. Strategic substitutability limits the gains to mutual monitoring and cooperation, because the agents are tempted to collude on taking turns doing the work (think of group projects in a classroom setting) if the incentives are too low-powered and the relationship is repeated indefinitely.
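The distinction can be made concrete with a discrete cross difference. The technologies below are toy examples of my own, not the production functions of the cited papers: if one agent's effort raises the marginal product of the other's, the cross difference is positive (complements); if it lowers it, the cross difference is negative (substitutes).

```python
def cross_difference(f, low=0, high=1):
    """Discrete analogue of the cross-partial of a team technology f(a1, a2):
    positive -> one agent's effort raises the other's marginal product
    (strategic complements under a shared output-based bonus);
    negative -> strategic substitutes."""
    return (f(high, high) - f(low, high)) - (f(high, low) - f(low, low))

synergy = lambda a1, a2: a1 + a2 + a1 * a2         # efforts reinforce each other
crowding = lambda a1, a2: a1 + a2 - 0.5 * a1 * a2  # efforts crowd each other out
```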
With a two-fold repetition of the relationship, mutual monitoring is still optimal in the first period if multiple equilibria are created in the second period that the agents can use to enforce cooperation in the first period. In the two-period setting, the turn-taking collusion problem does not arise. The second-period incentives must be high-powered enough to provide individual incentives for working in the final period.
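The two-period logic reduces to a single inequality. The check below is a stylized sketch (my own parameterization, not the model of the cited papers): first-period cooperation is self-enforcing when the one-shot gain from shirking is no larger than the discounted spread between the good and bad second-period equilibrium payoffs.

```python
def first_period_cooperation_ok(shirk_gain, good_eq, bad_eq, delta):
    """Two-period relational contract: after first-period cooperation the
    agents coordinate on the good second-period equilibrium; after a
    deviation they revert to the bad one. Cooperation is self-enforcing
    iff the one-shot gain from shirking is at most the discounted
    second-period equilibrium-payoff spread."""
    return shirk_gain <= delta * (good_eq - bad_eq)

# A modest shirking gain is deterred by the threat of switching
# equilibria; a large one is not.
```

This is exactly why the principal needs the second-period contract to create multiple equilibria: with a unique continuation equilibrium, the spread is zero and no first-period promise is credible.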
Mutual monitoring can be viewed as a natural (more cooperative and more decentralized) alternative to the ratting mechanisms discussed earlier. The implicit contracting approach casts accounting as a means of setting the stage for implicit contracting rather than an all-encompassing source of information and contracting.
Earlier approaches to mutual monitoring (e.g., Itoh, 1993) abstracted from the repeated relationships that are used to enforce implicit contracts. Instead, Itoh and others assumed agents can write explicit side-contracts with each other, often describing explicit side-contracting as an abstraction of a repeated relationship. The implicit contracting approach is relatively under-explored (particularly finitely repeated implicit contracts) and has the potential to yield new insights about observed practices (e.g., the evolution of incentives over a manager’s tenure).
When the principal also uses implicit contracts, bonus pools emerge in response to the principal’s limited ability to make commitments but only as an extreme form of the optimal relational contract (Glover and Xue, 2011). In addition to the usual importance of discounting (patience), whether the agents’ actions are strategic complements or strategic substitutes turns out to be key in a team production setting (Baldenius and Glover, 2011). With individual production technologies, the advantages of particular compensation schemes—relative performance evaluation, joint performance evaluation, and individual performance evaluation—depend on the opportunities they create for both wanted coordination (cooperation) and unwanted coordination (collusion), because of the strategic payoff substitutability, complementarity, or independence they create (Glover and Xue, 2011).
The remainder of the paper is organized into three sections. Section 2 studies adverse selection, and Section 3 studies moral hazard. Section 4 concludes.
2 Adverse Selection