The problem I am going to discuss here has many names. In game theory, it is known as the principal-agent problem. In the theory of contracts, it is known as incentive compatibility. In the realm of artificial intelligence, it is known as the alignment problem.
As a manager, I want to get the best performance from employees for any given level of compensation. In theory, the way to do this is to set up a system in which bonuses, pay raises, and promotions are earned by employees who work toward what I want. Achieving this alignment between behavior and compensation is not as easy as it might seem.
I might design a compensation system with the wrong weights. For example, I want a loan officer to approve low-risk loans and to reject high-risk loans. If my compensation system rewards approvals too much without sufficiently penalizing the approval of high-risk loans, the incentives will lead to a loan portfolio that is too risky.
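The loan-officer example can be made concrete with a toy model. All numbers here are hypothetical, chosen only to illustrate how a bonus that rewards approvals without sufficiently penalizing defaults can make the officer's and the bank's interests diverge.

```python
# Toy model with hypothetical numbers: an officer earns a bonus per
# approved loan and pays only a small penalty when a loan defaults,
# while the bank bears the full loss of a default.

BONUS_PER_APPROVAL = 100      # officer's reward for each approval
PENALTY_PER_DEFAULT = 150     # officer's penalty per defaulted loan
BANK_PROFIT_IF_REPAID = 500   # bank's gain on a repaid loan
BANK_LOSS_IF_DEFAULT = 5000   # bank's loss on a defaulted loan

def officer_expected_pay(p_default):
    """Officer's expected compensation for approving one loan."""
    return BONUS_PER_APPROVAL - PENALTY_PER_DEFAULT * p_default

def bank_expected_profit(p_default):
    """Bank's expected profit on one approved loan."""
    return (BANK_PROFIT_IF_REPAID * (1 - p_default)
            - BANK_LOSS_IF_DEFAULT * p_default)

# A high-risk loan with a 30% chance of default:
p = 0.30
print(officer_expected_pay(p))   # 55.0  -> positive: officer approves
print(bank_expected_profit(p))   # -1150.0 -> the bank expects a loss
```

With these weights, the officer's expected pay stays positive for any default probability below two-thirds, so the incentives point toward approving loans the bank should reject. Raising the default penalty (or tying pay to the bank's realized profit) would realign the two payoffs.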
A compensation system might put too much weight on clearly measurable attributes. If I pay writers by the word or software developers by the number of lines of code, I am neglecting to reward quality.
Organizations often require teamwork. If my performance metrics are all individually focused, I am inducing my employees to compete with one another, undermining the incentives for cooperation.
Compensation systems usually must leave room for informal judgment. There are desirable employee actions and behaviors that I cannot spell out ahead of time. When the time arrives for a performance review that will determine the employee’s bonus, raise, and/or promotion, I will want to take these unarticulated factors into account. But this leaves me open to charges of bias. Or the employee might think I played a game of “gotcha,” penalizing him on the basis of criteria I had not spelled out ahead of time.
Employees will want to maximize their compensation relative to the effort that they supply. This means that they will try to “game the system” in ways that I did not anticipate when I put the compensation plan into operation.
One way to game the system is to provide reports that are false or misleading. David Halberstam’s book on the Vietnam War, The Best and the Brightest, is filled with insights about organizational behavior. One theme that runs through the book is the way that officials in Washington became obsessed with finding and disseminating reports that indicated progress. Officers in the field learned that it was pointless to try to pass realistic assessments up the chain of command. Instead, good news would be appreciated and rewarded. As Halberstam saw it, the American military created an incentive system for lying to itself.
Many organizations rely primarily on informal criteria for promotion and compensation. In academia, the criteria for granting tenure will vary by department. The criteria will not be completely spelled out. Each person who has a vote in the tenure process makes a determination on the basis of "I'll know it when I see it." While this avoids what Jerry Muller calls "the tyranny of metrics," it introduces other forms of bias and misalignment into the process.
In fact, I get the impression that misaligned incentives are a severe problem in academia. What gets rewarded in research are minor contributions that add to the prestige of leaders in the field. These leaders may not appreciate breakthrough ideas, and they may even suppress such ideas through the peer-review process at journals and through their hiring choices.
Incentive misalignment also is rife in government. Managers reward behavior that makes their lives easier, regardless of whether it helps to achieve goals that serve the public.
The worst problem is when the incentive system selects for the wrong behavior. Military organizations are notorious for promoting generals in peacetime who are skillful at bureaucratic politics but useless at battling the enemy.
With so much science funded by government, we have nurtured a culture of grant-writers who lack the boldness and originality to generate important discoveries. The process is also heavily biased against young researchers and against those who lack a strong base of institutional support.
Financial regulation is a realm in which the means to game the system are available and the motivation to do so is strong. New financial instruments and arrangements are developed to satisfy the letter of regulation while evading the spirit. The banks tend to be more adaptable and more determined than their regulators.
Incentive systems will tend to be better designed in profit-seeking businesses than in non-profits. In the profit-seeking world, if your firm’s compensation system does a better job than my firm’s compensation system of incentivizing employees to efficiently provide good service, then your firm will be more profitable while my firm’s profits will suffer. I will have to either figure out and implement a better way to do compensation, or I may go out of business. Competition and evolution will tend to favor compensation systems that achieve better alignment.
One phenomenon that you will notice in business is that a firm will overhaul its compensation system every few years. That may reflect changes in overall strategy. But it also may reflect the fact that, over time, employees get better at gaming an existing system. As you discover how they are taking advantage of weaknesses in the system, you have to modify it in order to get better performance.
The problem of setting up and implementing a compensation system is never solved. There is an ongoing game in which both management and employees are learning how to achieve their goals and adopting new behaviors in response to one another's moves. The worker wants to get the most compensation for the least effort. The manager wants to get the worker to supply the most constructive effort for the least compensation.
This essay is part of a series on human interdependence.
Re: "I get the impression that misaligned incentives are a severe problem in academia."
There are deeper principal-agent problems in academia, especially in the context of selective, residential colleges or universities with a major endowment:
(a) Who is the principal?
The Board of Trustees, who have formal governance authority?
Or alumni, who have a lifelong stake in their alma mater's reputation?
Or the faculty, who enjoy tenure and formal governance authority over curriculum, appointments, and promotions?
Or the students, who are at once customers, inputs to one another's education and campus experience, and future alumni 'owners' of the credential (degree) that will signal the right stuff for career and marriage?
(b) What is the maximand ("mission")?
Mission statements are wildly grand -- promise the moon to students -- and are divorced from any performance check. See Brennan & Magness, *Cracks in the Ivory Tower,* ch. 3: "Why Most Academic Advertising Is Immoral Bullshit."
The much-maligned third-party rankings of universities are indeed incomplete, imperfect performance metrics, but they nonetheless constitute a rare bullshit-detector in higher education.
Mission capture is pervasive:
Faculty utilize their governance role and power within the organization to distort the mission for political agendas.
Administrators build bureaucracy for totalitarian governance of "student life."
Students demand country-club amenities and extra-curricular programs that crowd out education.
Targeted gifts by alumni indirectly dissipate in extraneous "budget relief."
Arnold,
I am not going to disagree with your headline conclusion. It is counterintuitive to many and yet absolutely correct. If you want a long-term incentive structure based on short-term feedback cycles, you need to change the feedback criteria periodically to avoid short-term optimization that fits the requirements of the immediate feedback structure but doesn't actually hit the end goal in the relevant longer time frame. Instead, you need to keep changing which aspect or dimension of performance is in the spotlight, with lagging and leading (anticipatory) behaviors carrying much of the integration burden. In short, the performers can't move fast enough to give you exactly what you say you want most, so you get something closer to what you really want.
In the context of economics, business schools, and management consultants, your article is a corrective to the various delusions that say some new incentive system is a panacea. Instead, you actually need some of these fads... ironically, because the churn prevents people from effectively gaming an even higher-scale system across many organizations. But it creates an opportunity to game as well - one that is being played by those consultants, in waves.
The real headline about incentive structures is not that there is some difference in interests between principal and agent - which can exist and is worth calling out on occasion, but feels very much like a moral judgment on the dishonest steward - but that optimizing any system requires suboptimizing its subsystems. In short, you give someone a job and they do it to the best of their abilities, and you still don't get what you want because their job is only part of the whole. You need to make it impossible to do the job with perfect efficiency, because that job is not the focus of the whole of society. The metric is not the target, the map is not the territory, whatever you want to say about referents and reality. See *Seeing Like a State* for how legibility is naturally necessary and distorting.
As an evolutionary biologist by training, practice, and perhaps now inclination, I spend quite a bit of time thinking about incentives. In vitro and in vivo, incentives matter and can be ruthless. In vivo, they ruthlessly lead to extinction of poor performers. In vitro, they ruthlessly lead to frustrated bioengineers, because the organisms play any game you give them better than you had imagined they could. So you end up changing the game constantly. One of the tropes of large-scale biological engineering these days is to create two genomes in a single organism and switch them on and off, so that one takes the culture from a small population to a large population in a chemical production batch, and then the other efficiently runs the biochemical process you desire most. At the end of the batch, you change the game again - by sterilizing the entire thing and killing them all. It's crude and inefficient, in reality, but these are the lengths we go to in order to avoid principal-agent problems. It wouldn't be popular in corporations, but...
The question of private sector versus public sector is not necessarily that profit motives create the best incentive structures, but that the kinds of incentives they internalize are the most readily aligned at a scale compatible with human thriving. Biology also creates great incentive structures, but they tend to look like an existential threat at the society level, because that's the cost of failure. I largely agree with you about the problem that non-profits pose in current society - but steelmanning requires that you better explore the cases that demand non-profit NGOs, or perhaps 'tax-exempt/privately funded' would be a better nomenclature.