When Deliberation Isn’t Smart

Cooperation, defection, and intuitions in the workplace


By Adam Bear and David Rand

Cooperation is essential for successful organizations. But cooperating often requires people to put others’ welfare ahead of their own. In this post, we discuss recent research on cooperation that applies the “Thinking, Fast and Slow” logic of intuition versus deliberation. We explain why people sometimes (but not always) cooperate in situations where it’s not in their self-interest to do so, and show how properly designed policies can build “habits of virtue” that create a culture of cooperation. TL;DR summary: intuition favors behaviors that are typically optimal, so institutions that make cooperation typically advantageous lead people to adopt cooperation as their intuitive default; this default then “spills over” into settings where it’s not actually individually advantageous to cooperate.

Life is full of opportunities to make personal sacrifices on behalf of others, and we often rise to the occasion. We do favors for co-workers and friends, give money to charity, donate blood, and engage in a host of other cooperative endeavors. Sometimes, these nice deeds are reciprocated (like when we help out a friend, and she helps us with something in return). Other times, however, we pay a cost and get little in return (like when we give money to a homeless person whom we’ll never encounter again).

Although you might not realize it, nowhere is the importance of cooperation more apparent than in the workplace. If your boss is watching you, you’d probably be wise to be a team player and cooperate with your co-workers, since doing so will enhance your reputation and might even get you a promotion down the road. In other instances, though, you might get no recognition for, say, helping a fellow employee meet a deadline or covering for one who calls out sick.

A major aim of just about any organization is to promote cooperative behavior amongst its members: in general, companies (and governmental organizations) perform better when their employees work together rather than single-mindedly pursue their own personal goals. Managers who understand this fact institute policies that incentivize cooperation (e.g. through bonuses, promotions, or public recognition) and disincentivize defection (e.g. through fines, demotions, or public shaming)—the goal being to make it worth employees’ while to cooperate.

But, of course, these policies can only do so much: even with such incentives in place, somebody looking to exploit the system could find plenty of opportunities to free-ride without getting caught, thereby undermining the organization’s success. A key challenge for managers and policy makers, therefore, is to encourage cooperation even in the absence of institutional carrots and sticks.

In a recent paper published in the Proceedings of the National Academy of Sciences, we present a formal mathematical model that explores this relationship: how does incentivized cooperation relate to “pure” cooperation that occurs beyond the reach of incentives?

In this model, virtual agents interact with each other and receive various payoffs based on how they, and those with whom they interact, behave. As in the real world, our agents encounter a variety of situations in which they could pay a cost to cooperate, or could instead defect. In some of these situations, agents are rewarded for cooperating (and punished for defecting), whereas in other situations, agents always get a higher payoff from defecting. In other words, the first case models situations where employees are explicitly incentivized to be team players (e.g., with public recognition or the promise of a promotion); the second case, conversely, models situations where employees can help each other, but won’t get “credit” for doing so.

Unlike classical economic models, we incorporate a more sophisticated take on decision-making from behavioral economics and psychology (recently popularized by Nobel laureate Daniel Kahneman). Instead of always carefully reasoning their way through their decisions, our agents sometimes use intuition – a generalized “gut feeling” (or heuristic) about the best way to act that doesn’t depend on the specifics of the situation being faced. These intuitive responses have the advantage of being quick and not requiring much cognitive effort, but the limitation of being insensitive to the situation at hand.

When, on the other hand, agents do choose to think carefully, or “deliberate”, they realize whether it is in their self-interest to cooperate or not, and get to choose accordingly. But deliberation comes at a cost: thinking takes time and effort. And it can even damage your social reputation if you come off as a “calculating” kind of person.
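The mechanics described above can be sketched in a few lines of code. This is our own illustrative toy version, not the paper’s exact parameterization: the payoff values `b` and `c`, the uniform deliberation-cost distribution, and the `play_round` function are all assumptions made for the example.

```python
import random

def play_round(intuition, threshold, p, b=4.0, c=1.0):
    """Payoff from one interaction for a dual-process agent.

    intuition -- default action: 'C' (cooperate) or 'D' (defect)
    threshold -- deliberate only when this round's thinking cost falls below it
    p         -- probability that this interaction rewards cooperation
    b, c      -- benefit received when cooperation is incentivized, and the
                 cost of cooperating
    """
    incentivized = random.random() < p   # does cooperation pay off here?
    cost = random.uniform(0, 1)          # this round's cost of deliberating

    if cost < threshold:
        # Deliberate: pay the thinking cost, then choose the
        # payoff-maximizing action for this specific situation.
        action = 'C' if incentivized else 'D'
        thinking_cost = cost
    else:
        # Go with the gut: apply the context-insensitive default for free.
        action = intuition
        thinking_cost = 0.0

    payoff = ((b - c) if incentivized else -c) if action == 'C' else 0.0
    return payoff - thinking_cost
```

Averaging `play_round` over many interactions for different `(intuition, threshold)` pairs shows the article’s qualitative point: which strategy performs best depends on `p`, the frequency with which the institution rewards cooperation.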

We then use game theory to figure out the best strategy for our agents. The answer, crucially, depends on the institutional environment.

First, consider institutions that rarely provide incentives for people to cooperate (i.e., defection is the payoff-maximizing option in most social interactions) – for example, employees in companies that only reward individual achievement and don’t penalize back-stabbers. Under such institutions, the optimal behavior is to develop a selfish non-cooperative gut response, and to always go with that gut response (i.e. to never stop and consider whether future consequences exist, because they typically don’t). This lack of deliberation means that these agents won’t even cooperate in the (relatively rare) instances in which it could be payoff maximizing for them to do so.

The results are very different, however, for institutions that do typically provide incentives for people to cooperate. Not only is it optimal to have the opposite heuristic—intuitive cooperation—but it is also sometimes worth deliberating. In other words, the best-performing strategy cooperates by default, but occasionally checks whether it’s in a situation where it can get away with defecting.

Interestingly, the more the institution incentivizes cooperation, the less it’s worth bothering to deliberate, and the more likely agents are to just stick with their cooperative gut response. So strengthening institutional incentives to cooperate doesn’t just make people more likely to cooperate when these incentives are present, but also makes people more likely to intuitively cooperate when these incentives aren’t present and people could get away with defecting.
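A back-of-the-envelope calculation shows why. This is our illustrative framing, not the paper’s exact analysis, and the cooperation cost `c` is a made-up parameter: an agent whose intuition is to cooperate gains from deliberating only in the (1 − p) fraction of interactions where defection pays, where deliberating saves the cooperation cost.

```python
# Illustrative sketch: an intuitive cooperator gains from deliberating only
# in the (1 - p) fraction of interactions where defection pays, where it
# saves the cooperation cost c. So a deliberation cost d is worth paying
# only when d < (1 - p) * c.

def max_worthwhile_deliberation_cost(p, c=1.0):
    """Largest thinking cost an intuitive cooperator should ever pay."""
    return (1 - p) * c

# As incentives become more common (p rises), the deliberation budget shrinks:
for p in (0.5, 0.7, 0.9, 0.99):
    print(f"p = {p}: deliberate only if cost < "
          f"{max_worthwhile_deliberation_cost(p):.2f}")
```

As `p` approaches 1, the threshold shrinks toward zero, and the agent is best off never second-guessing its cooperative gut response.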

This isn’t just theoretical: in a paper recently published in Management Science, Rand and co-author Alex Peysakhovich present experiments that provide empirical evidence of these effects. Participants were given money, and chose whether to keep it for themselves, or give it up to benefit another participant. In the first part of the experiment, they were assigned to one of two institutions: a “strong” institution that created future consequences (people were likely to interact again in the next round with their current partner), and a “weak” institution where participants could get away with being selfish (partners were remixed frequently, and not informed of each other’s behavior with previous partners). In the second part of the experiment, participants had the chance to pay to help anonymous strangers. Participants who were habituated to cooperating by the strong institution were much more cooperative, altruistic, and trustworthy in the second stage compared to participants who got used to being selfish under the weak institution. Furthermore, this change in willingness to help without future reward in stage two was much more pronounced in participants who tended to rely on intuition; deliberative participants were relatively unaffected by their experiences in stage one.

Taken together, these theoretical and experimental results demonstrate the immense role that institutions can play in establishing norms of cooperation. When institutions foster these norms, they don’t just compel people to cooperate when it’s in their self-interest to do so; they—by shaping heuristics—lead people to cooperate even when it’s not in their self-interest to do so. In other words, if employees work at an organization where teamwork is encouraged and frequently rewarded, these employees will also be more likely to help colleagues out even when such acts go unnoticed.

Adam Bear is a 3rd-year Ph.D. student in Psychology at Yale. His main research explores the interplay between unconscious, intuitive mental processes and conscious, deliberative processes across a variety of domains, including cooperation, choice, and visual perception. His current work considers not only how the mind makes use of both kinds of cognition, but also why we would evolve to do so in the first place.

David Rand is an assistant professor of Psychology, Economics, and Management at Yale University, and director of Yale’s Human Cooperation Laboratory. His research combines theoretical and experimental methods to explain the high levels of cooperation that typify human societies, and to uncover ways to promote cooperation in situations where it is lacking. He has argued that intuitive processes play a key role in supporting cooperation, that social incentives like recognition and reputational benefits are powerful tools for increasing cooperation, and that leniency and forgiveness are smart strategies for success in our accident-prone world.

Originally published here.

January 25, 2016




  • The article posits a difference between the intuitive cooperator and the deliberative, calculating cooperator. The intuitive cooperator seems more automatic in their compliant cooperation, whereas the calculator is characterized as perhaps being perceived as more selfish. However, the framework of the study does posit two extrinsic rewards: reputation enhancement and the probability of promotion. Setting aside the dubious possibility of actually earning these two rewards, the act of cooperation is characterized as necessary for the organization, a sort of fiduciary commitment or responsibility. So there is something of a complexity here: cooperation is encouraged, and non-cooperation possibly punished, because cooperating is principled behavior necessary for the organization’s success; yet the extrinsic rewards of reputation and promotion are calculable probabilities, perhaps turning on whether other key personnel find out that one has or has not been cooperative. The intuitive cooperator is constructed as cooperating automatically, without calculating personal gain or loss, but the nature of the intrinsic reward is not spelled out – is the intuitive cooperator merely conditioned to cooperate automatically? On the darker side, we could posit that there are intuitive punishers or enforcers of certain rules to cooperate, alongside those who calculate whether to enforce “rules” and punish others, perhaps differentially accepting calculating behavior from people they like. In sum, it is not exactly clear how the distinction between types of cooperators can be justified. Are intuitive cooperators considered better? Have they developed individually under particular contingencies where learning to cooperate automatically was always approved? Game theory scenarios are insightful, but theory needs to be complemented by situational descriptions and analyses.

  • Travis Higgins

    Let’s say I’m a manager that is convinced of the enormous payoff that comes from incentivizing cooperation.

    The first thing I could do is remove DISincentives. That is, any obvious management-driven impediments to cooperation, like individual metrics/targets and performance-based personal reviews.

    After that, I’d need to be more creative and look for ways to INcentivize. I may find that there is a need to make trade-offs between short-term results and dedicating time to efforts that help “re-program” intuitive cooperation.

    QUESTIONS:
    (1) How do I assess my company on cooperative “readiness”?
    (2) Is there a method for characterizing actual institutions as “strong” or “weak” that stands up to this kind of testing? That is, can we effectively evaluate a work environment such that we can predict how its members will respond to these kinds of cooperation games?
    (3) Assuming I know how bad my problem is, how do I decide what to do about it? How do I prioritize teamwork-incentivizing efforts in order from most to least helpful?

  • If by organizations you mean companies, I believe you look past the boundaries of their walls (metaphorically speaking). Rather than directly incentivize employees’ ‘goodness’ – create an environment of benevolence and giving towards the community. A company can give time off for sponsored volunteer projects. And by projects I don’t just mean donating money or walking in a cancer 5K; let employees organize projects that directly help the community the company resides in. These could include mentoring programs for young people, playground restoration, and shut-in food relief.

    There is a concept called obliquity, which states that goals are best achieved via indirect means. Jack Welch, former iconic CEO of GE, once said he doesn’t concern himself with quarterly profits but rather with making sure GE is the leading innovator in the niches where it competes. If that happens, he said … the profit will be there. I believe a company or organization should take a similar approach, especially when it comes to benevolence. Also, why would a company want to be so selfish that it keeps the ‘goodness’ of its employees all to itself? Ultimately the wellbeing of a company will depend on the community in which it’s housed.