By Garett Jones
We humans are a social species: we rely on each other to get things done. Whether it’s building a car, creating a happy marriage, or holding a potluck dinner at church, we usually need to cooperate in order to achieve the big successes in life.
Why is cooperation so hard? Because cooperating is often against your own best interest. When you’re going to a potluck dinner, the smart thing to do is to bring a bag of chips while sampling other people’s delicious casseroles. At some point before you arrive, you might think, “If everyone does that, then all we’ll have at the potluck is twenty bags of chips.” And that’s true enough, but you have no influence over whether those other nineteen people bring chips or casseroles, so why not do what’s best for yourself: chips it is.
But in real life, cooperation is fairly common, even when the temptation to betray is strong. Why? Economists found one solution in the early days of the field known as “game theory”: that once you turn a one-shot prisoner’s dilemma into a repeated game, it’s possible for selfish players to rationally cooperate with each other, not out of a sense of generosity but out of pure self-interest. This result—that repetition can turn lemons into lemonade—is known as the “folk theorem.” That’s because it seemed fairly obvious once people started thinking about it, and no one economist was really willing to take credit for the idea.
One researcher—a political scientist, Robert Axelrod—went further than this. He saw repeated prisoner’s dilemmas (RPDs) everywhere in politics and society, and so he concluded that if he could find out how to get people cooperating rather than descending into bitter defection, he could help make the world a more peaceful place. It sounds a bit naive—but it was nothing of the sort. Axelrod’s research, summed up in his excellent book The Evolution of Cooperation, is still used by peace negotiators, labor-management mediators, and nuclear arms reduction experts. His is an agenda that has made the world a better, safer place. And it began by just taking the repeated prisoner’s dilemma seriously, so seriously that Axelrod decided to get a lot of social scientists together to play some games.
Axelrod ran a competition—not in real life, but on some 1970s-era computers. He invited social scientists, mathematicians, anyone interested to submit a simple computer program giving instructions to one of the electronic “players” in a two-person repeated prisoner’s dilemma game. The winner of the tournament would be the contestant whose program could win the most points when pitted against the other computer programs. The most points, naturally, clocked in when the other player cooperated while you defected; when both cooperated you got a good outcome, but not as good as when you were exploiting the other player.
So dozens of researchers proposed dozens of computer programs for the tournament. As you can imagine, some programs were quite sophisticated, looking for ways to dupe the other computerized player into cooperating so that the entrant could exploit his partner for at least a few rounds of the tournament. But not every program was sophisticated; in fact one program followed the simplest rule possible: “always cooperate.” Which computer program—which strategy—won the entire tournament? It’s known by the phrase: tit for tat.
Tit for tat follows the simplest of rules: cooperate on the first move, then do whatever your opponent did on the previous move. It combines an open right hand with an armed left hand. In a society filled with tit for tatters, people would always cooperate, not because those people were doormats or naïfs, but because any potential cheater would know that she would be quickly punished. So, tit for tat is a good strategy—something worth keeping in mind the next time you have an argument with the neighbors over who should fix the broken fence.
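The tournament dynamic is easy to sketch in code. Below is a minimal simulation of tit for tat in a repeated prisoner's dilemma, not Axelrod's actual tournament software; the payoff numbers (5 for exploiting a cooperator, 3 for mutual cooperation, 1 for mutual defection, 0 for being exploited) are the standard values from Axelrod's tournament, assumed here, and what matters is their ordering, which is exactly the one described above:

```python
# Repeated prisoner's dilemma: 'C' = cooperate, 'D' = defect.
# PAYOFF[(my_move, their_move)] -> my points for the round.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def tit_for_tat(opponent_history):
    """Cooperate on the first move, then copy the opponent's last move."""
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return 'D'

def always_cooperate(opponent_history):
    return 'C'

def play(strategy_a, strategy_b, rounds=10):
    """Total scores for two strategies over a repeated game."""
    seen_by_a, seen_by_b = [], []   # each side's record of the other's moves
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strategy_a(seen_by_a), strategy_b(seen_by_b)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        seen_by_a.append(b)
        seen_by_b.append(a)
    return score_a, score_b

# Tit for tat is exploited exactly once by a pure defector, then punishes:
print(play(tit_for_tat, always_defect))     # (9, 14)
# Against a cooperator it settles into stable, high-scoring cooperation:
print(play(tit_for_tat, always_cooperate))  # (30, 30)
```

Note how the defector "wins" the head-to-head match yet earns only 14 points; two tit for tatters paired together earn 30 each, which is why the strategy racks up the most points across a whole tournament.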
But Axelrod wanted to do more than just pinpoint a good computer program: he tried to distill the essence of what made tit for tat and a few similar strategies work so well in order to convey those lessons to the world. He came up with some principles for encouraging cooperation in repeated prisoner’s dilemma settings. Three of them matter for us: think of them as the Three P’s of the RPD. Players should
- Be patient: Focus on the long-term benefits of finding a way to cooperate—don’t just focus on the short-run pleasures, whether it’s the pleasure of exploitation or the pleasure of punishment. Axelrod calls this “extending the shadow of the future.”
- Be pleasant: Start off nice—make sure those bared teeth are part of a smile. And later in the game, follow the ABBA approach: take a chance on cooperating every now and then, even when things have gone south for a while.
- Be perceptive: Figure out what game you’re playing—know the rules, and know the benefits and costs of cooperation.
I claim that people with higher IQs will be better at all three. That higher-IQ players tend to follow the third piece of advice, “Be perceptive,” is almost obvious: higher-IQ individuals are just more likely to get it, to grok the key ideas, as sci-fi writer Robert Heinlein used to say. Not always, not perfectly, but on average individuals with high IQ are better at grokking the rules of the social game; they’re more socially intelligent. As dozens of psychology and economics experiments demonstrate, high IQ also tends to predict patient behavior. Those who see the patterns in the Raven’s Progressive Matrices also see the future. That means that in a repeated prisoner’s dilemma, they’ll tend to focus on the rewards of long-term cooperation, not the short-term thrills of punishment or exploitation.
My final claim is that higher-IQ people are nicer than most other people—at least when they’re in settings such as the repeated prisoner’s dilemma. Can that really be the case? You might expect higher-IQ people to be a little meaner in some cases—they might exploit people if they can figure out a way to do so. That might be important in some settings, but there are interesting new experiments that show how high IQ predicts generosity.
Economist Aldo Rustichini and his coauthors gave IQ tests to a thousand people enrolled in a truck-driving school, and then they had them play a trust game. A typical trust game—invented by my George Mason colleague Kevin McCabe and his coauthors—works like this. The game has just two players, each making one choice. They can’t see each other, and they never know who they’re actually playing against; in most cases, they’re just facing a computer terminal. First, Player 1 starts with $5; he then decides how much of his money (if any!) to send to Player 2 and how much to keep for himself. If some of the money is sent over, the money sent magically triples in value. So if Player 1 sent over $2, Player 2 now has $6. Player 2 now gets to decide how much money to return to Player 1; she can return nothing and keep all $6, she can return all $6 and keep nothing for herself, or she can do something in between. Since McCabe and coauthors invented this experiment, it’s been run numerous times: most players return just about the amount that Player 1 sent over—in other words, the average person is trustworthy, but no philanthropist.
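As a quick check on the arithmetic, here is the trust game's payoff logic in a few lines of Python (an illustrative sketch, not the experimenters' code; the $5 endowment and the tripling rule are the ones described above):

```python
def trust_game(endowment, sent, returned):
    """Payoffs in a one-shot trust game: Player 1 keeps whatever he
    doesn't send, the amount sent triples in transit, and Player 2
    hands back however much of the tripled pot she chooses."""
    pot = 3 * sent                  # money triples on the way over
    assert 0 <= sent <= endowment and 0 <= returned <= pot
    player1 = endowment - sent + returned
    player2 = pot - returned
    return player1, player2

# Player 1 sends $2 of his $5; Player 2 returns $2 of the $6 pot --
# roughly the typical outcome: trustworthy, but no philanthropist.
print(trust_game(5, 2, 2))   # (5, 4)
# The fully trusting, fully trustworthy split of an all-in $5:
print(trust_game(5, 5, 7))   # (7, 8)
```

The sketch makes the dilemma visible: sending everything creates the biggest total pie ($15 instead of $5), but only trust in Player 2's goodwill gets Player 1 any of it back.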
Most people are interested in the question of “Who reciprocates? Who is trustworthy?” But here we’re interested not in Player 2 but in Player 1: Who’s the biggest sucker? Who takes the chance on sending money over—without a formal contract, without being able to even see the other person? Wouldn’t we expect players with lower IQ scores to naively send over cash, in the hope that Player 2 will be generous? Wouldn’t we expect a higher-IQ Player 1 to figure out that Player 2 has no incentive to be kind? We might, but in fact, Rustichini found just the opposite: the higher-IQ students in truck-driving school sent over more money than their classmates with lower IQs. So smarter players are more likely to start off by playing nice. This result—that IQ predicts “generous” or “nice” behavior—was backed up by a German study built around a team-effort problem: a few players are each given a few Euros, and they each have to decide how much to chip in to the pot. If the total amount chipped in is greater than, say, 10€, then the pot doubles, and the amount in the pot is split equally between all the players; if not, the pot evaporates, with nobody getting anything except the money they held out of the pot. In this study, higher-IQ players put more into the pot. It’s hard to tell what their motives were: they may have chipped in out of kindness to others, or because they shrewdly calculated that they had a decent chance of being the donor who pushed the pot over the 10€ threshold. But in any case, smarter players chipped in more, and what they chipped in helped everyone in the group. They were more pleasant.
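The team-effort game can be sketched the same way. The 10€ threshold and the doubling rule come from the description above; the endowments and individual contributions below are made-up numbers for illustration:

```python
def team_effort(endowments, contributions, threshold=10, multiplier=2):
    """Threshold public-goods game: if total contributions reach the
    threshold, the pot is multiplied and split equally among all
    players; otherwise the pot evaporates and contributions are lost."""
    total = sum(contributions)
    share = multiplier * total / len(contributions) if total >= threshold else 0
    return [e - c + share for e, c in zip(endowments, contributions)]

# Four players with 5 euros each. Three chip in 3 euros apiece; the
# fourth chips in just 1 -- but that single euro reaches the threshold.
print(team_effort([5, 5, 5, 5], [3, 3, 3, 1]))  # [7.0, 7.0, 7.0, 9.0]

# Had the fourth player held everything back, the pot would evaporate.
print(team_effort([5, 5, 5, 5], [3, 3, 3, 0]))  # [2, 2, 2, 5]
```

The sketch also shows why motives are hard to separate: when cooperation succeeds, even the marginal contributor ends up better off than anyone would be if the pot had evaporated.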
Another study by Brown University economist Louis Putterman and his coauthors found still more evidence that higher-IQ individuals are more likely to start off by playing nice, by being generous team players. In this game, known as the public goods game, players individually decide how much of their own money to put in a metaphorical pot, the money doubles or triples, and then it gets divided up among the group. When you give money, you’re directly contributing to the public good. The game was repeated for a few rounds with the same team so players would have a chance to learn from each other, a chance to find a path to cooperation.
Because the game was run at Brown University, an Ivy League school where one might expect almost all students to have been raised in incredibly advantaged environments, it might seem that differences in IQ scores would be irrelevant. But in Putterman’s cooperation experiment, IQ mattered. He and his coauthors found that higher-IQ students at Brown put more money in the pot during the early rounds of the game: the higher-IQ students were more pleasant early on. That’s the smart thing to do, because extra money early on can send a signal of kindness, of cooperativeness, to the other players. And it’s worth noting that in another part of the experiment, when the students could vote on a way to penalize low contributors, higher-IQ students were more likely to vote for a rule that would penalize the non-cooperators: so higher-IQ students were pleasant, but not naive.
Intelligence as a Way to Read the Minds of Others
But just how socially perceptive are higher-IQ people? After all, being nice in a lab experiment might not translate into real-world social interactions, and while IQ predicts social intelligence in surveys, it would be good to have a concrete test of social perceptiveness.
One test by economist David Cesarini and his coauthors illustrates the ability of higher-IQ individuals to understand the minds of others. The Keynesian Beauty Contest, as it is known, is a game in which all the players are asked to pick a number from zero to one hundred. A prize will be given to the person whose guess is closest to, say, one-half of the group’s average guess. In the event of a tie they might split the prize among the best guesses. So if almost everyone chose fifty but just one person chose thirty, that lower guess would win. If the players were all perfectly rational, and they knew that everyone else in the game was equally rational, they would realize that the winning answer would be the only number that is exactly one half of itself: zero.
But people aren’t perfectly rational and—here’s the good part— people who are more rational are more likely to be aware of just how irrational most people are. So while the weaker players would pick numbers close to randomly—guessing on average fifty or a little below—someone better-skilled might realize that the group combines some sharper players with some weaker players, and so submit a guess quite a bit lower than fifty. But isn’t there a chance that higher-IQ players make the mistake of thinking that everyone is as smart as they are? Or might they overthink the situation, foolishly submitting zero as the right answer? In a study of Swedes, Cesarini and coauthors found that players with the highest IQs submitted numbers that were low but not too low; indeed they gave answers that were strikingly close to the best possible answer. By contrast, players in the bottom of the IQ distribution gave answers that tended to be far too high. IQ predicted not just individual rationality but a better view into the minds of others. A later study reached the same finding using another IQ-type test.
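One standard way to formalize “knowing how irrational most people are” is level-k reasoning, a common behavioral-economics model used here as an illustrative assumption rather than anything the text commits to: a level-0 player guesses the naive midpoint, fifty, and each higher level best-responds to the level below by applying the target factor of one-half:

```python
def level_k_guess(k, anchor=50.0, factor=0.5):
    """Beauty-contest guess of a level-k reasoner: start from the
    naive anchor (50) and apply the target factor once per level
    of reasoning about other players' reasoning."""
    return anchor * factor ** k

# Each extra level of reasoning halves the guess, converging toward
# the fully rational answer of zero.
print([level_k_guess(k) for k in range(5)])  # [50.0, 25.0, 12.5, 6.25, 3.125]
```

The winning move is to stop at the crowd's actual depth of reasoning: low, but not zero—which is roughly what the highest-IQ players in Cesarini's study managed to do.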
Overall, mental test scores predict the ability to understand the minds of others.
It’s possible that the link between IQ and cooperation won’t seem like any great surprise to you: these experiments are just games, an IQ test is a game, and people who are good at one kind of game are often good at other games. But life is a game as well.
In the field of psychology it’s well-known that higher IQ predicts greater openness to new experiences, a greater willingness to try new things out. In addition to being more open to new things, the person with the higher test score is more likely to understand the rules, more likely to figure out when being nice is worth it and when it’s a fool’s errand, and more likely to figure out when it’s best to cut her losses when the investment in kindness isn’t paying off. Assessing the situation: that’s a skill one would expect to be more common among people with higher test scores. If an entire group of individuals with higher IQs are together for a reasonably long period of time, we should expect them to find more win-win outcomes, growing a bigger pie that they can squabble over later.
I can’t tell you how many times I’ve met people from all walks of life who’ve told me that smarter people lack common sense, that they overthink and overstrategize issues to their detriment. If that were the case then smarter groups would likely turn out to be “too big for their britches” and collapse into endless rounds of cheating; failed attempts at exploitation; and continual, costly punishment. Certainly that happens sometimes, but on average, that is not the case. Smarter groups tend to be more cooperative. This finding, which shows up both in lab experiments and in free-form negotiation studies, means that intelligent groups have more social intelligence. That helps explain why countries with high average test scores usually have stronger economies and more effective governments.
Excerpt from Hive Mind: How Your Nation’s IQ Matters So Much More Than Your Own by Garett Jones.(c) 2016 by the Board of Trustees of the Leland Stanford Jr. University. All rights reserved. Published by Stanford University Press in hardcover and digital formats, sup.org. No reproduction or any other use is allowed without the publisher’s prior permission.
March 13, 2016