Am I too nice at work, or should I be more of a jerk? This question comes up a lot in my coaching conversations about leadership. As a coach, my job is not to answer that question directly. I help clients explore the question and draw their own conclusions, whilst framing and developing their own leadership style. It turns out that a branch of mathematics called game theory does provide us with an answer. According to author M. Mitchell Waldrop, in his book Complexity: The Emerging Science at the Edge of Order and Chaos:
Nice guys* – or more precisely, nice, forgiving, tough, and clear guys – finish first.
Finding your nice!
Waldrop explores why, in a competitive world, organisms co-operate with one another. Why do they leave themselves open to allies who could easily turn on them?
The essence of the problem is neatly captured by a scenario known as the prisoner’s dilemma, originally developed in game theory – the study of mathematical models of strategic interaction between rational decision-makers. The prisoner’s dilemma is a game that shows why two completely rational individuals might not cooperate, even if it appears to be in their best interests to do so. Waldrop explains:
Two prisoners are being held in separate rooms, and the police are interrogating both of them about a crime they committed jointly. Each prisoner has a choice: inform on his partner (defect) or remain silent (co-operate – with his partner, not the police). The prisoners know that if both of them remain silent, then both of them will be released. The police are aware of this. So they offer the prisoners a little incentive: if one defects and informs on his partner, then that prisoner will be granted immunity and go free – and will get a reward. The partner, meanwhile, will be sentenced to the maximum – and will be given a fine to cover the first prisoner’s reward. If both prisoners rat on each other, then both serve the maximum and neither gets a reward.
So what do the prisoners do – cooperate or defect? On the face of it, they ought to cooperate with each other and keep their mouths shut, because that way they get the best results: freedom. But then they get to thinking. Prisoner A realises that there is no way he can trust his partner not to turn state’s evidence and walk off with a fat reward. The temptation is too great. He also realises that his partner is thinking exactly the same thing about him. So prisoner A concludes that the only sane response is to defect and tell the police everything, because if his partner is crazy enough to keep his mouth shut, then prisoner A will be the one walking out with the cash. And if his partner does the logical thing and talks – well, since prisoner A has to serve time anyway, at least he won’t be paying a fine on top of it. The upshot is that both prisoners are led by ruthless logic to the least desirable outcome: jail.
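The "ruthless logic" of the one-shot game can be sketched in a few lines of Python. The payoff numbers below are conventional illustrative values, not figures from the book; what matters is only their ordering, which mirrors the story: informing on a silent partner beats mutual silence, which beats mutual betrayal, which beats being the lone silent one.

```python
# One-shot prisoner's dilemma with conventional illustrative payoffs.
# Each entry maps (my_move, partner_move) -> my payoff.
# "C" = co-operate (stay silent), "D" = defect (inform).
PAYOFF = {
    ("C", "C"): 3,  # both silent: both released
    ("C", "D"): 0,  # I stay silent, partner informs: maximum sentence plus the fine
    ("D", "C"): 5,  # I inform, partner silent: immunity plus the reward
    ("D", "D"): 1,  # both inform: both serve the maximum
}

def best_response(partner_move):
    """Return the move that maximises my payoff against a fixed partner move."""
    return max(["C", "D"], key=lambda my_move: PAYOFF[(my_move, partner_move)])

# Whichever move the partner makes, defecting pays more, so two rational
# players are driven to mutual defection -- the least desirable joint outcome.
print(best_response("C"))  # D
print(best_response("D"))  # D
```

Defection is what game theorists call a dominant strategy here: it is the best reply to either of the partner's moves, even though both players would prefer the mutual co-operation outcome.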
In the real world the dilemma of trust and cooperation is rarely so stark. Negotiations, personal ties, enforceable contracts, and any number of other factors affect the players’ decisions. Nonetheless, the prisoner’s dilemma does capture a depressing amount of truth about mistrust and the need to guard against betrayal. Consider the Cold War, when the two superpowers locked themselves into a 40-year arms race that was ultimately to the benefit of neither, or the seemingly endless Arab-Israeli deadlock, or the eternal temptation for nations to erect protectionist trade barriers. Or in the natural world, consider that an overly trusting creature might very well get eaten. So once again: why should any organism ever dare to co-operate with another?
The answer came in the 1970s with a computer tournament organized by Robert Axelrod, a political scientist with a long-standing interest in the cooperation question. Axelrod’s idea for the tournament was straightforward: anyone who liked could enter a computer program that would take the role of one of the prisoners. The programs would then be paired up in various combinations, and they would play the prisoner’s dilemma game with each other by choosing whether to co-operate or defect. But there was a wrinkle: instead of playing the game just once, each pair of programs would play it over and over again for 200 moves. This would be what game theorists call an iterated prisoner’s dilemma, arguably a much more realistic way of representing the kind of extended relationships we usually get into with each other. Moreover, this repetition would allow the programs to base their co-operate / defect decisions on what the other program had done in previous moves. If the programs meet only once, then defection is obviously the only rational choice. But when they meet many times, each individual program will develop a history and a reputation. And it was far from obvious how the opposing program should deal with that. Indeed, that was one of the main things Axelrod wanted to learn from the tournament: what strategy will produce the highest payoff over the long run? Should a program always turn the other cheek and cooperate regardless of what the other player does? Should it always be a rat and defect? Should it respond to the other player’s moves in some more complex manner? And if so, what?
The 14 programs submitted in the first round of the tournament embodied a variety of complex strategies. But much to the astonishment of Axelrod and everyone else, the crown went to the simplest strategy of all: TIT FOR TAT. Submitted by psychologist Anatol Rapoport of the University of Toronto, TIT FOR TAT would start out by co-operating on the first move, and from then on would do exactly what the other program had done on the move before. That is, the TIT FOR TAT strategy incorporated the essence of the carrot and the stick. It was “nice” in the sense that it would never defect first. It was “forgiving” in the sense that it would reward good behaviour by co-operating the next time. And yet it was “tough” in the sense that it would punish un-cooperative behaviour by defecting the next time. Moreover, it was “clear” in the sense that opposing programs could easily figure out what they were dealing with.
TIT FOR TAT
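The strategy Waldrop describes is simple enough to sketch in full. Below is a minimal illustration of TIT FOR TAT in a 200-move iterated match, using the same conventional illustrative payoffs as before (the function names and payoff numbers are mine, not Axelrod's):

```python
# Iterated prisoner's dilemma with conventional illustrative payoffs.
# Each entry maps (move_a, move_b) -> (payoff_a, payoff_b).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    """Co-operate on the first move, then copy the opponent's previous move."""
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    """The 'rat' strategy: defect on every move."""
    return "D"

def play_match(strategy_a, strategy_b, rounds=200):
    """Play an iterated match and return the cumulative scores (a, b)."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Against a pure defector, TIT FOR TAT is exploited only on the first move,
# then punishes defection for the remaining 199 rounds.
print(play_match(tit_for_tat, always_defect))  # (199, 204)
# Two TIT FOR TAT players co-operate on every one of the 200 moves.
print(play_match(tit_for_tat, tit_for_tat))    # (600, 600)
```

The two sample matches show the nice / forgiving / tough / clear combination at work: TIT FOR TAT never gains more than its opponent in any single match, yet it avoids both prolonged exploitation and prolonged feuds, which is what lets it accumulate the highest total across a whole tournament of pairings.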
Axelrod was conscious that, because only a handful of programs were entered in the tournament, there was always the possibility that TIT FOR TAT’s success was a fluke. But maybe not. Of the 14 programs submitted, eight were “nice” and would never defect first. And every one of them easily outperformed the six not-nice rules.
To settle the question, Axelrod held a second round of the tournament, specifically inviting people to try to knock TIT FOR TAT off its throne. Sixty-two entrants tried, and TIT FOR TAT won again. The conclusion was almost inescapable. Nice guys* – or more precisely, nice, forgiving, tough, and clear guys – can indeed finish first.
* His book was first published in 1993. I’m sure he means girls as well.