Prisoner's dilemma

Will the two prisoners cooperate to minimise total loss of liberty or will one of them, trusting the other to cooperate, betray him so as to go free?

The prisoner's dilemma is a type of non-zero-sum game (game in the sense of Game Theory). In this game, as in many others, it is assumed that each individual player ("prisoner") is trying to maximise his own advantage, without concern for the well-being of the other player.

The Nash equilibrium for this type of game does not lead to Pareto optima (jointly optimal solutions). In equilibrium, each prisoner chooses to defect even though the joint payoff would be higher if both cooperated. Unfortunately for the prisoners, each has an individual incentive to cheat even after promising to cooperate. This is the heart of the dilemma.

In the iterated prisoner's dilemma the game is played repeatedly. Thus each player has an opportunity to "punish" the other player for previous non-cooperative play. Cooperation may then arise as an equilibrium outcome. The incentive to cheat may then be overcome by the threat of punishment, leading to the possibility of a superior, cooperative outcome.

The classical prisoner's dilemma

The classical prisoner's dilemma (PD) is as follows:

Two suspects, A and B, are arrested by the police. The police have insufficient evidence for a conviction and, having separated both prisoners, visit each of them and offer the same deal: if one turns King's evidence against the other and the other remains silent, the silent accomplice receives the full ten-year sentence and the betrayer goes free. If both stay silent, the police can only give both prisoners six months for a minor charge. If both betray each other, they each receive a two-year sentence.

It can be summarised thus:

                             Prisoner A stays silent            Prisoner A betrays
  Prisoner B stays silent    Both serve six months              B serves ten years; A goes free
  Prisoner B betrays         A serves ten years; B goes free    Both serve two years

Assume both prisoners are completely selfish and their only goal is to minimise their own jail terms. Each prisoner has two options: to cooperate with his accomplice and stay quiet, or to betray his accomplice and give evidence. The outcome of each choice depends on the choice of the accomplice. However, neither prisoner knows the choice of his accomplice. Even if they were able to talk to each other, neither could be sure that they could trust the other.

Now, let us assume our protagonist prisoner is rationally working out his best move. If his partner stays quiet, his best move is to betray, as he then walks free instead of receiving the minor sentence. If his partner betrays, his best move is still to betray, since he then receives a lighter sentence than he would by staying silent. At the same time, the other prisoner, thinking rationally, would arrive at the same conclusion and will therefore also betray. Thus, in a game of PD played only once by two rational players, both will betray each other. Betrayal is their only rational choice.

However, if only they could arrive at a conspiracy - if only each could be sure that the other would not betray - they would both stay silent and achieve a better result. Such a conspiracy cannot hold, though, as it is vulnerable to the treachery of selfish individuals, which we assumed our prisoners to be. Therein lies the true beauty and the maddening paradox of the game.

If only they could both cooperate, they would both be better off; yet from a game theorist's point of view, their best play is not to cooperate but to betray. This treacherous quality of the deceptively simple game has inspired libraries full of books, made it one of the most popular examples in game theory, and even led some to call for studies of the game to be banned.

Reasoning from the perspective of the optimal interest of the group (of two prisoners), the correct outcome would be for both prisoners to cooperate with each other, as this would reduce the total jail time served by the group to one year. Any other decision would be worse for the two prisoners considered together. By each following his selfish interests, however, the two prisoners each receive a lengthy sentence.

The Generalised Form

We can expose the skeleton of the game by stripping away the prisoners' story. The generalised form of the game has been used frequently in experimental economics; the rules are:

There are two players and a banker. Each player holds a set of two cards, one printed with the word "Cooperate" and the other with "Defect" (the standard terminology for the game). Each player puts one card face-down in front of the banker. By laying them face down, the possibility of a player knowing the other player's move in advance is eliminated (we are ignoring the possibility that players may have visible "tells" that betray their move - not that it matters in the single-deal version[1]). At the end of the turn, the banker turns over both cards and gives out the payments accordingly.

If player 1 defects and player 2 cooperates, player 1 gets the Temptation to Defect payoff of 5 points while player 2 receives the Sucker's payoff of 0 points (once again, these are standard terms). If both cooperate, they get the Reward for Mutual Cooperation payoff of 3 points each, while if they both defect they get the Punishment for Mutual Defection payoff of 1 point each. The payoff matrix is given below.

Canonical PD payoff matrix
                Cooperate    Defect
  Cooperate     3, 3         0, 5
  Defect        5, 0         1, 1

In "win-win" terminology the table would look like this:

                Cooperate             Defect
  Cooperate     win-win               lose much-win much
  Defect        win much-lose much    lose-lose

These point assignments are given arbitrarily for illustration, and it is possible to generalise them. Let T stand for Temptation to Defect, R for Reward for Mutual Cooperation, P for Punishment for Mutual Defection and S for the Sucker's payoff. Then the following inequality must hold:

T > R > P > S

If the game is iterated (played more than once in a row), a second inequality must also hold:

T + S < 2R
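
The second inequality ensures that two players cannot do better by taking it in turns to exploit each other than by cooperating throughout. Both conditions are easy to check for the canonical values above; a minimal sketch in Python:

    # Canonical payoff values from the matrix above.
    T, R, P, S = 5, 3, 1, 0   # Temptation, Reward, Punishment, Sucker's payoff

    assert T > R > P > S      # defines a prisoner's dilemma
    assert T + S < 2 * R      # alternating T and S must not beat mutual R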

These rules were established by cognitive scientist Douglas Hofstadter and form the formal canonical description of a typical game of Prisoner's Dilemma.

A similar but different game

The cognitive scientist Douglas Hofstadter[2] once suggested that people often find problems such as the PD easier to understand when illustrated in the form of a simple game or trade-off. One of several examples he used was the "closed bag exchange":

Two people meet and exchange closed bags, with the understanding that one of them contains money and the other contains a purchase. Either player can choose to honour the deal by putting into his bag what he agreed, or he can defect by handing over an empty bag.

In this game, defection is always the best course, implying that rational agents will never play, and that "closed bag exchange" will be a missing market due to adverse selection.

Real-life examples

These particular examples, involving prisoners and bag switching and so forth, may seem contrived, but there are in fact many examples in human interaction, as well as interactions in nature, that have the same payoff matrix. The prisoner's dilemma is therefore of interest to the social sciences such as economics, politics and sociology, as well as to the biological sciences such as ethology and evolutionary biology. Many natural processes have been abstracted into models in which living beings are engaged in endless games of the Prisoner's Dilemma. This wide applicability gives the game its substantial importance.

In political science, for instance, the PD scenario is often used to illustrate the problem of two states engaged in an arms race. Both will reason that they have two options, either to increase military expenditure or to make an agreement to reduce weapons. Neither state can be certain that the other one will keep to such an agreement; therefore, they both incline towards military expansion. The paradox is that both states are acting "rationally", but producing an apparently "irrational" result.

Another interesting example concerns a well-known concept in cycling races, for instance in the Tour de France. Consider two cyclists halfway in a race, with the peloton (larger group) at great distance. The two cyclists often work together (mutual cooperation) by sharing the tough load of the front position, where there is no shelter from the wind. If neither of the cyclists makes an effort to stay ahead, the peloton will soon catch up (mutual defection). An often-seen scenario is one cyclist doing the hard work alone (cooperating), keeping the two ahead of the peloton. In the end, this will likely lead to a victory for the second cyclist (defecting) who has an easy ride in the first cyclist's slipstream.

William Poundstone, in a book about the PD (see References below), describes a situation in New Zealand where newspaper boxes are left unlocked. It is possible for someone to take a paper without paying (defecting), but very few do, recognising the resultant harm if everybody stole newspapers (mutual defection). Since the pure PD is simultaneous for all players (with no way for any player's action to have an effect on another's strategy), this widespread line of reasoning is called "magical thinking".[3]

Lastly, the theoretical conclusion of PD is one reason why, in many countries, plea bargaining is forbidden. Often, precisely the PD scenario applies: it is in the interest of both suspects to confess and testify against the other prisoner/suspect, even if each is innocent of the alleged crime. Arguably, the worst case is when only one party is guilty — here, the innocent one is unlikely to confess, while the guilty one is likely to confess and testify against the innocent.

Many real-life dilemmas involve multiple players. Although metaphorical, Hardin's tragedy of the commons may be viewed as an example of a multi-player generalisation of the PD: Each villager makes a choice for personal gain or restraint. The collective reward for unanimous (or even frequent) defection is very low payoffs (representing the destruction of the "commons"). However, such multi-player PDs are not formal as they can always be decomposed into a set of classical two-player games.

The iterated prisoner's dilemma

In his book The Evolution of Cooperation (1984), Robert Axelrod explored an extension to the classical PD scenario, which he called the iterated prisoner's dilemma (IPD). In this, participants have to choose their mutual strategy again and again, and have memory of their previous encounters. Axelrod invited academic colleagues all over the world to devise computer strategies to compete in an IPD tournament. The programs that were entered varied widely in algorithmic complexity; initial hostility; capacity for forgiveness; and so forth.

Axelrod discovered that when these encounters were repeated over a long period of time with many players, each with different strategies, "greedy" strategies tended to do very poorly in the long run while more "altruistic" strategies did better, as judged purely by self-interest. He used this to show a possible mechanism to explain what had previously been a difficult hole in Darwinian theory: how can seemingly altruistic behaviour evolve from the purely selfish mechanisms of natural selection?

The best deterministic strategy was found to be "Tit for Tat", which Anatol Rapoport developed and entered into the tournament. It was the simplest of any program entered, containing only four lines of BASIC, and won the contest. The strategy is simply to cooperate on the first iteration of the game; after that, do what your opponent did on the previous move. A slightly better strategy is "Tit for Tat with forgiveness". When your opponent defects, on the next move you sometimes cooperate anyway with small probability (around 1%-5%). This allows for occasional recovery from getting trapped in a cycle of defections. The exact probability depends on the line-up of opponents. "Tit for Tat with forgiveness" is best when miscommunication is introduced to the game. That means that sometimes your move is incorrectly reported to your opponent: you cooperate but your opponent hears that you defected.
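
To make these strategies concrete, here is a minimal sketch in Python; the function names and the interface (each strategy sees only the opponent's previous move) are illustrative assumptions, not Rapoport's original program:

    import random

    C, D = "cooperate", "defect"

    def tit_for_tat(opponent_last):
        # Cooperate on the first move; afterwards copy the opponent's
        # previous move.
        return C if opponent_last is None else opponent_last

    def tit_for_tat_with_forgiveness(opponent_last, p_forgive=0.05):
        # As above, but after an opponent defection occasionally cooperate
        # anyway, which can break a cycle of mutual retaliation started by
        # a single defection or a miscommunicated move.
        if opponent_last == D and random.random() < p_forgive:
            return C
        return tit_for_tat(opponent_last)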

By analysing the top-scoring strategies, Axelrod stated several conditions necessary for a strategy to be successful.

Nice
The most important condition is that the strategy must be "nice", that is, it will not defect before its opponent does. Almost all of the top-scoring strategies were nice. Therefore, a purely selfish strategy will never hit its opponent first, for purely selfish reasons.
Retaliating
However, Axelrod contended, the successful strategy must not be unconditionally nice: it must retaliate. An example of a non-retaliating strategy is Always Cooperate. This is a very bad choice, as "nasty" strategies will ruthlessly exploit such softies.
Forgiving
Another quality of successful strategies is that they must be forgiving. Though they will retaliate, they will fall back to cooperating once the opponent stops defecting. This stops long runs of revenge and counter-revenge, maximising points.
Non-envious
The last quality is being non-envious, that is, not striving to score more than the opponent (something that is in any case impossible for a "nice" strategy, which can never score more than its opponent).

Therefore, Axelrod reached the Utopian-sounding conclusion that selfish individuals, for their own selfish good, will tend to be nice, forgiving and non-envious. One of the most important conclusions of Axelrod's study of IPDs is that nice guys can finish first.

Reconsider the arms-race model given in the classical PD section above: It was concluded that the only rational strategy was to build up the military, even though both nations would rather spend their GDP on butter than guns. Interestingly, attempts to show that rival states actually compete in this way (by regressing "high" and "low" military spending between periods under iterated PD assumptions) often show that the posited arms race is not occurring as expected. (For example Greek and Turkish military spending does not appear to follow a tit-for-tat iterated-PD arms-race, but is more likely driven by domestic politics.) This may be an example of rational behaviour differing between the one-off and iterated forms of the game.

The optimal (points-maximising) strategy for the one-time PD game is simply defection; as explained above, this is true whatever the composition of opponents may be. However, in the iterated-PD game the optimal strategy depends upon the strategies of likely opponents, and how they will react to defections and cooperation. For example, consider a population where everyone defects every time, except for a single individual following the Tit-for-Tat strategy. That individual is at a slight disadvantage because of the loss on the first turn. In such a population, the optimal strategy for that individual is to defect every time. In a population with a certain percentage of always-defectors and the rest being Tit-for-Tat players, the optimal strategy for an individual depends on the percentage, and on the length of the game.
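
This dependence can be made concrete with a little arithmetic over the canonical payoffs. The sketch below is illustrative only: q and n are assumed names for the defector fraction and game length, and each individual is assumed to play one n-round game against a randomly drawn member of the population:

    def tft_score(q, n, T=5, R=3, P=1, S=0):
        # Tit-for-Tat vs an always-defector (probability q): suckered once,
        # then mutual defection. Vs another Tit-for-Tat: cooperation throughout.
        return q * (S + (n - 1) * P) + (1 - q) * n * R

    def alld_score(q, n, T=5, R=3, P=1, S=0):
        # Always-defect vs always-defect: mutual punishment every round.
        # Vs Tit-for-Tat: the temptation payoff once, then mutual defection.
        return q * n * P + (1 - q) * (T + (n - 1) * P)

    # With half the population defecting, Tit-for-Tat wins long games
    # but loses very short ones:
    print(tft_score(0.5, 10), alld_score(0.5, 10))  # 19.5 vs 12.0
    print(tft_score(0.5, 2), alld_score(0.5, 2))    # 3.5 vs 4.0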

Deriving the optimal strategy is generally done in two ways:

  1. Bayesian Nash equilibrium: if the statistical distribution of opposing strategies can be determined (e.g. 50% tit-for-tat, 50% always cooperate), an optimal counter-strategy can be derived mathematically.[4]
  2. Monte Carlo simulations of populations have been made, where individuals with low scores die off and those with high scores reproduce (a genetic algorithm for finding an optimal strategy); a sketch of this approach follows the list. The mix of algorithms in the final population generally depends on the mix in the initial population.
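
A minimal Monte Carlo sketch along the lines of the second approach; the population mix, number of rounds and proportional-selection rule are arbitrary illustrative choices:

    import random

    C, D = "C", "D"
    PAYOFF = {(C, C): (3, 3), (C, D): (0, 5), (D, C): (5, 0), (D, D): (1, 1)}

    def tft(opponent_last):
        return C if opponent_last is None else opponent_last

    def alld(opponent_last):
        return D

    def match(strat_a, strat_b, rounds=50):
        # Play one iterated game; return both players' total scores.
        last_a = last_b = None
        score_a = score_b = 0
        for _ in range(rounds):
            a, b = strat_a(last_b), strat_b(last_a)
            pay_a, pay_b = PAYOFF[(a, b)]
            score_a += pay_a
            score_b += pay_b
            last_a, last_b = a, b
        return score_a, score_b

    def generation(population):
        # Round-robin tournament, then reproduction in proportion to score
        # (a crude stand-in for "low scorers die off, high scorers reproduce").
        scores = [0.0] * len(population)
        for i in range(len(population)):
            for j in range(i + 1, len(population)):
                s_i, s_j = match(population[i], population[j])
                scores[i] += s_i
                scores[j] += s_j
        return random.choices(population, weights=scores, k=len(population))

    population = [alld] * 15 + [tft] * 5
    for _ in range(20):
        population = generation(population)
    print(sum(s is tft for s in population), "of", len(population), "are Tit-for-Tat")

In this particular mix the Tit-for-Tat players usually take over the population, but, as noted above, the outcome depends on the initial mix.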

Although Tit-for-Tat was long considered to be the most solid basic strategy, a team from Southampton University in England introduced a new strategy at the 20th-anniversary Iterated Prisoner's Dilemma competition, which proved to be more successful than Tit-for-Tat. This strategy relied on cooperation between programs to achieve the highest number of points for a single program. The University submitted 60 programs to the competition, which were designed to recognise each other through a series of five to ten moves at the start. Once this recognition was made, one program would always cooperate and the other would always defect, assuring the maximum number of points for the defector. If the program realised that it was playing a non-Southampton player, it would continuously defect in an attempt to minimise the score of the competing program. As a result,[5] this strategy ended up taking the top three positions in the competition, as well as a number of positions towards the bottom. Although this strategy is notable in that it proved more effective than Tit-for-Tat, it takes advantage of the fact that multiple entries were allowed in this particular competition. In a competition where one has control of only a single player, Tit-for-Tat is almost certainly a better strategy.

If an iterated PD is going to be played exactly N times, for some known constant N, then the Nash equilibrium is to defect every time. This is easily proved by induction: you might as well defect on the last turn, since your opponent will not have a chance to punish you. Therefore, you will both defect on the last turn. Then, you might as well defect on the second-to-last turn, since your opponent will defect on the last no matter what you do; and so on. For cooperation to remain appealing, then, the future must be indeterminate for both players. One solution is to make the total number of turns N random. The shadow of the future must be indeterminably long.

Another odd case is the "play forever" prisoner's dilemma: the game is repeated infinitely many times, and a player's score is the average payoff per round (suitably computed).

The prisoner's dilemma game is fundamental to certain theories of human cooperation and trust. On the assumption that the PD can model transactions between two people requiring trust, cooperative behaviour in populations may be modelled by a multi-player, iterated version of the game. It has, consequently, fascinated many scholars over the years. In 1975, Grofman and Pool estimated the count of scholarly articles devoted to it at over 2,000.

Learning psychology and game theory

Where game players can learn to estimate the likelihood of other players defecting, their own behaviour is influenced by their experience of the others' behaviour. Simple statistics show that inexperienced players are more likely to have had, overall, atypically good or bad interactions with other players. If they act on the basis of these experiences (by defecting or cooperating more than they otherwise would), they are likely to suffer in future transactions. As more experience is accrued, a truer impression of the likelihood of defection is gained and game playing becomes more successful. The early transactions experienced by immature players are likely to have a greater effect on their future play than similar transactions would have on mature players. This principle goes part way towards explaining why the formative experiences of young people are so influential and why they are particularly vulnerable to bullying, sometimes ending up as bullies themselves.

The likelihood of defection in a population may be reduced by the experience of cooperation in earlier games allowing trust to build up.[6] Hence self-sacrificing behaviour may, in some instances, strengthen the moral fibre of a group. If the group is small, the positive behaviour is more likely to feed back in a mutually affirming way, encouraging individuals within that group to continue to cooperate. This is allied to the twin dilemma of encouraging those people whom you would aid to indulge in behaviour that might put them at risk. Such processes are major concerns within the study of reciprocal altruism, group selection, kin selection and moral philosophy.

Variants

There are also some variants of the game, with subtle but important differences in the payoff matrices, which are listed below.

Chicken

Another important non-zero-sum game type is called "Chicken", named after the car racing game: two cars drive towards each other for an apparent head-on collision, and the first to swerve out of the way is the "chicken". Both players can swerve to avoid the crash (cooperate) or keep going (defect). In Chicken, if your opponent cooperates, you are better off defecting - this is your best possible outcome. If your opponent defects, you are better off cooperating. Mutual defection is the worst possible outcome (hence an unstable equilibrium), whereas in the Prisoner's Dilemma the worst possible outcome is cooperating while the other person defects (so both defecting is a stable equilibrium). In both games, "both cooperate" is an unstable equilibrium. The sketch after the payoff list below contrasts the best responses in the two games.

A typical payoff matrix would read:

  • If both players cooperate, each gets +5.
  • If one cooperates and the other defects, the first gets +1 and the other gets +10.
  • If both defect, each gets -20.
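
The structural difference from the Prisoner's Dilemma can be checked mechanically. In the minimal sketch below (the dictionaries and helper name are illustrative; the PD entries are the jail terms of the classical story written as negative years), defection dominates in the PD, while in Chicken the best response flips with the opponent's move:

    # Row player's payoff, indexed by (my_move, opponent_move).
    PD      = {("C", "C"): -0.5, ("C", "D"): -10, ("D", "C"): 0,  ("D", "D"): -2}
    CHICKEN = {("C", "C"): 5,    ("C", "D"): 1,   ("D", "C"): 10, ("D", "D"): -20}

    def best_response(payoffs, opponent_move):
        return max(("C", "D"), key=lambda mine: payoffs[(mine, opponent_move)])

    for name, game in (("PD", PD), ("Chicken", CHICKEN)):
        print(name, [best_response(game, their) for their in ("C", "D")])
    # PD      -> ['D', 'D']: defect whatever the opponent does
    # Chicken -> ['D', 'C']: defect against a cooperator, swerve against a defector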

Another example often given is that of two farmers who use the same irrigation system for their fields. The system can be adequately maintained by one person, but both farmers gain equal benefit from it. If one farmer does not do his share of maintenance, it is still in the other farmer's interests to do so, because he will be benefiting whatever the other one does. Therefore, if one farmer can establish himself as the dominant defector - i.e., if the habit becomes ingrained that the other one does all the maintenance work - he will be likely to continue to do so.

Yet another example appears among animals battling for mates or territory; read "fight" for "defect", and "submit" for "cooperate". As before, the penalty for mutual defection (mutual injury) is worse than the "sucker's payoff" for showing submission, and the result is the same: a stable dominance hierarchy. However, a new feature of this example is that the game matrix isn't symmetrical: if one animal is slightly stronger than another, that one has less to lose from defecting/fighting. In the resulting dominance hierarchy, the stronger will almost certainly end up consistently defecting, and the weaker consistently cooperating, until the perception of relative strength changes and the lower-ranked animal is ready to risk a fight. Dominant animals may occasionally pick a fight to "remind" the submissive ones of their strength and ward off such challenges, but without doing (or risking) serious damage.

Assurance Game

An Assurance game has a similar structure to the prisoner's dilemma, except that the rewards for mutual co-operation are higher than those for defection. A typical pay-off matrix would read:

  • If both players cooperate, each gets +10.
  • If you cooperate and the other player defects, you get +1 and the other player gets +5.
  • If both defect, each gets +3.

The Assurance Game is potentially very stable because it always gives the highest rewards to players who establish a habit of mutual co-operation. However, there is still the problem that the players might not realise that it is in their interests to co-operate. They might, for example, mistakenly believe that they are playing a Prisoner's Dilemma or Chicken game, and arrange their strategies accordingly.

Friend or foe

Friend or Foe is a game show currently airing on the Game Show Network. It is an example of the prisoner's dilemma game tested on real people, but in an artificial setting. On the game show, three pairs of people compete. As each pair is eliminated, they play a game of Prisoner's Dilemma to determine how their winnings are split. If they both cooperate ("Friend"), they share the winnings 50-50. If one cooperates and the other defects ("Foe"), the defector gets all the winnings and the cooperator gets nothing. If both defect, both leave with nothing. Notice that the payoff matrix is slightly different from the standard one given above, as the payouts for the "both defect" and the "I cooperate and opponent defects" cases are identical. This makes "both defect" a neutral equilibrium, compared with being a stable equilibrium in the standard prisoner's dilemma (the sketch after the payoff matrix below makes this explicit). If you know your opponent is going to vote "Foe", then your choice does not affect your winnings. In a certain sense, "Friend or Foe" has a payoff model between "Prisoner's Dilemma" and "Chicken".

The payoff matrix is

  • If both players cooperate, each gets +1.
  • If both defect, each gets 0.
  • If you cooperate and the other person defects, you get 0 and he gets +2.
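
Applying the same best-response reasoning to these values shows the neutral equilibrium directly (a minimal sketch; the payoffs are those listed above):

    FOF = {("C", "C"): 1, ("C", "D"): 0, ("D", "C"): 2, ("D", "D"): 0}

    for their in ("C", "D"):
        print(their, {mine: FOF[(mine, their)] for mine in ("C", "D")})
    # vs "C": {'C': 1, 'D': 2} - defecting is strictly better against a cooperator
    # vs "D": {'C': 0, 'D': 0} - indifferent against a defector, so "both defect"
    #                            is only a neutral (weak) equilibrium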

Friend or Foe would be useful for someone who wanted to do a real-life analysis of the prisoner's dilemma. Notice that participants only play once, so all the issues involving repeated play are absent and a "tit for tat" strategy cannot develop.

In Friend or Foe, each player is allowed to make a statement to convince the other of his friendliness before both make the secret decision to cooperate or defect. One possible way to 'beat the system' would be for a player to tell his rival, "I am going to choose foe. If you trust me to split the winnings with you later, choose friend. Otherwise, if you choose foe, we both walk away with nothing." A greedier version of this would be "I am going to choose foe. I am going to give you X%, and I'll take (100-X)% of the total prize package. So, take it or leave it, we both get something or we both get nothing." (As in the Ultimatum game.) Now, the trick is to minimise X such that the other contestant will still choose friend. Basically, you have to know the threshold at which the utility he gets from watching you get nothing exceeds the utility he gets from the money he stands to win if he just went along.

This approach has not yet been tried in the game; it is possible that the judges would not allow it, and that even if they did, inequity aversion would produce a lower expected payoff from using the tactic. (Ultimatum games in which this approach was attempted have led to rejections of highly unequal offers - in some cases up to two weeks' wages have been turned down in preference to both players receiving nothing.)

Business decisions

An occurrence of the prisoner's dilemma in real life can be found in business. Two competing firms must decide how much to spend on advertising. The effectiveness of Firm A's advertising is partially determined by the advertising conducted by Firm B. Likewise, the profit derived from advertising for Firm B is affected by the advertising conducted by Firm A. If both Firm A and Firm B choose to advertise during a given period, the advertising cancels out, receipts remain constant, and expenses increase due to the cost of advertising. Both firms would benefit from a reduction in advertising. However, should Firm B choose not to advertise, Firm A could benefit greatly by advertising.

References

  • Axelrod, Robert and Hamilton, William D. (1981). "The Evolution of Cooperation". Science, 211:1390–1396.
  • Axelrod, Robert (1984). The Evolution of Cooperation. Basic Books.
  • Axelrod, Robert (1997). The Complexity of Cooperation. Princeton University Press. ISBN 0691015678.
  • Grofman and Pool (1975). "Bayesian Models for Iterated Prisoner's Dilemma Games". General Systems 20:185–94.
  • Poundstone, William (1992). Prisoner's Dilemma: John von Neumann, Game Theory, and the Puzzle of the Bomb. Doubleday. ISBN 0385415672. A wide-ranging popular introduction, as the title indicates.
  • Rapoport, Anatol and Chammah, Albert M. (1965). Prisoner's Dilemma. University of Michigan Press. An account of many experiments in which the psychological game Prisoner's Dilemma was played.
  • Verhoeff, Tom (1998). "The Trader's Dilemma: A Continuous Version of the Prisoner's Dilemma". Computing Science Notes 93/02, Faculty of Mathematics and Computing Science, Technische Universiteit Eindhoven, The Netherlands.
  • New Tack Wins Prisoner's Dilemma (http://www.wired.com/news/culture/0,1284,65317,00.html) (from Wired.com)

External links

  • A good introduction (http://www.cdam.lse.ac.uk/Reports/Files/cdam-2001-09.pdf) to game theory with a terse and accurate treatment of the prisoner's dilemma complete with a glossary of defined terms.
  • Play the iterated prisoner's dilemma online (http://www.gametheory.net/Web/PDilemma/)
  • View research centred on the prisoner's dilemma online (http://www.ic.sunysb.edu/Stu/wbraynen/ptft/)
  • See this critique (http://www.mises.org/fullstory.aspx?Id=1404) of economists, including those of the "contractarian" school, who have used certain game-theoretic results (e.g. the Prisoner's Dilemma) to justify state intervention to "improve" upon the outcome of autonomous individuals. After all, if individuals cannot manage to cooperate on their own, they may need an outside agent to compel an outcome best for everyone.
  • William Thomas (http://www.objectivistcenter.org/objectivism/q-and-a-answer.asp?QuestionID=82) argues that the "prisoner's dilemma" is not the right game to model real life interaction on most issues, but that the iterated prisoner's dilemma is more commonly encountered and realistic.

Notes

[1] The reason that visual "tells" are presumed not to matter here is that anyone able to spot them would understand that the dominant strategy is always defection and would consistently play it. Although it is possible that "telegraphing" cooperation is still a factor, the game is ideally played anonymously, with players separated by screens.

[2] Hofstadter, Douglas (1985). Metamagical Themas - see Ch. 29, "The Prisoner's Dilemma Computer Tournaments and the Evolution of Cooperation".

[3] As well as being an explanation for the lack of petty theft, magical thinking has been used to explain such things as voluntary voting (where a non-voter is considered a free rider). Potentially, it might be used to explain Wikipedia contributions: text may be added under the assumption that if contributions are not made, then similar people will also fail to contribute (i.e. arguing from effect to cause). Alternatively, the explanation could depend on expected future actions (and not require a magical connection). Modelling future interactions requires the addition of the temporal dimension, as given in the Iterated prisoner's dilemma section.

[4] For example, see the 2003 study "Bayesian Nash equilibrium; a statistical test of the hypothesis" (http://econ.hevra.haifa.ac.il/~mbengad/seminars/whole1.pdf) for discussion of the concept and whether it can apply in real economic or strategic situations (from Tel Aviv University).

[5] The 2004 Prisoner's Dilemma Tournament Results (http://www.prisoners-dilemma.com/results/cec04/ipd_cec04_full_run.html) show Gopal Ramchurn's University of Southampton strategies in the first three places, despite having fewer wins and many more losses than the GRIM strategy. (Note that in a PD tournament, the aim of the game is not to "win" matches - that can easily be achieved by frequent defection.) It should also be pointed out that even without the implicit collusion between software strategies (exploited by the Southampton team), tit-for-tat is not always the absolute winner of any given tournament; it would be more precise to say that its long-run results over a series of tournaments outperform its rivals. (In any one event a given strategy can be slightly better adjusted to the competition than tit-for-tat, but tit-for-tat is more robust.) The same applies for the tit-for-tat-with-forgiveness variant, and other optimal strategies: on any given day they might not "win" against a specific mix of counter-strategies.

[6] This argument for the development of cooperation through trust is given in The Wisdom of Crowds, where it is argued that long-distance capitalism was able to form around a nucleus of Quakers, who always dealt honourably with their business partners (rather than defecting and reneging on promises - a phenomenon that had discouraged earlier long-term unenforceable overseas contracts). It is argued that dealings with reliable merchants allowed the meme for cooperation to spread to other traders, who spread it further until a high degree of cooperation became a profitable strategy in general commerce.
