The prisoner's dilemma is a problem in game theory first described by the Canadian-born Princeton mathematician Albert Tucker (1905–1995) in 1950, while addressing an audience of psychologists at Stanford University, where he was a visiting professor. It runs along these lines: Al and Bob have been arrested for holding up the Anyapolis State Bank and have been put in separate cells. Each cares a lot more about his personal freedom than he does about his accomplice's welfare. A clever prosecutor makes the following offer to each. "You may choose to confess or remain silent. If you confess and your accomplice remains silent, I'll drop all charges against you and use your testimony to ensure that your accomplice does serious time. Likewise, if your accomplice confesses while you remain silent, he'll go free while you do the time. If you both confess I get two convictions, but I'll see to it that you both get early parole. If you both remain silent, I'll have to settle for token sentences on firearms possession charges. If you wish to confess, you must leave a note with the jailer before I come back tomorrow morning." The "dilemma" faced by the prisoners is that, whatever the other does, each is better off confessing than remaining silent. But the outcome obtained when both confess is worse for each than the outcome they would have obtained had both remained silent!
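The logic of the dilemma can be checked mechanically with a small payoff table. The sketch below uses illustrative sentence lengths of my own choosing (the story itself fixes no numbers), but any payoffs preserving the same ordering give the same result: confessing is each prisoner's best response regardless of what the other does, yet mutual confession leaves both worse off than mutual silence.

```python
# Hypothetical sentences in years (lower is better); the specific
# numbers are assumptions chosen to match the prosecutor's offer.
# Key: (my_move, accomplice_move) -> my sentence
SENTENCE = {
    ("confess", "silent"):  0,   # charges dropped, accomplice testifies against
    ("silent",  "confess"): 10,  # "serious time"
    ("confess", "confess"): 5,   # convicted, but early parole
    ("silent",  "silent"):  1,   # token firearms charge
}

def best_response(accomplice_move):
    """Return the move that minimizes my own sentence, given the other's move."""
    return min(("confess", "silent"),
               key=lambda move: SENTENCE[(move, accomplice_move)])

# Confessing dominates: it is the best response to either choice...
assert best_response("silent") == "confess"
assert best_response("confess") == "confess"
# ...yet when both follow this reasoning, each serves 5 years instead
# of the 1 year each would have served had both stayed silent.
assert SENTENCE[("confess", "confess")] > SENTENCE[("silent", "silent")]
```

Because each prisoner's calculation is independent of the other's actual choice, the dominance argument goes through without either knowing what the other will do, which is exactly what makes the separate cells irrelevant to the outcome.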
Tucker's paradox was based on puzzles with a similar structure that had been devised by Merrill Flood and Melvin Dresher as part of the Rand Corporation's investigations into game theory (which Rand pursued because of possible applications to global nuclear strategy). Flood and Dresher hadn't published much about their work, but the prisoner's dilemma attracted an enormous amount of attention in subjects as diverse as philosophy, biology, sociology, political science, and economics, as well as game theory itself. A common view is that the puzzle illustrates a conflict between individual and group rationality. A group whose members pursue rational self-interest may all end up worse off than a group whose members act contrary to rational self-interest. More generally, if the payoffs aren't assumed to represent self-interest, a group whose members rationally pursue any goals may all meet with less success than if they hadn't rationally pursued their goals individually.