Game theorists are remarkably bad at game theory. This isn't merely small-world fallacies; this is thinking they're done when they're just not done.

So, regarding the iterated prisoner's dilemma (page 47 in my PDF reader):

"The "paradox" is now the following: as soon as the number of rounds N becomes known, the above reasoning completely collapses! For clearly, neither player can have anything to lose by defecting in the very last round, when the other player no longer has any chance to retaliate. So if both players are rational, then that is exactly what they will do. But if both players know that both of them will defect in round N, then neither one has anything to lose by defecting in round N-1 either."
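The unraveling described in the quote is easy to sketch numerically. A minimal sketch, using the standard prisoner's dilemma payoffs (T=5, R=3, P=1, S=0 — my numbers, not the book's): once the rounds after the current one are fixed, the current round is effectively one-shot, and defection strictly dominates; that comparison then folds back from round N to round 1.

```python
# Backward induction over a finitely repeated prisoner's dilemma.
# Assumed standard row-player payoffs: T=5 > R=3 > P=1 > S=0.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def backward_induction(n_rounds):
    """Classical argument: solve the last round first, then fold backward."""
    plan = []
    for _ in range(n_rounds, 0, -1):
        # Later rounds are already settled, so this round is one-shot:
        # defect dominates against either move (T > R and P > S).
        defect_dominates = (PAYOFF[("D", "C")] > PAYOFF[("C", "C")]
                            and PAYOFF[("D", "D")] > PAYOFF[("C", "D")])
        plan.append("D" if defect_dominates else "C")
    return plan[::-1]

print(backward_induction(5))  # every round unravels to defection
```

This is the standard textbook conclusion; the rest of the post argues against it.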

"In 1985, Neyman [99] proposed an ingenious resolution of this paradox. Specifically, he showed that if the two players have sufficiently small memories..."

No. Well, yes, that is true, and humans clearly have finite memories.

The defection strategy is irrational even if the players have infinite memory and are perfectly rational.

If Player One defects on the last step, there is indeed no reason not to defect on every step. Defecting on only the last step isn't an option.

**The available options are: defect on every step, or cooperate on every step.** Player One would prefer to cooperate on every step. Ergo, Player One concludes they can't defect on the last step. The only question is whether Player Two will conclude they need to defect on every step or cooperate on every step. By definition, Player Two would prefer to cooperate. By definition, Player Two knows that Player One knows that Player Two knows the only other stable strategy is defecting on every step.
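The preference claimed above is plain arithmetic under the usual payoffs. A minimal sketch, again assuming R=3 for mutual cooperation and P=1 for mutual defection (assumed values, not from the post):

```python
# Total payoff per player over N rounds for the two symmetric strategies
# the post treats as the only stable options.
R, P = 3, 1  # assumed: reward for mutual cooperation, punishment for mutual defection
N = 20

all_cooperate = R * N  # (C, C) every round
all_defect = P * N     # (D, D) every round
print(all_cooperate, all_defect)  # 60 vs 20: cooperating every step pays more
```

Whatever one thinks of the argument, the ranking between the two symmetric strategies isn't in dispute: mutual cooperation strictly beats mutual defection for any N ≥ 1.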

Player Two will cooperate on every step.

**Somebody please tell me I'm wrong, that some obscure game theorist has figured this out, not just me.**

"But in real life...!" Yes, perfectly rational, perfect-information agents don't exist in real life, I know. However, when adding the imperfections of real life, it is critical to perturb the true ideal, rather than a misunderstanding of the ideal.

Admittedly the feedbacks are a bit mind-bending. Perfect rationality lets you eliminate the feedbacks, though.

Playing a twenty-step game, One and Two get to step 19, having cooperated all the way. The past, now immutable, shouldn't affect their next choice, so it seems one can defect.

Zeroth problem (Yudkowsky is correct here), to get it out of the way: if Player One concludes they can defect, they know, since Player Two is just as good at game theory as One is, that Player Two will also defect. Again, deviation isn't a real option, only MAD, so they cooperate.

First problem: in the past, Player One knew that, if they cooperated for 19 steps, they would want to defect on the last one. Thus Player One knows that Player Two knows that if they cooperate for 19 steps, Player One will want to defect on the last one. If Player Two knows Player One will defect on step 20, Player Two will defect on every step, making it impossible to cooperate for the first 19 steps.
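The temptation, and why anticipating it collapses everything, is again arithmetic. A minimal sketch with assumed standard payoffs (T=5, R=3, P=1, S=0):

```python
# Payoffs over a 20-step game for the three lines of play discussed above.
T, R, P, S = 5, 3, 1, 0  # assumed standard prisoner's dilemma payoffs
N = 20

cooperate_throughout = R * N          # cooperate all 20 steps
sneak_defect_last = R * (N - 1) + T   # cooperate 19 steps, defect on step 20
mutual_defection = P * N              # what results once Two anticipates One

print(cooperate_throughout, sneak_defect_last, mutual_defection)  # 60 62 20
```

Sneaking a last-step defection would beat full cooperation (62 > 60) — but only if it went unanticipated. Since, by the post's argument, it cannot go unanticipated, the real choice is between 60 and 20.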

**In short, Player Two knows everything Player One is thinking, and vice versa.** In all cases, **betrayal is not an option**. Thus, they will always choose to cooperate.

Real-life relevance: higher-IQ people commit less crime (objective) and generally deviate less (less objective, but the same dynamic). This is at least partly because higher-IQ people more closely approximate perfectly rational agents. They, as Player One, know (or think, because they assume the other guy has the same IQ) that Player Two will realize it if they plan to deviate.

## 4 comments:

Brilliant insight.

I am player one. If player two was going to defect on the very last round, he would assume I was going to defect on the very last round, so ... would defect on the very first round.

So, I "irrationally" play cooperate on the first round. If he also "irrationally" plays cooperate on the first round, chances are he is going to "irrationally" cooperate on every round.

As long as we emphasize that the point is it's not irrational. I'm using perfectly boring, bog-standard rationality... and apparently game theorists, in general, cannot.

You have just described what happens when both players are running TDT and have common knowledge of this fact. This is not standardly assumed in game theory. Since most agents do not, in fact, run TDT, and we can't be sure which ones do, this is not on the face of it unreasonable.

I have spent zero time studying TDT, and therefore what I'm doing isn't TDT.

The core is bog-standard Aristotelian logic, at least 2400 years old. Any apparent differences are merely polish.

Alternatively TDT spends much more time pretending to be novel than it spends innovating.
