Hard Light Productions Forums
Off-Topic Discussion => General Discussion => Topic started by: The E on July 23, 2013, 07:33:04 am
-
A research team in Germany has tested the prisoner's dilemma on actual prisoners (http://au.businessinsider.com/prisoners-dilemma-in-real-life-2013-7).
For those who do not know, the Prisoner's Dilemma is a classic model in game theory.
Two members of a criminal gang are arrested and imprisoned. Each prisoner is in solitary confinement with no means of speaking to or exchanging messages with the other. The police admit they don't have enough evidence to convict the pair on the principal charge. They plan to sentence both to a year in prison on a lesser charge. Simultaneously, the police offer each prisoner a Faustian bargain. If he testifies against his partner, he will go free while the partner will get three years in prison on the main charge. Oh, yes, there is a catch ... If both prisoners testify against each other, both will be sentenced to two years in jail.
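For concreteness, the sentences described above can be tabulated as a payoff matrix. This is a minimal Python sketch (the names and structure are mine, not from the article) of why testifying dominates:

```python
# Payoff matrix from the setup above: (my years, partner's years) in prison,
# so a LOWER number is better. "C" = stay silent, "D" = testify.
SENTENCE = {
    ("C", "C"): (1, 1),  # both stay silent: one year each on the lesser charge
    ("C", "D"): (3, 0),  # I stay silent, partner testifies: three years for me
    ("D", "C"): (0, 3),  # I testify, partner stays silent: I go free
    ("D", "D"): (2, 2),  # both testify: two years each
}

def best_reply(partner_move):
    """Whichever move the partner makes, testifying shortens my sentence."""
    return min("CD", key=lambda my_move: SENTENCE[(my_move, partner_move)][0])

print(best_reply("C"), best_reply("D"))  # D D -- defection is dominant
```

Mutual defection is the unique Nash equilibrium here, even though mutual silence leaves both players better off; that gap is the whole point of the dilemma.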
Surprisingly, for the classic version of the game, prisoners were far more cooperative than expected.
Menusch Khadjavi and Andreas Lange put the famous game to the test for the first time ever, running both a group of prisoners in Lower Saxony's primary women's prison and a group of students through simultaneous and sequential versions of the game.
The payoffs obviously weren’t years off sentences, but euros for students, and the equivalent value in coffee or cigarettes for prisoners.
They expected, building on game theory and behavioural economics research showing that humans are more cooperative than the purely rational model economists traditionally use would predict, that there would be a fair amount of first-mover cooperation, even in the simultaneous version where there's no way to react to the other player's decision.
And even in the sequential game, where you get a higher payoff for betraying a cooperative first mover, a fair number will still reciprocate.
As for the difference between student and prisoner behaviour, you’d expect that a prison population might be more jaded and distrustful, and therefore more likely to defect.
The results went exactly the other way: in the simultaneous game, only 37% of students cooperated, while inmates cooperated 56% of the time.
On a pair basis, only 13% of student pairs managed to achieve the best mutual outcome and cooperate, whereas 30% of prisoner pairs did. In the sequential game, far more students (63%) cooperated, so their mutual cooperation rate jumped to 39%. For prisoners, it remained about the same.
What’s interesting is that the simultaneous game requires far more blind trust from both parties: you don’t get a chance to retaliate or make up for being betrayed later. Yet prisoners were still significantly more cooperative in that scenario.
Obviously the payoffs aren’t as serious as a year or three of your life, but the paper still demonstrates that prisoners aren’t necessarily as calculating, self-interested, and untrusting as you might expect. As behavioural economists have argued for years, as mathematically interesting as Nash equilibria might be, they don’t line up with real behaviour all that well.
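Incidentally, the pair rates quoted above are roughly what you would predict from independent individual choices: squaring each cooperation rate approximately reproduces the reported mutual-cooperation figure. A back-of-the-envelope check, using only the article's numbers:

```python
# If each player chooses independently, P(both cooperate) = p_cooperate ** 2.
# Rates below are the ones reported in the article above.
for label, rate, reported in [
    ("students, simultaneous", 0.37, 0.13),
    ("inmates, simultaneous", 0.56, 0.30),
    ("students, sequential", 0.63, 0.39),
]:
    print(f"{label}: predicted {rate ** 2:.2f}, reported {reported:.2f}")
```

The close match suggests the pair figures carry little information beyond the individual rates themselves.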
This has some interesting ramifications.
-
I hope these aren't students studying game theory, or their education has been sorely lacking if they haven't heard of superrationality (http://en.wikipedia.org/wiki/Superrationality).
(On the other hand, maybe they know their fellow students too well...)
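A minimal sketch of the superrational argument, for what it's worth: if you assume the other player reasons exactly as you do, only the symmetric outcomes are on the table, and the better of those is mutual cooperation (sentence lengths as in the dilemma above; lower is better):

```python
# Superrational players assume identical reasoning, so only the symmetric
# outcomes (C, C) and (D, D) can occur. Values are years in prison.
SYMMETRIC_SENTENCE = {("C", "C"): 1, ("D", "D"): 2}

# Pick the symmetric outcome with the shortest sentence, then play your half.
superrational_choice = min(SYMMETRIC_SENTENCE, key=SYMMETRIC_SENTENCE.get)[0]
print(superrational_choice)  # C -- a superrational player cooperates
```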
-
It's not necessarily true that in a sequential game you get a higher payoff by betraying the other player.
That only holds when the game is sequential AND you know when it is going to end; otherwise, backward induction doesn't work, and Tit for Tat remains the best known strategy.
I'll have to read the paper tomorrow to see how the experiments were played out.
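The Tit for Tat point can be sketched in a few lines. The per-round payoffs below are the textbook values (3/5/1/0, higher is better), an assumption on my part since the thread doesn't fix them:

```python
# Points gained per round (higher is better): the standard iterated-PD values.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated game; each strategy sees only the other's past moves."""
    history_a, history_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(history_b), strategy_b(history_a)
        gain_a, gain_b = PAYOFF[(a, b)]
        history_a.append(a)
        history_b.append(b)
        score_a += gain_a
        score_b += gain_b
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): mutual cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): one betrayal, then stalemate
```

With a known, fixed endpoint, backward induction unravels this (defect on the last round, hence on the second-to-last, and so on); with an unknown horizon, reciprocating strategies like Tit for Tat hold up well.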
-
Personally, I'm glad to see this research outcome. A lot of people have suspected this is indeed so, ever since their first mathematical logic classes.
It is somewhat surprising that this sort of thinking persisted as long as it did, and I'm glad to see it really doesn't work that way in the real world.
-
The title of the article is misleading when compared to the actual paper. In the paper, their predictions match their results, so it's not like this was an unexpected result but rather that the general public may be surprised by it. They refer to an earlier work also using inmates but with another game that yielded similar cooperative results.
Additionally, it seems they did a non-iterative version of the game, but I don't see any information about what the participants knew regarding this, i.e. they might have thought this was just the first stage of a larger game.
Then again, for all participants, and especially the inmates, the game doesn't actually end there, since they will have to interact with the other participants beyond the game, even if they may not know their actual identities; the authors (and pretty much everyone) seem to acknowledge these social considerations.
A funny moment in the paper comes when they try to correlate various factors and this tidbit comes up:
When controlling for all of our socio-demographic variables in column III of Table 4, interestingly, all demographic control variables remain insignificant with one exception: we find that coffee drinkers are more cooperative than those who do not drink coffee. We refrain from speculating about the underlying reasons.
-
People do not behave the way John Nash predicted. There was a famous study that concluded that the only two groups of people who behaved "rationally", as determined by Nash equilibria and so on, were psychopaths and... economists.
-
The conclusion I draw from this is that the "rational actor" model of human behaviour is severely overrated.
-
People do not behave the way John Nash predicted. There was a famous study that concluded that the only two groups of people who behaved "rationally", as determined by Nash equilibria and so on, were psychopaths and... economists.
The problem is not with the concept of Nash equilibria, it's with the payoffs.
-
The problem, as far as I recall, is how simplistic a model they made of humans. Humans are not as simple as "rational actors". And this has a ****ton of consequences for economic models, justice, education, etc., etc.
-
Humans ARE rational actors; the problem is that you don't know what payoffs they are trying to optimize.
-
That's what's called a deepity. Bloody obvious on one hand, bloody wrong in the most important sense. It's obvious because, if we are indeed machines built out of atoms and so on, well then, duh, we are the equivalent of "calculating machines", so we do "calculate" outcomes, payoffs, etc. OTOH, in the most important sense, it's just ridiculously wrong. We are not imbued (or designed) with teleological motives inside our "calculators"; we are accidents of genetic and cultural evolution, with zillions of tiny operators and agents directing us according to their own specific agendas and algorithms.
There's no simple "rationale" in this system, only a chaotic system that has been stabilized by the forces of natural selection.
So no, we aren't rational actors. At least not in the sense that we are discussing here (politically, economically, etc.).
-
I'm sorry; here I was thinking that, in a thread about a paper on game theory, we were using the word "rational" in the context of game theory, i.e. actors who act by maximizing their payoffs.
Silly me.
-
Yeah, that is the meaning of the word we are using here, and no, we are still not "rational agents" in this sense.
-
I'm not sure you really understand the concept of rationality then.
Being a rational agent doesn't imply making a perfect choice.
-
Where did I express the idea that it does? If you didn't understand my response up there say so bluntly instead of resorting to snide remarks.
-
Humans attempt to maximize their payoffs given the information available to them.
Giving them a game (or anything else) to solve doesn't mean they will make the game-theoretically rational choice, because that's a sub-game within their larger game (a.k.a. life). For simplicity's sake we assume that these sub-games are independent of the other sub-games in their lives, but that is not necessarily true. And people's payoffs are really hard to grasp, sometimes even for themselves.
The fact that you throw around "Oh, we are just biological machines" or "Oh, our motives are not really ours, but an accident" is immaterial.
Finally, I don't disagree with you because I don't understand you, I disagree with you because I do.
-
Congrats for understanding me; now could you please give other people the generosity of considering that they aren't as dumb as you make them out to be? Thanks, just wanted to get that out of the way.
Now, of course I *do* understand what you are saying. It's a belief, with some hints here and there that it may be true in some regards. I think it's mostly a myth from the second half of the twentieth century, when game theory was the **** (and paranoid guys like John Nash got their fame), because it seemed to give sufficiently simple insights into the Cold War (the RAND Corporation, etc.) and into market economies (the neo-liberal utopias of the 80s). It was of course then found out that the studies had errors or were too simplistic. The belief, however, was that while the theory was simplistic, the core ideas behind it were really great, and that given enough time and research we would be able to pin down human behaviour.
This whole framework is, however, extremely flawed from my POV. It assumes that people will try to achieve "goals" that are measurable, that they will always "rationalize" strategies in order to get what they want, that emotions can be discarded as "distractions" or purely Machiavellian mechanisms for manipulating others, etc. IOW, for game theory to be relevant to humans, you basically have to redefine humans as psychopathic, paranoid jerks whose signs of altruism are always (at best) just remnants of "enlightened egotism" at heart. This is not how I perceive humans to be, so I consider the idea flawed and silly.
However, I regard these experiments as very important, for they teach us exactly how we deviate from this simplistic model of humans.
-
This puts me in mind of a couple of disputes I've seen over whether or not energy is conserved in general relativity (bear with me!). The answer is basically the same in both cases: yes, by appropriately extending the scope of 'energy' being considered you can make any system of physics energy-conservative; and similarly, by appropriately extending the scope of a rational actor you can use it to explain any behaviour you like. But both statements are, as Luis said, irrelevant to the actual conversation at hand.