October 07, 2006

Goat Rope is pleased once again to feature the erudite comments of Dr. Molly Ringworm, a Weimaraner dog and Visiting Professor of Literary Theory at the Goat Rope Farm School of Cultural Studies.

Be sure to check out Dr. Ringworm's previous commentaries of the past four weekends in the Goat Rope archive. Topics have included post-modernism, deconstructionism, semiotics, and structuralism/post-structuralism.

This series has been part of Goat Rope's ongoing efforts to raise the level of popular discourse and highlight the cultural and intellectual trends of our times. Dr. Ringworm's current topic is a summary of the work of the late French philosopher Michel Foucault.

A DOG EXPLAINS FOUCAULT

OK, well I think this may be my last comment for a while cause I have to go home soon. I'm going to kind of miss this since I don't get to talk about this stuff much at home. I live with a Labrador, see?

I mean like they're cute and all and don't get cold and love to swim and all that but boy they are dumb as a sledtrack. Have you ever tried to talk about weird dead French guys with a Lab? It doesn't work.

Speaking of dead weird French guys, OK there was this guy Foucault. He wrote a lot of weird stuff about knowledge, power, madness and discipline. I mean he was really into discipline. I mean...well, nevermind.

So like he wrote some stuff about knowledge, words, and things and it was like people sometimes think one way and organize everything that way and then it all just changes for no special reason.

He was also into the whole power/knowledge thing, which means that somebody has to have power to be able to label other people and call it knowledge. Like if someone says "Good dog" or "Bad dog" they usually can yell at you or take away your squeaky toys or make you go in your crate.

I don't like my crate much. One time I even peed in it, which is bad for a dog.

Then like he wrote this whole book about madness and who got to decide who was crazy. I kind of think all you guys are pretty crazy.

In his crazy book, he said like in the Middle Ages people were all freaked out about leprosy then they just got over that and started worrying about who was crazy. At first crazy people were OK cause maybe they were like touched by God or something (did you ever think that God is dog spelled backward?) but after a while they got nasty with crazy people and put them in bad places away from everybody else and were mean to them. Then after that, they stopped being real mean to them but tried to make them act normal.

Foucault didn't like that either. Like you just can't be nice to some people.

Then he did this whole book about punishment where like in the old days your kings used to just cut people to pieces and tear out their livers and stuff like that. Then in the modern time that didn't happen so much but instead everybody got spied on all the time by everybody else.

He said it was like a panopticon, which is like a thingie where they can watch you all the time but you can't watch them. And he thought the whole modern world was kind of like a big prison. It's like everybody has to be on a leash or in their crate all the time.

October 06, 2006

Caption: The lessons of game theory can help one avoid some sticky situations.

This is the final post on some useful aspects of game theory in trying to promote cooperation and achieve positive goals. If this is your first visit, please scroll down to the four earlier posts.

The basic ideas of reciprocity or TIT FOR TAT as a good strategy in Robert Axelrod's The Evolution of Cooperation are pretty simple and pretty ancient.

According to the Analects, when Confucius (551-479 BC) was asked, "Is there any single word that could guide one's entire life?" the sage replied, "Should it not be reciprocity? What you do not wish for yourself, do not do to others."

According to psychologist Jonathan Haidt, author of The Happiness Hypothesis: Finding Modern Truth in Ancient Wisdom, "Reciprocity is a deep instinct; it is the basic currency of social life."

While the word "instinct" is a bit strong for El Cabrero, Haidt argues that we are biologically predisposed to it:

...what is really built into the person is a strategy: Play tit for tat. Do to others what they do unto you. Specifically, the tit-for-tat strategy is to be nice on the first round of interaction; but after that, do to your partner whatever your partner did to you on the previous round. Tit for tat takes us way beyond kin altruism. It opens the possibility of forming cooperative relationships with strangers.
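The rule Haidt describes is simple enough to fit in a few lines of code. Here is a minimal sketch; the function name and the "C"/"D" move labels are illustrative choices, not anything from Axelrod or Haidt:

```python
def tit_for_tat(my_history, partner_history):
    """Return this round's move ('C' = cooperate, 'D' = defect)."""
    if not partner_history:       # first round: be nice
        return "C"
    return partner_history[-1]    # afterward: echo the partner's last move
```

That's the whole strategy: nice on round one, then a mirror.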

But however deep the roots of reciprocity are in history and biology, and however simple the TIT FOR TAT strategy is, people are pretty good at screwing it up or not getting it to start with. Here are some ways:

*ALL D ALL THE TIME. The folks in the Bush administration have to be among the all-time losers in the Prisoner's Dilemma tournament. Their strategy toward anyone outside the ruling clique seems to be ALL D, which is to defect on everyone else all the time (even if they try to cooperate). This is one reason US foreign policy is such a, yes, goat rope.

In the wake of the Sept. 11 terrorist attacks, many countries, including those such as Iran which had troubled relations with the US, made overtures of cooperation. Rather than reciprocating cooperation, the administration responded with imperial arrogance, insults, bullying, an unnecessary war opposed by most of the world, backpedaling on international agreements, etc. They have pursued similarly polarizing strategies domestically, even with leaders of the U.S. military.

(Uhhh...when you defect all the time to everybody, you shouldn't be too surprised if they start doing the same to you.)

And while TIT FOR TAT calls for responding to provocations, the administration fixated on invading a nation which had nothing to do with Sept. 11 while bungling efforts to deal with the actual sources of terrorist threats (which requires international cooperation).

However, there are other ways to screw up reciprocity besides ALL D.

*EXCESSIVE PAYBACK OR NONE AT ALL. Psychologist Roy F. Baumeister noted that a "magnitude gap" exists between the perceptions of victims and perpetrators. Typically, what for perps is no big deal is a huge deal to people on the receiving end. This means that when victims respond to the perp's actions, revenge is often out of proportion to the original offense, making violence not just cycle but spiral. Even if retaliations are measured, in the Prisoner's Dilemma scenario reciprocity can lead to an endless chain of defections.

But letting people get by with nasty behavior doesn't do anyone any favors either. Game theory shows that it is necessary to respond to defection (or aggression) to discourage bad behavior, but it doesn't say just how to do so in a given situation. That takes skill, wisdom, and moderation.

The ancient Chinese sage Lao Tzu pointed in the right direction when he suggested that one should achieve results without excessive force, using only as much energy as necessary and doing the least possible harm. That's easier said than done, but it's probably more practical in the end than unconditional cooperation in the face of unprovoked defection or excessive retaliation for it.

*ACTING LIKE IT'S ENDGAME WHEN IT ISN'T. We'd all probably be better off if we acted as if we were playing a long game with the people and groups in our lives. The incentives for sticking it to someone drop dramatically if you have to deal with the same person over and over and they can choose to retaliate. That long term perspective helps people to "fight fair" when they disagree.

(That is especially necessary in El Cabrero's beloved state of West Virginia, where people know each other and don't forget a trick.)

Conversely, short term, tunnel-vision thinking is a well-trod road to disaster in just about every aspect of human endeavor.

That's about it from Goat Rope Farm. Y'all play nice. El Cabrero will do the same. At least at first.

October 05, 2006

This is the fourth post in a series about practical lessons from game theory as discussed by Robert Axelrod in The Evolution of Cooperation.

(Short version: there are some.)

If this is your first visit to Goat Rope, please scroll down to the previous entries.

To recap, Part 1 introduced the topic of game theory and asked the following question: "Under what conditions will cooperation emerge in a world of egotists without central authority?"

Part 2 introduced the Prisoner's Dilemma, a scenario used to study how people choose to cooperate or defect. Like much of real life and unlike a chess game, the Prisoner's Dilemma doesn't have to be a win/lose proposition. Both parties could benefit or both could lose.

Part 3 was about TIT FOR TAT, the most successful strategy for eliciting cooperation when players must deal with each other again and again. TIT FOR TAT is based on simple reciprocity. It starts out nice, responds when provoked, and resumes cooperation when the other player does.

Today's post is about how cooperation can be promoted. Admittedly, cooperation itself is morally neutral since people can cooperate to do all kinds of nasty things. Still, it's hard to get anything positive done without it.

Probably more than anything else, repeated interaction with others is the best climate in which cooperation can grow. If people think they will never meet again, they have less incentive to cooperate and will be more likely to defect.

This is also true when the game is about to end. To use some common examples, a lame duck politician will have trouble setting the agenda (this is not always a bad thing); a gang leader or dictator may command less obedience if he or she is seen as losing power, becoming terminally ill, etc.; a business known to be about to fold will have trouble collecting bills.

As Axelrod notes, "The foundation of cooperation is not really trust, but the durability of the relationship." For example, during the Cold War, both sides flirted with brinkmanship but neither went over the edge since they knew they were playing a long game.

Destructive behavior is often associated with short-term thinking. Prudence should dictate that one should never discount the possibility that one may meet the other party again...and that they will remember.

Here are some ways to promote cooperation:

*Emphasize the future. Axelrod advises those who want to promote cooperation to "enlarge the shadow of the future" by promoting more frequent and durable interactions.

*"Decompose" complex negotiations. (Game theory is all about cool jargon.) Decomposition in this context just means breaking down complex negotiations into many small steps or stages so that reciprocity has more time to develop.

*Change the payoffs. If there is less incentive for people to defect, they may do it less. This is one of the main functions of government, policies, rules, coalitions, and even informal social norms. Having state police patrol the highways reduces the payoff for those who defect by driving recklessly.

(If you think about it, one of the social functions of religion is to convince people that the game is long and that the payoffs for defection are not that great in the end.)

*Teach people to care about others. As Axelrod puts it,

In game theory terms, this means that parents try to shape the values of children so that the preferences of the new citizens will incorporate not only their own individual welfare, but the welfare of others. Without a doubt, a society of such caring people will have an easier time attaining cooperation among its members, even when caught in an iterated Prisoner's Dilemma.

*Teach reciprocity. While this means cooperating when the other party does, it also means responding when provoked. Unconditional cooperation or letting the other player get away with nasty behavior "can not only hurt you, but it can hurt other innocent bystanders with whom the successful exploiters will interact later."

By contrast, responding when provoked "actually helps not only oneself, but others as well. It helps by making it hard for exploiting strategies to survive." Communities that practice reciprocity are in effect self-policing since anti-social behaviors are not rewarded and are thus less attractive.

(The trick, of course, is responding in a measured way that doesn't trigger endless defections or widen the spiral of conflict.)

*Improve recognition abilities. This means not only recognizing cooperation and defection when they happen, but also recognizing others one has interacted with in the past so that prior actions can be considered.
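The "change the payoffs" idea above can be made concrete with a toy calculation. With the standard Prisoner's Dilemma values (T=5, R=3, P=1, S=0), defecting against a cooperator is tempting because T > R; a penalty for defection (think of the state police example) can flip that. The 3-point "fine" here is a hypothetical number chosen for illustration, not anything from Axelrod:

```python
# Illustrative one-round payoffs, before and after a penalty for defecting.
T, R, P, S = 5, 3, 1, 0   # temptation, reward, punishment, sucker's payoff
FINE = 3                  # hypothetical penalty applied to a defector

def payoff(me, other, fine=0):
    """My one-round payoff; moves are 'C' (cooperate) or 'D' (defect)."""
    table = {("C", "C"): R, ("D", "D"): P, ("D", "C"): T, ("C", "D"): S}
    score = table[(me, other)]
    return score - fine if me == "D" else score
```

Without the fine, defecting against a cooperator pays 5 versus 3 for cooperating; with it, defection nets only 2, and cooperation becomes the better move.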

October 04, 2006

This is the third post in a series about how cooperation can develop in a world motivated largely by self-interest. The applications from game theory can be very useful in getting things done in all kinds of settings.

If this is your first visit, please scroll down to the two earlier entries. The first introduced the topic and the second explained the Prisoner's Dilemma, a scenario often used to study situations where two parties can choose to cooperate or defect.

The best strategy here depends on whether this is a one-time game (or a game with a fixed number of rounds) or whether players will meet an indefinite number of times. In a one-time meeting, the temptation to defect is so high and the punishment for being a sucker is so severe that there is no incentive to cooperate. The least risky strategy is to defect.

(Translation: one-time interactions don't bring out the best in people. This is one more reason why it might not be a good idea to buy a new computer or a magic whistle from a total stranger who just appeared on the sidewalk.)

However, the incentives change if there are an indefinite number of interactions, as is often the case in real life. This situation is called the iterated Prisoner's Dilemma. Despite the risks, both parties can gain more by cooperating than defecting.
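The shift in incentives is easy to see with back-of-the-envelope arithmetic, using the standard payoff values (T=5, R=3, P=1, S=0). The 10-round horizon below is an arbitrary illustration:

```python
# Why repetition changes things: compare cumulative scores over many rounds.
T, R, P, S = 5, 3, 1, 0   # standard Prisoner's Dilemma payoffs

rounds = 10
mutual_cooperation = R * rounds   # both cooperate every round: 30 points each
mutual_defection = P * rounds     # both defect every round: 10 points each
one_shot_sucker = S               # cooperate once against a defector: 0 points
```

In a single round the defector's 5 beats the cooperator's 3, but over ten rounds two steady cooperators (30 each) leave two steady defectors (10 each) far behind.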

In The Evolution of Cooperation, Robert Axelrod describes computer tournaments held to test strategies in an iterated Prisoner's Dilemma scenario. Surprisingly, the best overall performer was also the shortest and simplest, TIT FOR TAT, which starts out by cooperating and then does whatever the other player did on the previous move.

TIT FOR TAT has the following advantages: it is nice, provocable, forgiving, and clear. Specifically,

*It's nice because it is never the first to defect (strategies that defect first are mean). Two players both using TIT FOR TAT can both do very well. In fact, "TIT FOR TAT succeeds without doing better than anyone with whom it interacts. It succeeds by eliciting cooperation from others, not by defeating them."

*It's provocable because it retaliates when the other party defects. This makes the consequences of defection unpleasant and encourages cooperation.

*It's forgiving because it resumes cooperation as soon as the other party does.

*It's clear because the other player doesn't have to be a rocket scientist to figure it out. In fact, a player using TIT FOR TAT has little to lose by proclaiming the strategy in advance.

Computer modeling shows that communities using TIT FOR TAT can withstand invasion by mean strategies and that a small number of cooperators entering a larger population of meanies can still do pretty well. There are also examples from nature and history of TIT FOR TAT-like strategies achieving cooperation without friendship or understanding.

October 03, 2006

This is the second post in a series about how one aspect of game theory can shed very practical light on many kinds of human (yes, and animal) interactions.

The series is inspired by the insights of Robert Axelrod's The Evolution of Cooperation.

One scenario often used to study situations in which one party doesn't know how another will react is the Prisoner's Dilemma. Axelrod describes it thus:

The story is that of two accomplices to a crime who are arrested and questioned separately. Either can defect against the other by confessing and hoping for a lighter sentence. But if both confess, their confessions are not as valuable. On the other hand, if both cooperate with each other by refusing to confess, the district attorney can only convict them on a minor charge. Assuming that neither player has moral qualms about, or fear of, squealing, the payoffs can form a Prisoner's Dilemma.

This game is interesting because it is a common situation in real life in this respect: both players can mutually benefit each other or one can take advantage of the other or neither can cooperate.

It sort of corresponds to win/win, win/lose, or lose/lose.

This game is different from chess or, say, jiu jitsu, where there's nearly always a winner and a loser. In those games, you can always expect your opponent to try to do the most harm possible, either by putting you in checkmate or doing unspeakable things to your arms, legs or carotid arteries.

With the Prisoner's Dilemma, the other player might be nasty or nice and you have no idea which one it will be.

Here's the scoring system for the Prisoner's Dilemma:

*If both parties cooperate (win/win), they get a reward of 3 points (R).

*If both parties defect (lose/lose), they get 1 point as a punishment for defection (P).

*If one party defects and the other cooperates, the first gets 5 points--this is the temptation to defect (T). The other party who cooperated and got stung gets the sucker's payoff of 0 (S).

Payoffs in order are thus: T=5, R=3, P=1, S=0.

This means the highest reward (T) goes to someone who takes advantage of someone else and the lowest (S) goes to the one who is exploited. But both can do pretty well by cooperating (R) and less well by defecting (P).
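The scoring system above can be written as a quick sanity check. One hedge worth noting: the standard definition of the game (including Axelrod's) also requires R > (T + S)/2, so that two players can't beat steady cooperation by taking turns exploiting each other; with these numbers, 3 > 2.5, so the condition holds.

```python
T, R, P, S = 5, 3, 1, 0   # temptation, reward, punishment, sucker's payoff

assert T > R > P > S       # the ordering that creates the dilemma
assert R > (T + S) / 2     # steady cooperation beats alternating exploitation

def round_payoff(me, other):
    """Payoff to 'me' for one round; moves are 'C' or 'D'."""
    return {("C", "C"): R, ("D", "D"): P,
            ("D", "C"): T, ("C", "D"): S}[(me, other)]
```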

So what strategy works best? That is the subject of the next post.

But here's a hint: it depends on how many times you play the game with the same person.

October 02, 2006

The cat and the two dogs have developed a system of mutual cooperation based on laziness. This could be the key to world peace.

El Cabrero has recently become addicted to the HBO series "Deadwood," which, among other things, is about how cooperation can develop in a basically lawless situation.

This is also the subject of one of the best and most practical social science books I've ever read: The Evolution of Cooperation by Robert Axelrod, which came out in 1984.

Axelrod's study of one aspect of game theory has enormous practical applications in international relations, war and peace, domestic politics, community organizing, and many other aspects of public and private life.

If you have to deal with other people (or with goats), it's worth a little of your time.

(It's also one of many books that the folks at the Bush administration never bothered to read.)

If you are like El Cabrero, mention of game theory conjures up Cold War images of mad scientists at the RAND Corporation playing around with nuclear war scenarios, but this work (and the next few Goat Rope posts) focuses on one basic question:

Under what conditions will cooperation emerge in a world of egotists without central authority?

More concretely, when should a person or group cooperate with others who may or may not reciprocate? How can you promote cooperation for mutual benefit? When should one defect or do the opposite? Should a person or group continue to cooperate when the other party doesn't reciprocate? How should one respond when the other party defects without creating an endless spiral of retaliation?

On the one hand, developing cooperation in a world of potentially dangerous egotists is a pretty tall order. On the other hand, it happens all the time in human and natural life.

Species as simple as bacteria develop cooperative relations with other organisms under certain conditions, presumably in the absence of consciousness and mutual understandings.

Countries, criminals, coalition allies and even opposing armies have been known to develop reciprocal cooperative relationships with rivals under very adverse conditions.

One very useful tool social scientists have developed to study situations like this is called the Prisoner's Dilemma, which will be the subject of the next Goat Rope post. Trust me, it's worth it.