Where Alaskans and Artificial Life Agree
Some of my friends at the Geophysical Institute hold unquestioning views about the natural superiority of physics among the sciences. I've even overheard a couple of them agreeing that "social science" is an oxymoron--one of those self-contained contradictions like "jumbo shrimp."
But the social and behavioral sciences are becoming more heavily mathematical, surely something of which physicists should approve. And behavioral scientists are making greater use of the capabilities of computers. In fact, some of their study subjects now exist nowhere else but in machines.
Furthermore, if they have long enough, those computer-dwelling subjects start behaving like Alaskans--some Alaskans, anyhow.
Appropriately enough, the study begins with a game. In the original form of Prisoner's Dilemma, pairs of players are to envision themselves as criminals nabbed by the police. They are jailed separately, and each is offered a reward for turning informer. If only one of them informs, the squealer goes free, while the squealed-upon partner gets a long sentence. If neither squeals, they both get a short sentence. If both squeal, each gets a fairly long sentence--worse than staying silent, but better than being the lone sucker.
Social scientists used variants of the game, with point rewards instead of punishments, as a way to study the evolution of behavior by identifying what actions give the best chance of survival--the greatest rewards--over time. What are the advantages of cooperating? Of defecting and thus betraying one's partner? What strategies offer the highest return? Accurate answers came only after so many repetitions that human players grew nearly berserk with boredom. But computers don't get bored, and the game could be repeated tirelessly by a simple program.
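The point-scoring version can be sketched in a few lines of Python. The specific payoff values here are the conventional tournament numbers, an assumption on my part--the column gives none:

```python
# A minimal sketch of the point-scoring Prisoner's Dilemma.
# The values 3, 0, 5, 1 are the conventional tournament payoffs,
# an assumption -- the column doesn't state specific numbers.

PAYOFF = {  # (my move, partner's move) -> my points
    ("cooperate", "cooperate"): 3,   # both cooperate: a decent reward for each
    ("cooperate", "defect"):    0,   # I'm betrayed: the "long sentence"
    ("defect",    "cooperate"): 5,   # I betray: the squealer's big reward
    ("defect",    "defect"):    1,   # mutual betrayal: poor outcome for both
}

def play_rounds(strategy_a, strategy_b, rounds):
    """Repeat the game many times -- the part computers never tire of --
    and return each player's accumulated points.  Each strategy is a
    function that sees the other player's past moves."""
    score_a = score_b = 0
    history_a, history_b = [], []
    for _ in range(rounds):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b
```

Two unwavering cooperators each collect three points a round; a defector playing a cooperator collects five a round while the cooperator gets nothing.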
Now the game in the machine allowed player-programs to evolve over generation upon generation. The players with the greatest number of points after one round got to duplicate themselves so there were more of them during the following round. As evolutionary schemes go, it wasn't very complicated, but it did offer some useful insights.
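The duplication scheme might look something like this. Copying strategies in proportion to the points they earned is my reading, since the column says only that top scorers got to duplicate themselves:

```python
# A sketch of the replication step: after a round of games, strategies
# copy themselves into the next generation in proportion to the points
# they earned.  Proportional copying is an assumption; the column says
# only that the highest scorers "got to duplicate themselves."

from collections import Counter

def next_generation(scores, population_size):
    """scores: dict mapping strategy name -> total points this round.
    Return a Counter giving each strategy's head count next round."""
    total = sum(scores.values())
    counts = Counter()
    for name, points in scores.items():
        # round() may not exactly preserve population_size; good
        # enough for a sketch of the evolutionary pressure involved
        counts[name] = round(population_size * points / total)
    return counts
```

A strategy that earns 60 of the 100 points in a round takes six of the ten slots in the next one.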
By 1980, University of Michigan political scientist Robert Axelrod had identified one highly successful approach in the game: Partners cooperate until someone defects, then retaliate by defecting back. This tit-for-tat strategy wins eventually, as long as life is simple.
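Tit-for-tat itself is almost trivially short: extend trust on the first move, then echo whatever the partner did last.

```python
# A sketch of the tit-for-tat strategy Axelrod identified: cooperate
# first, then mirror the partner's previous move.

def tit_for_tat(partner_history):
    """Return this round's move given the partner's past moves."""
    if not partner_history:       # first round: extend trust
        return "cooperate"
    return partner_history[-1]    # thereafter: echo the partner's last move
```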
However, life isn't simple, so scientists kept adding complexities to the programs to make the game more realistic. In 1993, University of Iowa mathematician Ann Stanley showed that permitting players to refuse to play with certain partners changed the odds to favor cooperation.
On the face of it, that sounds like something most human youngsters learn on their first day in the playground. If you don't have to play with cheaters, you can play more nicely.
At the University of California, San Diego, philosopher Philip Kitcher and computer scientist John Batali found that if the initial randomly set conditions had too many players defecting, cooperation would never arise in the course of the game's evolution.
But Batali and Kitcher then added another lifelike feature to the game. They permitted players to opt out, which would win them more points against a defector than would defecting back. "If you live in a nasty world, go live in the woods," is how Batali describes this choice.
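One way to sketch the opt-out move is to give the loner a flat payoff that beats mutual defection but falls short of mutual cooperation. The specific numbers here are illustrative assumptions, not values from the study:

```python
# A sketch of the opt-out move Batali and Kitcher added.  The column
# says opting out earns more against a defector than defecting back
# would, so here the loner's flat 2 points beat mutual defection's 1.
# Treating an opt-out as canceling the game for both players, and the
# specific point values, are my assumptions.

LONER_PAYOFF = 2  # "go live in the woods": a safe, modest score

def score(my_move, partner_move):
    """Return my points for one round, with 'opt_out' as a legal move."""
    if my_move == "opt_out" or partner_move == "opt_out":
        return LONER_PAYOFF          # no game is played; take the loner's share
    payoffs = {
        ("cooperate", "cooperate"): 3,
        ("cooperate", "defect"):    0,
        ("defect",    "cooperate"): 5,
        ("defect",    "defect"):    1,
    }
    return payoffs[(my_move, partner_move)]
```

Against a defector, opting out (2 points) now beats defecting back (1 point), which is exactly the incentive the quote describes.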
And, to me, that sounds like the choice of a lot of Alaskans.
According to the San Diego scientists, after about 100 generations the playing field is completely dominated by players who choose to opt out. The noncooperators are not so much beaten as starved out, because no one will play with them. Finally, cooperation comes back--which leaves room for a few defectors to begin winning, and a new cycle to start.
It may sound like something from Walden Pond rather than from a set of equations, but it seems Alaskanly appropriate: evolution in Prisoner's Dilemma suggests that in order to cooperate comfortably, we have to be able to get away from one another if we choose. Even physicists can live with that.