Friday, September 2, 2011

Nature of Human Altruism


Some thoughts I jotted down as I was reading a fascinating evolutionary psychology and behavioral economics article on human altruism: Fehr and Fischbacher, 2003. The nature of human altruism. Nature 425, 785–791.
In the abstract, the authors say that “current gene-based evolutionary theories cannot explain important patterns of human altruism,” and that gene-culture co-evolution will be invoked as an alternative explanation.
I am excited to see the explanation they deliver. I tend to be skeptical of this sort of thing because, well, evolution operates at the level of the gene. It is simply the nature of the process. In a given environment, a gene that has properties that lead to increased copies of itself will proliferate, and a gene that has properties that cause it to become less abundant in the gene pool will decline. That is the primary level on which evolution operates; anything else is secondary. That is not to say that other levels are ineffectual, simply that they are emergent from, and constrained by, that primary level of evolution.
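To make that gene-level logic concrete, here is a minimal sketch (the fitness values and the haploid, infinite-population assumptions are mine, purely for illustration): if one allele leaves even slightly more copies per generation than the alternative, its frequency ratchets up toward fixation.

```python
# Minimal sketch: frequency of an allele under constant selection.
# The fitness values (1.02 vs 1.00) are made up for illustration.

def allele_frequency_over_time(p0, w_a, w_b, generations):
    """Track the frequency of allele A when A and B have relative
    fitnesses w_a and w_b (haploid, infinite-population model)."""
    p = p0
    history = [p]
    for _ in range(generations):
        mean_fitness = p * w_a + (1 - p) * w_b
        p = p * w_a / mean_fitness   # A's share of the next generation
        history.append(p)
    return history

# A 2% fitness edge takes a rare allele most of the way to fixation.
freqs = allele_frequency_over_time(p0=0.01, w_a=1.02, w_b=1.00, generations=500)
print(f"start: {freqs[0]:.3f}, after 500 generations: {freqs[-1]:.3f}")
```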
An analogy: suppose that instead of looking to evolution to explain how altruism came to be, you wish to look at the movement of matter to explain how a bunch of sycamore branches and leaves got to be hundreds of feet up in the air. Gravity is the primary governor of how matter moves. Matter will always move toward other massive objects, so if you observe a case in which objects are not moving toward a massive object (like our sycamore leaves), you must search for a more powerful countering force that still operates within the confines of gravity. In this case, explaining the phenomenon that appears to have bested gravity requires invoking the capillary action of xylem cells and water potential gradients from the soil to the leaves. Gravity is still operating on those leaves, but for the moment, the other forces are stronger, so the leaves remain in the air.
Cultural evolution is not impossible, and if we observe behavior that cannot be explained at the level of the gene (just as we couldn’t explain how the sycamore was up in the sky by looking to gravity alone), we should seek levels of evolution that emerge out of the gene level. However, we must always keep in mind that, just as gravity is always acting on matter, gene-based evolution is always acting on our bodies, behaviors, and societies.
Early on, the authors say that there exists evidence for “strong reciprocity” among humans, which is the rewarding of “cooperative, norm-abiding behaviors” and punishing of norm-violating behavior, even when the rewarder/punisher gains no benefit whatsoever from rewarding/punishing. The problem with this is that rewarding/punishing must impose some cost on the rewarder/punisher, and if they truly gain nothing, then the genes that encode that behavior decrease the fitness of any individual in whom they occur. Thus, if there is a group of people, all of whom have the rewarder/punisher genes, a mutation that inactivates those genes would produce an individual that is fitter than the rest of the group*, which would cause the inactivated variant to increase in frequency, thus eliminating strong reciprocity. I don’t mean to state categorically that strong reciprocity is impossible, and indeed if it is observed, we ought to search for a mechanism for its evolutionary stability. However, I hope I have illustrated how difficult it is for such a trait to become evolutionarily stable and thus the challenge that any evolutionary biologist faces in trying to explain its occurrence.
* Upon rereading this, it occurs to me that this is not necessarily true. If punishments were also doled out to those who don’t altruistically punish (or altruistic rewards to those who do), then an individual who received the non-rewarder/punisher gene would be punished by their group and could thus be made less fit.
** And lo and behold, this is precisely the conclusion the article reaches, nicely summarized in the following figure. The red line is the possibility that only occurred to me upon rereading what I had originally written.
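A toy payoff calculation may make the two footnotes concrete (all parameters are invented for illustration, not taken from the article): without second-order punishment, a non-punisher free-rides on the norm enforcement and out-earns the punishers; if non-punishers are themselves fined, the ordering can flip.

```python
# Toy payoff sketch for the footnotes above. All parameters are invented.
# Everyone benefits from the norm being enforced; punishers pay the
# enforcement cost; optionally, non-punishers are punished too.

def average_payoffs(frac_punishers, norm_benefit=10.0, punish_cost=1.0,
                    second_order_fine=3.0, second_order=False):
    """Average payoff to a punisher vs. a non-punisher in a large group."""
    punisher = norm_benefit - punish_cost
    non_punisher = norm_benefit
    if second_order:
        # Non-punishers get fined in proportion to how many punishers
        # are around to catch them.
        non_punisher -= second_order_fine * frac_punishers
    return punisher, non_punisher

for second_order in (False, True):
    p, np = average_payoffs(frac_punishers=0.9, second_order=second_order)
    winner = "punisher" if p > np else "non-punisher"
    print(f"second-order punishment={second_order}: "
          f"punisher={p:.1f}, non-punisher={np:.1f} -> {winner} is fitter")
```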

Evidence for altruistic punishing/rewarding
The ultimatum game provides a simple example of self-harming behavior to enforce a social norm. An offerer is given some amount of money, of which they offer some fraction to a responder. If the responder accepts the offer, they both keep their fractions; if the responder rejects the offer, they both keep nothing. Apparently, proposals offering less than 25% are very likely to be rejected across cultures and across monetary stakes. I wonder, though, how high those monetary stakes get. If I were the responder and were offered $2.50 of $10, I’d be very likely to forgo the $2.50 to teach the offerer a lesson; that teaching opportunity would be worth $2.50 to me. However, if the offer were $250 of an offerer’s $1,000, I would almost certainly accept, I suppose because I believe that $250 would significantly increase my fitness. So I suspect that the decision of whether or not to punish is the result of weighing the detriment to one’s own fitness imposed by punishing, the effectiveness of the punishment, and other factors.
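Here is a sketch of that responder reasoning. The idea that “teaching a lesson” has a fixed dollar value to the responder is my own framing, not the article’s, and the threshold and lesson value are made up; the sketch just shows how the same 25% split can be rejected at low stakes and accepted at high stakes.

```python
# Sketch of the responder's reasoning in the ultimatum game.
# fairness_threshold and lesson_value are hypothetical numbers.

def responder_accepts(offer, pot, lesson_value=5.0, fairness_threshold=0.25):
    """Accept unless the offer is unfair AND rejecting it costs less
    than the satisfaction of punishing the offerer."""
    unfair = offer / pot <= fairness_threshold   # the $2.50-of-$10 case sits at the boundary
    worth_punishing = offer < lesson_value
    return not (unfair and worth_punishing)

print(responder_accepts(offer=2.50, pot=10))    # False: reject, the lesson is cheap
print(responder_accepts(offer=250, pot=1000))   # True: same split, too costly to reject
```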
The ultimatum game provides an opportunity for the person who is harmed (the responder) to punish the person who harmed them. However, in the real world, social norms are often enforced by a third party, one who isn’t directly harmed. This sort of scenario has been simulated in a game with three players: an allocator, a recipient, and a third party. The allocator is given 100 monetary units (MU) and is allowed to give any fraction of the 100 MU to the recipient. The third party is given 50 MU, from which they can spend any number to punish the allocator. Every MU spent by the third party as punishment results in a 3 MU penalty to the allocator. Since the third party gains nothing by punishing, the economically rational, purely self-interested punisher (i.e. Homo economicus) would never punish and would simply keep all 50 MU. However, third parties often do punish allocators who offer less than half their MU.
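The payoff structure of that game is simple enough to write down directly. The endowments (100 MU and 50 MU) and the 3:1 punishment ratio come from the description above; the particular transfer and punishment amounts in the example calls are mine, chosen only to show how punishment is costly to the punisher and costlier to the allocator.

```python
# Payoffs in the third-party punishment game described above.

def third_party_game(transfer, punishment_spent):
    """Return (allocator, recipient, third_party) payoffs in MU."""
    allocator = 100 - transfer - 3 * punishment_spent  # each MU spent hits the allocator 3x
    recipient = transfer
    third_party = 50 - punishment_spent
    return allocator, recipient, third_party

# Homo economicus never punishes...
print(third_party_game(transfer=10, punishment_spent=0))   # (90, 10, 50)
# ...but real third parties often pay to sanction a stingy allocator.
print(third_party_game(transfer=10, punishment_spent=10))  # (60, 10, 40)
```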
Interestingly, while 55% of third parties punish unfair offers, 70–80% of recipients expect unfair offers to be punished. So perhaps we expect more fairness than we are willing to bring about.
In an altruistic rewarding game, a player can give money to another, then the players are shuffled and there is another round of giving, and the shuffling and giving are repeated. One experiment ran such a game with half the players able to develop a reputation (for giving or not) and the other half anonymized so that they couldn’t. Those who were reputation-enabled helped on average in 74% of exchanges, while those who were reputation-disabled helped in 37% of cases. This suggests that the possibility of developing and benefiting from a reputation for generosity may drive some altruistic behavior, but some baseline generosity (37%) exists beyond that.
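A rough back-of-the-envelope for why visible reputations can sustain giving (my own framing, not the article’s analysis): helping costs something now, but if reputations are visible it raises the chance that a future partner helps you back. Only the 74%/37% rates come from the text; the cost, benefit, and the use of their difference as the “reputation effect” are assumptions.

```python
# Hypothetical expected-value check: does the reputation benefit of giving
# outweigh its immediate cost? cost and benefit are invented numbers.

def helping_pays(cost, benefit, reputation_boost):
    """Is the expected future return from a better reputation worth the cost?"""
    return benefit * reputation_boost > cost

# Use the gap between the two observed helping rates as the boost to your
# own chance of being helped later (0.74 - 0.37 = 0.37).
print(helping_pays(cost=1.0, benefit=4.0, reputation_boost=0.37))  # True: 1.48 > 1.0
# With anonymity the boost vanishes, leaving only unconditional giving.
print(helping_pays(cost=1.0, benefit=4.0, reputation_boost=0.0))   # False
```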
Here’s an interesting tidbit from a neurobiology study. Two groups played prisoners’ dilemma games, one group with another human, the other with a computer program. When the human-interacting group achieved a result of mutual cooperation, their neural reward circuitry showed greater activation than that of the computer-interacting group achieving the same result.
