
Friday, September 2, 2011

Nature of Human Altruism


Some thoughts I jotted down as I was reading a fascinating evolutionary psychology and behavioral economics article on human altruism: Fehr and Fischbacher, 2003. The nature of human altruism. Nature 425, 785–791.
In the abstract, the authors say that “current gene-based evolutionary theories cannot explain important patterns of human altruism,” and that gene-culture co-evolution will be invoked as an alternative explanation.
I am excited to see the explanation they deliver. I tend to be skeptical of this sort of thing because, well, evolution operates at the level of the gene. It is simply the nature of the process. In a given environment, a gene that has properties that lead to increased copies of itself will proliferate, and a gene that has properties that cause it to become less abundant in the gene pool will decline. That is the primary level on which evolution operates; anything else is secondary. That is not to say that other levels are ineffectual, simply that they are emergent from, and constrained by, that primary level of evolution.
An analogy: suppose that instead of looking to evolution to explain how altruism came to be, you wish to look at the movement of matter to explain how a bunch of sycamore branches and leaves got to be a hundred feet up in the air. Gravity is the primary governor of how matter moves. Matter will always move toward other massive objects, so if you observe a case in which objects are not moving toward a massive object (like our sycamore leaves), you must search for a more powerful countering force that still operates within the confines of gravity. In this case, explaining the phenomenon that appears to have bested gravity requires invoking the transport of water through the xylem, driven by water potential gradients from the soil to the leaves. Gravity is still operating on those leaves, but for the moment, the other forces are stronger, so the leaves remain in the air.
Cultural evolution is not impossible, and if we observe behavior that cannot be explained at the level of the gene (just as we couldn’t explain how the sycamore was up in the sky by looking to gravity alone), we should seek levels of evolution that emerge out of the gene level. However, we must always keep in mind that, just as gravity is always acting on matter, gene-based evolution is always acting on our bodies, behaviors, and societies.
Early on, the authors note that there is evidence for “strong reciprocity” among humans: the rewarding of “cooperative, norm-abiding behaviors” and the punishing of norm-violating behavior, even when the rewarder/punisher gains no benefit whatsoever from rewarding/punishing. The problem with this is that rewarding/punishing must impose some cost on the rewarder/punisher, and if they truly gain nothing, then the genes that encode that behavior decrease the fitness of any individual in whom they occur. Thus, if there is a group of people, all of whom have the rewarder/punisher genes, a mutation that inactivates those genes would produce an individual who is fitter than the rest of the group*, which would cause the inactivated genes to increase in frequency, thus eliminating strong reciprocity. I don’t mean to state categorically that strong reciprocity is impossible, and indeed if it is observed, we ought to search for a mechanism for its evolutionary stability. However, I hope I have illustrated how difficult it is for such a trait to become evolutionarily stable, and thus the challenge any evolutionary biologist faces in trying to explain its occurrence.
* Upon rereading this, it occurs to me that this is not necessarily true. If punishments were also doled out to those who don’t altruistically punish (or altruistic rewards to those who do), then an individual who received the non-rewarder/punisher gene would be punished by their group and could thus be made less fit.**
** And lo and behold, this is precisely the conclusion the article reaches, nicely summarized in the figure below. The red line is the possibility that only occurred to me upon rereading what I had originally written.
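A toy payoff comparison makes the logic of these footnotes concrete. The numbers and the helper below are my own illustrative assumptions, not values from the paper: in a group of altruistic punishers, a mutant who never punishes avoids the cost of punishing and out-earns everyone else, unless failing to punish is itself punished.
```python
# Toy model (illustrative assumptions, not from Fehr & Fischbacher).
BASELINE = 100          # payoff everyone earns regardless of strategy
COST_PUNISH = 2         # cost a punisher pays per sanctioned violation
FINE_NON_PUNISHER = 5   # second-order fine for failing to punish a violator
VIOLATIONS = 3          # norm violations each individual witnesses

def payoff(is_punisher, second_order_punishment):
    p = BASELINE
    if is_punisher:
        p -= COST_PUNISH * VIOLATIONS            # punishing is costly
    elif second_order_punishment:
        p -= FINE_NON_PUNISHER * VIOLATIONS      # shirkers get fined by the group
    return p

for second_order in (False, True):
    print(f"second-order punishment={second_order}: "
          f"punisher={payoff(True, second_order)}, "
          f"non-punisher mutant={payoff(False, second_order)}")
# Without second-order punishment, the mutant out-earns the punishers (100 vs 94),
# so the punishing gene should decline; with it, the mutant does worse (85 vs 94),
# and strong reciprocity can be evolutionarily stable.
```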

Evidence for altruistic punishing/rewarding
The ultimatum game provides a simple example of self-harming behavior to enforce a social norm. An offerer is given some amount of money, of which they offer some fraction to a responder. If the responder accepts the offer, they both keep their fractions; if the responder rejects the offer, they both keep nothing. Apparently, proposals offering less than 25% are very likely to be rejected across cultures and across monetary stakes. I wonder, though, how high those monetary stakes get. If I were the responder and were offered $2.50 of $10, I’d be very likely to forgo the $2.50 to teach the offerer a lesson. That teaching opportunity would be worth $2.50 to me. However, if the offer were $250 of an offerer’s $1,000, I would almost certainly accept it, I suppose because I believe that $250 would significantly increase my fitness. So I suspect that the decision of whether or not to punish is the result of an analysis of the detriment to one’s own fitness imposed by punishment, the effectiveness of the punishment, and other factors.
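Just to pin down the payoffs, here is a minimal sketch; the function and the use of a 25% rejection threshold are my own illustration, not a model from the article:
```python
def ultimatum_payoffs(stake, offer, accepted):
    """Return (offerer, responder) payoffs for one round of the ultimatum game."""
    if accepted:
        return stake - offer, offer
    return 0, 0  # a rejection leaves both players with nothing

# A responder who rejects any offer at or below ~25% of the stake (roughly the
# cross-cultural pattern described above) gives up $2.50 to punish a lowball
# offer from a $10 stake, but would be giving up $250 on a $1,000 stake.
print(ultimatum_payoffs(10, 2.50, accepted=False))    # (0, 0): a cheap lesson
print(ultimatum_payoffs(1000, 250, accepted=True))    # (750, 250): too costly to reject
```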
The ultimatum game provides an opportunity for the person who is harmed (the responder) to punish the person who harmed them. However, in the real world, social norms are often enforced by a third party, one who isn’t directly harmed. This sort of scenario has been simulated in a game with three players: an allocator, a recipient, and a third party. The allocator is given 100 monetary units (MU) and is allowed to give any fraction of the 100 MU to the recipient. The third party is given 50 MU, any amount of which they can spend to punish the allocator. Every MU spent by the third party as punishment results in a 3 MU penalty to the allocator. Since the third party gains nothing by punishing, the economically rational, purely self-interested punisher (i.e. Homo economicus) would never punish and would simply keep all 50 MU. However, third parties often do punish allocators who offer less than half their MU.
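The arithmetic of that three-player game, sketched below (the function name and the example numbers are mine):
```python
def third_party_punishment(transfer, punishment):
    """Payoffs (allocator, recipient, third party) in MU.

    The allocator starts with 100 MU and gives `transfer` to the recipient;
    the third party starts with 50 MU and spends `punishment`, each MU of
    which costs the allocator 3 MU.
    """
    allocator = 100 - transfer - 3 * punishment
    recipient = transfer
    third_party = 50 - punishment
    return allocator, recipient, third_party

print(third_party_punishment(transfer=10, punishment=0))   # Homo economicus looks away: (90, 10, 50)
print(third_party_punishment(transfer=10, punishment=10))  # paying 10 MU docks the stingy allocator 30 MU: (60, 10, 40)
```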
Interestingly, while 55% of third parties punish unfair offers, 70–80% of recipients expect unfair offers to be punished. So perhaps we expect more fairness than we are willing to bring about.
In an altruistic rewarding game, a player can give money to another; then the players are shuffled and there is another round of giving, and the shuffling and giving are repeated. A recent experiment ran such a game with half the players able to develop a reputation (for giving or not), while the other half were kept anonymous so that they couldn’t develop one. Reputation-enabled players helped in an average of 74% of exchanges, while anonymous players helped in 37% of cases. This suggests that the possibility of developing and benefiting from a reputation for generosity may drive some altruistic behavior, but some baseline generosity (37%) exists beyond that.
Here’s an interesting tidbit from a neurobiology study. Two groups played prisoners’ dilemma games, one group against another human, the other against a computer program. When the human-interacting group achieved mutual cooperation, their neural reward circuitry was more strongly activated than that of the computer-interacting group achieving the same result.

Wednesday, August 24, 2011

Behavioral Economics Reflections

Chapter 13 of Daly & Farley’s Ecological Economics is titled “Human behavior and economics” and is largely the intersection of happiness psychology and microeconomics, used to discredit the assumptions of neoclassical economics. I love behavioral economics, and I have been surprised by how little the shaky foundational assumptions of economics are questioned, still less their consequences. So I enjoyed this chapter greatly and thought I would pull a few seemingly important points out of it.

Homo economicus is a dick.

He is utterly selfish, coldly rational, and totally insatiable. I have known a few people that approach this archetype, and I have made an effort to know them less.

You should probably move to Costa Rica.

The authors plot, by country, life satisfaction against per capita income. For countries with median incomes of less than $20,000, more income correlates with greater satisfaction. However, above $20,000 per year, life satisfaction flat-lines. People in Norway (~$55,000/year) are no happier than people in Ireland (~$35,000/year). A few countries stand out as obvious outliers. The one that really caught my eye is Costa Rica, where people have the greatest average life satisfaction of any country surveyed (~8.5/10) while earning a meager $10,000/year.

My parents bought the wrong house. Maybe.

While making more money and getting more and better stuff doesn’t make us happier, being richer than those we compare ourselves to does. So my parents, abiding by the maxim to buy the smallest house in the nicest neighborhood you can afford, put us in a situation where we’d often be comparing ourselves to neighbors with more money than us. For whatever discontentment that may have spurred, it also got my sister and me into one of the best public school systems in the country, and it put us in regular contact with doctors and lawyers and professors, normalizing the experience of success and enculturating us to a higher-class existence than we would otherwise have known. If my digression here makes any point, it is just that measuring the benefit of any decision or behavior is complicated. And that you should probably take your big American savings and move to Costa Rica.

We are intrinsically irrational.

Much has been made of this recently (see, e.g., Dan Ariely's Predictably Irrational), but it is so undermining of the assumptions of neoclassical economics that it deserves to be stressed. E.g., suppose that an infectious disease is threatening to kill a town of 600 people. You can choose between quarantining the town, which will have a 1/3 chance of saving everyone in the town and a 2/3 chance of saving no one, or administering a vaccine, which will save 200 of the 600 people. Which would you choose? Now, suppose that you have to choose between a course of action that will kill 400 people and a course of action that has a 2/3 chance of killing all 600 people and a 1/3 chance of killing no one. Which would you choose? In the second scenario, when you're doing the killing, most people (78%) would take the gamble. In contrast, in the first scenario, when you're doing the saving, most people (72%) would take the certain saving of 200 lives. So, when saving, most people take the sure thing; when killing, most people roll the dice. But the scenarios are identical in their effects! The choices are the same: both have you choose between 400 people certainly dying and a gamble with a 2/3 chance that all 600 die and a 1/3 chance that no one does, yet because they are "framed" differently, the majority choice flips from the sure thing to the gamble.
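A quick back-of-the-envelope check (mine, not the chapter's) shows that the two framings have identical expected outcomes:
```python
TOWN = 600

# "Saving" frame
vaccine_saved    = 200                        # certain: 200 of 600 saved
quarantine_saved = (1/3) * TOWN + (2/3) * 0   # expected value of the gamble

# "Killing" frame
certain_killed = 400
gamble_killed  = (2/3) * TOWN + (1/3) * 0     # expected value of the gamble

print(vaccine_saved, quarantine_saved)   # 200 vs 200.0 expected lives saved
print(certain_killed, gamble_killed)     # 400 vs 400.0 expected deaths
```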

You should volunteer more often. And then go dancing.

While making more money (above the $20k threshold discussed earlier) won’t make you happier, volunteering will. Devoting resources to others produces a lasting sense of contentment, and people who volunteer are happier, healthier, and more confident than those who do not. I read somewhere else that there is only one thing that makes people happier than volunteering… dancing.

Community and communication beget cooperation.

Suppose you and three others are each given $100. Each of you will choose individually how much of your $100 to contribute to a pool, which will then be doubled and split evenly among the four of you. In the end you'll go home with what you didn't contribute to the pool plus your 1/4 of the doubled pool. How much do you contribute? Homo economicus, of course, would contribute nothing. He’d keep his $100 and take his share of whatever the three suckers with him contributed. What a dick! Back in the real world, among university students, the average contribution to the pool is about half, with a bunch giving nothing and some giving all. So, given that the individually rational behavior is to give nothing, while the collectively wealth-maximizing behavior is to give everything, how can you get people to contribute more? Let them talk to each other before they make their decisions. They contribute more to the doubled pool that way, even if they know that in the end, they won’t be told how much each person contributed.
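The payoff arithmetic, in a quick sketch (the function below is mine, just restating the rules above):
```python
def public_goods_payoffs(contributions, endowment=100, multiplier=2):
    """Each player keeps what they didn't contribute plus an equal share of the
    multiplied pool."""
    share = multiplier * sum(contributions) / len(contributions)
    return [endowment - c + share for c in contributions]

print(public_goods_payoffs([0, 100, 100, 100]))    # the free rider walks away with $250
print(public_goods_payoffs([100, 100, 100, 100]))  # everyone contributes: $200 each
print(public_goods_payoffs([0, 0, 0, 0]))          # nobody contributes: $100 each
```
Contributing nothing always raises your own payoff relative to contributing, yet the all-contribute outcome leaves everyone $100 better off than the all-keep outcome, which is exactly the tension described above.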

My first thought upon learning this was the importance of community control of resources, among communities small enough that everyone knows each other. If you share a fishery with a couple dozen people with whom you talk regularly, you are very unlikely to deplete that fishery. If you share a fishery with a couple million people (or, heaven forbid, a few corporations), or even a couple dozen people you don’t know or talk to, there’s very little to restrain people from acting in the individually rational, but collectively disastrous, manner of harvesting fish as fast as possible until the fishery collapses.

Motivating behavior with money can be counterproductive.

Parents who refuse to pay their children for grades have long known this to be true, and now we have science to back it up. A day care center in Israel had a problem with parents being late to pick up their children. To get the parents to be more prompt, the day care began charging parents who showed up late. It turns out that only made the problem worse. It seems that before they were being charged for being late, the parents (most of them, anyway) felt morally obliged to pick up their children on time. When the day care put a fee on being late, the moral obligation was lifted as it was replaced by a market transaction. Here's an even more striking example of how money messes with our moral-social behavior: when subjects are primed to think about money (e.g., by having a computer in the corner of the room with a screensaver displaying money), they are less likely to help others, less likely to ask for help themselves, less social, and when they sit down at a table with others, they place their seats farther away than those who weren’t “money-primed.”

Thursday, August 18, 2011

Admirable & Despicable Behavioral Traits

Prompt: Before doing any of the readings, make up a list of 5 personality traits that define a good or admirable person, and 5 personality traits that define a bad or despicable person. Physical traits such as strength, intelligence, athletic ability, looks, and so on are irrelevant.

Traits defining a good, admirable person:
  1. Concern for wellbeing of others, empathetic
  2. Truth-seeking
  3. Acting in accordance with principles
  4. Steadfast
  5. Humble in the face of uncertainty

Traits defining a bad, despicable person:
  1. Opportunistic
  2. Self-benefiting to the detriment of others
  3. Lack of commitment to truth
  4. Over-confident
  5. Deceptive