Tuesday, May 08, 2012

Utilitarianism, contractualism, and self-sacrifice

[This post is a response to cousin_it's request for counterarguments to utilitarianism.]

Saving the drowning child is not a self-sacrifice
Suppose that you live in a developed country, and earn a high income even by developed country standards. One day you are walking home from a business meeting in your best suit, worth some $1,000, and see a small child drowning in a muddy (and foul-smelling) pond off the road. No one else is around, but you could save the child, at the cost of hopelessly ruining the suit. Most people intuit that in such a case one should save the child, despite the cost of a $1,000 suit.

The utilitarian philosopher Peter Singer has often argued that since we should make a financial sacrifice for the drowning child, we should do the same to save the lives of children in poor countries, e.g. by donating to the most cost-effective public health charities identified by GiveWell. Others would generalize to saving future generations, saying that if we can reduce existential risk and avert astronomical waste, saving trillions of happy lives in expectation, then that is even better than saving one life today.

However, the drowning child case is problematic as a justification for self-sacrifice in other contexts: it probably doesn't involve any self-sacrifice at all, but rather an expected selfish gain.



Consider the economic value of a child in a rich country: parents would spend hundreds of thousands of dollars to save a child's life, and the child will likely earn millions of dollars over its lifetime. Grateful parents or the grown-up child are quite likely to provide reciprocation worth more than $1,000 over a lifetime. This might take the form of immediate gifts, a willingness to provide financial help or housing in the event of personal trouble, assistance with finding a job or mate, and more. So long as the beneficiaries return even 1% of the gains they receive to the do-gooder, the act will be mutually beneficial in expectation.
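The arithmetic here can be made explicit. A minimal sketch, where the $2 million lifetime-value figure and the 1% reciprocation rate are illustrative assumptions in the spirit of the paragraph above, not data:

```python
# Back-of-the-envelope reciprocity calculation (all figures assumed).
suit_cost = 1_000            # cost of the ruined suit
lifetime_gains = 2_000_000   # rough lifetime value of the rescue to the child and parents
return_rate = 0.01           # fraction of those gains reciprocated to the rescuer

expected_reciprocation = lifetime_gains * return_rate
# Even a 1% return rate makes the rescue selfishly positive in expectation.
print(expected_reciprocation > suit_cost)
```

On these assumptions the expected reciprocation is $20,000, dwarfing the $1,000 suit; the conclusion survives even if the assumed numbers are off by an order of magnitude.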

Aside from reciprocity, saving the child provides an exciting story to tell later, and a way to impress friends and potential mates with one's generosity or heroism. Panhandlers selectively target couples on dates because signaling to a potential partner drives up contributions. Various psychology experiments support this common-sense picture, although psychology is not among the more reliable sciences. These benefits would decline at the margin as more children were saved, but could easily exceed $1,000 for the first.

To the extent that the intuition that one should save the drowning child reflects these selfish benefits, it doesn't provide a starting point of morally obligatory self-sacrifice.

Insurance, collective action, and contractarianism
If the drowning child case doesn't really involve self-sacrifice, then perhaps that helps explain why we feel it is obligatory, and we might simply reject the idea that moral obligations can be so demanding. If one is obligated to give a significant portion of one's wealth or income whenever this would save hundreds or thousands of lives, then acting morally would mean having substantially less money to spend on luxuries and investments for oneself, family, and friends. Therefore, it's not morally required to save those lives. QED...?

However, common sense morality, on standard construals, does often demand that people take actions that will make them poorer or otherwise worse off than the alternatives, as Richard Chappell describes:

Perhaps the most obvious counterexample would be if we were sometimes required to sacrifice our life itself. And it does seem clear that this can be required: I may not steal the cure to a deadly disease from a chemist who is about to mass-produce it to save millions, even if I'm sure to die without immediate treatment.
Less dramatically, we may also be required to sacrifice our long-term liberty and/or living standards: A convicted felon, sentenced to life in prison, may not murder his guards for the sake of escaping to freedom. A wealthy slave-owner, whose wealth and way of life depends on the ongoing exploitation of his slaves, is obliged to set his slaves free even if this would spell the end of his life of leisure.
If morality can demand all this, is it really so incredible to think that it might also require the wealthy to sacrifice their lifestyle to free those who are enslaved, not by people, but by poverty? Indeed, would it not be more incredible to think that it would demand one but not the other?

However, there is an interesting commonality between many of these examples. While there may be sacrifices required of specific people at specific times, on average the overwhelming majority of people in a society expect more benefit than harm from the rule. Ex ante, my own child might have drowned instead. I am far more likely to be one of the millions dependent on a mass-produced cure than the one person in a position to steal it. I am far more likely to pay taxes to hire prison guards than to be in a position where I can only escape from prison by murder.

So initially endorsing such moral obligations often isn't a sacrifice at all, but rather a benefit on average. Such moral principles can act like fire insurance, which transfers money from policyholders whose homes do not burn down to those who do suffer fires, but is willingly sought out by policyholders for its expected benefit. Traditional charitable support for victims of disease, widows, and orphans can be understood as cutting out the intermediary. Likewise, the most popular government redistributive policies are often described as "social insurance." Programs that seem to transfer resources to those who predictably will not be able to reciprocate tend to be much less popular.

The requirement that people make sacrifices to comply with the overall scheme if they wind up having to pay more than they receive ex post can then be understood in terms of keeping promises and reliability. A reputation for tit-for-tat reciprocity is valuable, interactions are iterated, and perfect deception is hard, so strong emotions around reciprocity and deal-making can be very selfishly valuable. Some contractarian philosophers, such as David Gauthier, try to derive almost all of morality from this starting point. Derek Parfit's Hitchhiker case and Timeless Decision Theory speak to the same issue.

Further, since at any given time most people expect to benefit from the collective insurance arrangements on average in the future, they can collectively administer praise and blame (and coercive force) on their behalf. This "cheap talk" will shape the moral discourse more than the complaints of the man tempted to steal the disease cure at the expense of millions.

How far ex ante?
The more individuals know about their expected needs, the less voluntary insurance will redistribute between them: if everyone knew exactly whose houses would catch fire, then only those people would purchase insurance, which would cost as much as the repairs, while the majority would enjoy higher wealth without redistribution to those with burned houses.
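The insurance logic above can be put in numbers. A toy model with assumed figures (1,000 homes, a 1% fire rate, and a $100,000 repair bill are illustrative, not actuarial data):

```python
# Toy fire-insurance model (all numbers assumed for illustration).
homes = 1_000
p_fire = 0.01        # each owner's chance of a fire
repair = 100_000     # cost of rebuilding a burned home

# Ex ante, no one knows whose house will burn: everyone willingly pays
# the actuarially fair premium, and risk is pooled across all owners.
premium_ex_ante = p_fire * repair

# With full knowledge of outcomes, only the doomed 1% would buy coverage,
# and their premium must cover the entire repair: insurance collapses
# into each victim simply paying for their own loss.
premium_full_knowledge = repair

print(premium_ex_ante, premium_full_knowledge)
```

On these assumptions the pooled premium is $1,000 per owner, while under full knowledge the "premium" is the whole $100,000 repair, paid only by those who would have claimed anyway: the redistribution disappears along with the ignorance.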

A contractarian perspective, in which sacrifices are demanded as part of an ex ante beneficial arrangement, becomes more redistributive the less knowledge is allowed to enter the construction of the arrangement. Rawls uses the device of an "original position," in which individuals exist without knowledge of who they will be in society, to argue for redistribution to the worse off, since in the original position people would not know whether they would be rich or poor. Rawls claimed that parties in such a situation would maximize the welfare of the worst-off even if this made the average much worse, i.e. that they would be infinitely risk-averse, for no clear reason. Others have taken this device to imply a form of average utilitarianism: agents without knowledge of their identities who increase average utility will increase their expected utility by a like amount.

One could go further and remove even the knowledge that one exists/will exist, allowing for possible people to count in the calculation. A world with a trillion people instead of a billion people would provide 1,000 times as many chances to exist, from the perspective of a mind deliberating in this super-original position.

Even further, one could remove knowledge of distinctive agent-neutral values, like a love of pleasure over pain, or artistic beauty over ugliness, and try to optimize for some distribution over those values along the lines of the account in Evan Williams' dissertation.

One can get off the train at many points, with different practical implications for particular cases.

Intuitions and cases
So it is plausible that much of our endorsement of self-sacrifice in standard rescue cases reflects contractualist processes and understandings. Seen in that light, it is easier to grasp the less utilitarian side of various disputes about the right thing to do that often come up among those interested in effective do-gooding.

Bone marrow donation vs malaria relief
In this Less Wrong thread a poster promoted bone marrow donation for US citizens as a highly efficient means of life-saving. Some basic calculations suggested a chance of less than 1 in 5000 that registering would lead to a life saved for a time, and there were offsetting costs of mailing, providing samples, fruitless medical testing, etc. So it was pretty clear that in the short to medium run (not paying attention to effects on existential risk or galactic colonization and so forth) GiveWell's top charities offered many times the expected lives saved per dollar.

On the other hand, the expected benefit to Americans would be at least hundreds of dollars. So for American potential donors a world with a robust practice of registering for potential donation provides benefits much greater than the costs of registering. So on contractualist grounds, or Kantian ones, or via timeless decision theory, registering for bone marrow donation could make sense for its selfish benefits. Such a person, knowing that they were not born in a poor country, might not feel the same pull towards malaria relief unless also willing to go further ex ante, into Rawls' original position.
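The expected-benefit claim above can be checked with simple arithmetic. A sketch with assumed inputs (the $5 million value of a statistical life and the $100 registration cost are illustrative assumptions, not figures from the thread; only the 1-in-5000 chance comes from the calculations described above):

```python
# Expected selfish value of a robust bone marrow registry (assumed figures).
p_save = 1 / 5_000                  # chance registering leads to a life saved (from the thread)
value_of_statistical_life = 5_000_000  # assumed; a common rich-country VSL figure
registration_cost = 100             # assumed cost of mailing, samples, testing, etc.

# If everyone registers, each person's expected benefit as a potential
# *recipient* scales with the same 1-in-5000 match probability.
expected_benefit = p_save * value_of_statistical_life
print(expected_benefit > registration_cost)
```

On these assumptions the expected benefit is $1,000 per registrant, consistent with "at least hundreds of dollars" and comfortably above the cost of registering, which is what makes the practice mutually beneficial among potential American donors even though it is far less cost-effective than malaria relief per life saved.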

Human assistance vs animal welfare
David Gauthier noted that in his contractarian account the rights of helpless animals, incapable of cutting deals with humans, could only be derived indirectly, from the preferences of powerful agents concerned with their welfare. When asking the question "how would I, a moral agent, fare if all moral agents adopted this principle?" the principle "advance the interests of moral agents without separate regard for other creatures" would do well. Some may find the question "what if I had been a minnow?" ill-posed.

In contrast, a hedonistic utilitarian could simply (!?!) weight different brains with some procedure and count pleasures and pains even in creatures incapable of committing to moral principles or bargaining with humanity.

Hedonium and utility monsters
Robert Nozick described a "utility monster": a creature with increasing marginal welfare in resources, such that welfare would be maximized by giving all available resources to the monster, dooming humanity to be fed into its maw, or perhaps worked as brutally as was efficient to best feed it. In the long run humans, and even humanlike minds, probably do not generate pleasure or welfare as efficiently as possible, so hedonium may provide a surrogate utility monster. On a utilitarian calculus, converting the entire galaxy into hedonium, including the bodies and resources of all human-descended minds, would be better than leaving a portion for the humans to enjoy the fruits of their labors for a few hundred billion years. But a bare program optimized for pleasure at the expense of senses, memory, personality, and any semblance of ability to interact with the world at large or act as a moral agent would play a far less direct role in a contractualist account, conditioned on the knowledge that one is a person.


Global catastrophic risks vs existential risks
Serious existential risk subject to human control is largely a very recent phenomenon, born of the Industrial Revolution and modern technology, with the worst likely still to come. So a past policy of indifference to such risk would not likely have already destroyed humanity. But going forward we might focus effort at the margin on risks that would kill almost all current people but not permanently ruin Earth-derived civilization, or on risks that would kill everyone or permanently screw up our future. From a contractualist point of view the temporary collapse of civilization can be almost as bad as extinction, since it kills the currently cooperating parties.

How important is human influence/scope vs a human niche in a posthuman future?
A future mostly dominated by unconscious creatures, by creatures far less happy than hedonium per unit of resources, or in which most resources are wasted in fruitless competition, will do terribly on utilitarian measures compared to a world that better reflects humane values (including those of the utilitarian-leaning). But from a contractualist perspective, combined with diminishing marginal utility of lifespan, such scenarios might be OK, so long as some small share of the total resources was allocated to current humans to enjoy billion-year lifespans full of reward and eudaimonia.

Use monetary/economic cost-benefit analysis or $ per QALY?
One of the major reasons why economists like to report cost-benefit analyses that value life using willingness to pay, which values rich-country lives at many times poor-country lives, is that this loosely tracks the ability to construct mutually beneficial deals: the greater the dollar-equivalent spoils, the easier it is to redistribute some gains back to those who might otherwise lose. This is similar to the reasoning in the drowning child example above, where saving the life of a child with richer parents made it more likely that the do-gooder would benefit on net. This correlation is imperfect, as monetary cost-benefit analysis will sometimes endorse policies when redistribution is not feasible in practice, but it does seem to provide much of the intuitive support for the criterion. Policies of this type will often match costs and benefits to the same electorate/recipient base.

Disclaimers
There are many other factors that go into our differential reactions to cases like the drowning child and GiveWell charities. Clear images and other features of close-at-hand situations are better at activating empathy, making the connection between action and outcome intuitively real. Considerations of cost and mutual benefit can produce principles that are themselves deontological or universalist rather than explicitly contractualist. The pressures that produce contractualist-tending patterns of moral sentiment may be indirect. For instance, Joshua Greene uses the example of Mothers Against Drunk Driving: the more teens are killed in car accidents, the more bereaved family members there will be to argue that drunk driving is immoral. This effect will tend to roughly align moral sentiments about such practices with the damage done, but not by making people thoroughgoing consequentialists.

There are many other arguments in favor and against utilitarian ethical stances and concrete recommendations that I am ignoring in this post. And I remain personally sympathetic to the utilitarian take in many domains, including ones listed above.
