Utilitarianism and its discontents

Ethics is complicated. Humans have evolved to feel a hodge-podge of different moral intuitions, each individually useful in facilitating coordination and cooperation in our ancestral environment. This means that the formulation of any ethical framework inherently involves a series of compromises between the relative importance of each of our intuitions, our desire for broad principles which can be consistently applied, and whatever value we place on those principles being widely shared by other people. I highly doubt that any ethical framework exists which doesn't violate at least one cherished moral intuition. Given this, our goal should be to find a compromise which uses consistent principles to align our actions with as many of our most important intuitions as we can. The first half of this essay attempts to reconcile several different ethical traditions, while maintaining that we should approach the most important decisions from a broadly utilitarian perspective; the second consists of a taxonomy of objections to utilitarianism, and analysis of the extent to which each succeeds. This essay’s primary purpose is not persuasive but rather explicatory; I’m happy to defend each of the claims I make, but don’t have time to fully justify each of them here.

I find it convenient to divide moral intuitions into three groups. The first consists of social intuitions - those governing how we should engage with other people in relationships and day-to-day interactions. These intuitions are broadly based around being virtuous; they promote traits such as honesty, reciprocity, kindness, respect and so on. The second consists of societal intuitions, giving us deontological rules which facilitate interactions within society in general. These include prohibitions against theft, murder and most other illegal things, as well as some strong norms like the one against incest (this category also includes supererogatory commitments to doing good, but they are less of a priority). The third consists of what I'll call policy intuitions: those that apply to large-scale situations with particularly important stakes, which are outside the usual scope of societal rules. In these cases people tend towards consequentialist reasoning. The history of moral philosophy has largely consisted of these three types of intuitions being pitted against each other, for example using trolley problems. However, doing so largely misses the point: that each of these intuitions has a role to play within a limited domain. Hopefully from the explanation above it's fairly clear what these domains are, but since this point is so important, let me be a little more explicit. Social intuitions should govern low-stakes situations which are primarily about interpersonal relationships: engaging with other people as individuals. Societal intuitions should govern low- or high-stakes situations where we engage with the rest of society, and there are clear rules or norms to uphold. Policy intuitions should govern high-stakes situations where norms or rules fail to apply - including reasoning about which norms and rules to promote.

Using deontological or consequentialist principles to govern social interactions would be a disaster. Firstly, because our emotional responses to things which happen in our own lives are generally very strong, and it's difficult to abstract away from our natural instincts in order to follow other rules. Secondly, because figuring out the appropriate deontological/consequentialist response to every social nuance would be borderline impossible. But most importantly, because motives matter in friendships and relationships, and people are often quite transparent. If my partner made me breakfast in bed because of the benefits to society in general, not because they loved me, it wouldn't be nearly as nice. Note that while virtues are the main drivers of pro-social behaviour, we still modulate them based on deontological/consequentialist reasoning. For example, we despise certain types of discrimination and therefore promote the virtue of tolerance; we want the poor to be better off and therefore promote the virtue of generosity.

Using virtue ethics or consequentialism to run society would fail just as terribly. If people thought they could do anything as long as their intentions or outcomes were good, society would be a messy, unpredictable place splintered into factions each convinced of their own righteousness, while violating others' basic rights (alternatively, there could be de facto dictators entrusted with the power to decide the moral worth of each action - still not an attractive prospect). Having clear-cut rules is the best way to avoid this chaos.

Lastly, of course, using virtue ethics or deontology to decide policy issues would result in massive amounts of unnecessary suffering or lost potential human flourishing - so we're left with consequentialism as the overarching framework under which to decide the direction of our countries and the world. Note that my use of consequentialist logic just now, to justify not overruling our non-consequentialist intuitions as long as they're exercised within their proper domains, is itself an example of consequentialism playing this overarching role. However, it may be psychologically difficult or impossible for consequentialists to act according to virtue ethics while not believing in it. So if we're good consequentialists, we may end up cultivating what Parfit calls a "disposition" to believe that virtues matter when it comes to social interactions. Similar reasoning applies to deontological rules: without people being inclined to obey the rules for their own sake, wider confidence in them will be undermined.


Of course, these dispositions should still be subject to consequentialist overrides in cases with high stakes. But if confronted with something like a trolley problem, I think it’s reasonable to say "Murdering one to save many leads to the best outcome in this case, by assumption. Overall, we should strive towards the best outcomes. However, the best way for me to contribute towards good outcomes is to subscribe to rules which include being very strongly anti-murder in all standard societal contexts, and so I would not commit this crime." If told that there would be absolutely no effect on the rest of society, then you could still respond "Unfortunately, human brains don't work like that; I cannot simply switch off my determination not to murder, even though I acknowledge it would lead to a better outcome in this case." It turns out that most people aren't quite this rigid when asked such questions, but I'd bet that many who say they would murder one to save many would find that they couldn't overcome their aversion to such an act if called upon to do so in practice - the disposition I described above having been ingrained into them by nature. The justification for this disposition is broadly based on what Ord calls "global consequentialism": the position that for any category of x's, the best x is simply whichever one leads to the best consequences. In this case the best disposition may be one which prevents us from carrying out the best action in some cases, but which makes up for it by leading to better outcomes in others.

The most important question now becomes one of axiology: exactly which consequences should we value and aim towards? It's not surprising, given the "bag-of-intuitions" view of ethics that I outlined above, that there are contradictory ideas here as well. For the sake of brevity I will ignore a number of proposals which place value on objective goods such as aesthetic beauty or scientific achievement, and focus on ones which assert that the only consequences which matter are effects on the lives of sentient beings. I will also assume that we should value these consequences linearly - that benefiting twice as many people is roughly twice as good - in which case we can reasonably group the theories we're talking about under the heading of 'utilitarianism'. In the rest of this essay I will discuss a few different versions of utilitarianism, as well as a number of objections to each. Where possible I will phrase the objections in a way which invites evaluation of the state of the world as a whole, rather than simply evoking social- or societal-level intuitions which usually clash with policy intuitions. I'll also try to avoid arguments which rely on the concepts of "good person" or "bad person"; these are inextricably linked to social intuitions and therefore also don't mesh well with utilitarianism.

The most basic goals of traditional "hedonic utilitarianism" (H) are promoting happiness and avoiding suffering in conscious beings. Here we encounter the first objection to be discussed:
1a. "Happiness" and "suffering" are not well-defined states, and cannot be calculated precisely.

While conceding this, we note that utilitarian reasoning never aims to be particularly precise; we cannot predict the consequences of any (large-scale) decision without a significant amount of uncertainty. Adding another source of imprecision is therefore not so bad, since realistically utilitarianism prioritises actions where the benefit is so large that any reasonable definitions of happiness and suffering would give the same answer (e.g. alleviation of extreme poverty; colonisation of the galaxy). However:


1b. If the pursuits which make people most happy are "lower pleasures" like alcohol, gluttony and laziness, then H tells us to promote them instead of the finer things in life.

Mill's response to this objection was to claim that we should really be optimising for intellectual "higher pleasures" such as the enjoyment of poetry and classical music. However, enshrining upper-class Victorian standards makes our axiology far less objective. Can we do without this distinction? One approach is simply to bite the bullet and agree that there's nothing wrong with lower pleasures in principle. In practice, alcohol and drugs may shorten our lifespans and decrease societal wealth, but it's not implausible to claim that if we solved their negative side-effects then lower pleasures would be good goals. Unfortunately, the logical extension of this conclusion is challenged by:

1c. Society should, under H, systematically promote belief in falsehoods which make people happier (e.g. most religions).

This can be challenged by arguing that such beliefs inhibit the progress of scientific and technological innovations which will increase happiness much more. It is more difficult to oppose when we take it to extremes:

1d. If the technology existed, H would have us prefer that many people spent their lives in 'experience machines' which made them believe they were living very happy lives, despite the simulated world being entirely fake.
1e. In fact, ideal human happiness under H would go even further: people would be hooked up to machines which constantly stimulated their pleasure circuits (or, if we took Mill's view, their "higher pleasure" circuits). Such "wireheaders" would no longer have any need even for rational thought.
1f. A "utility monster" which felt happiness and suffering much more keenly than humans could account for more moral value in H than thousands or millions of humans.

Some people don't find anything wrong with experience machines, since most of the things we enjoy are good because they give us pleasure. I agree when it comes to food, sports, etc.; but I also feel strongly that the main value of relationships comes from having a genuine connection to someone else. The love we feel for (by assumption, non-conscious) constructs in the experience machine would be no better than deep love for a stuffed doll, or for a partner who secretly doesn't care about you at all. In any case, even those who find nothing wrong with this possibility usually have some difficulty accepting utility monsters, and strongly object to wireheading; so we cannot accept pure hedonic utilitarianism.

The main alternative is preference (or desire-satisfaction) utilitarianism (P): the belief that we should try to fulfill as many people's preferences as possible, and minimise those unfulfilled. This is motivated by the basic intuition that we each care deeply about some aspects of the world, to the extent that many of us would sacrifice our own happiness to make them come about (see, for instance, people who work very hard to provide for their families or to ensure their legacy). And if I value some outcomes more than my own happiness, then shouldn't they be more important in evaluating the quality of my life? We avoid the issue of utility monsters by specifying an upper limit to how much we should value any one person's preferences, no matter how strong their emotions about them. Further, forcing people into experience machines and wireheading would be immoral as long as they had strong preferences towards genuine interactions and living complete lives, which we almost all do. However, we still run into some analogous problems:

2a. Dodging the 'experience machine' argument requires P to value satisfying someone's preferences even if they will never know about it; this is absurd.

Although some find this bizarre, it seems plausible enough to me. When a parent loves their children and fervently wishes that they do well, it is not for their own peace of mind, but rather for the sake of the children themselves. In general we value many aspects of the world which we will never experience: this is why soldiers sacrifice themselves for their comrades, and why we strive for achievements which will last after our own deaths. It also seems reasonable to consider someone's life to have been less good if the goal they spent decades striving for is destroyed, without their knowing it, just before they die. If that's the case, then the fulfillment of such preferences should be considered a part of what makes our lives good.

2b. "Preferences" are a broad class of psychological states which include unconscious desires, very general life goals, addictions, etc, which cannot be measured or judged against each other.

Weighing preferences against each other is obviously tricky. The intuitive answer for me is that we should value someone's preferences roughly according to how they themselves would rank them, if they were in an ideal situation to do so. By "ideal situation" I mean that they were able to evaluate all the consequences of their preferences, weren't under mind-altering influences, and were not predictably irrational. However, it does seem incredibly difficult to compare the strength of preferences between different people, even in principle.

2c. Preferences may be based on incomplete information or falsehoods.

This is also a very difficult issue. For instance, if organised religions are false, then a very large number of very strong preferences are incoherent; further, a highly religious person, if put in the "ideal situation" above, might become such a different person that their preferences would only vaguely correspond to their previous self's preferences. If we fulfill these new preferences, then who are we actually benefiting? I see no alternative but to ignore preferences which are inextricably based on falsehoods (e.g. strong desires towards heaven and away from hell) while still salvaging the underlying motivations (desire for a better life; desire to do good for its own sake).

2d. Should we care about fulfilling the desires of the dead, or those in comas?

Again an issue where we have contradictory intuitions. On one hand, most people would be strongly inclined to keep deathbed promises; also, it seems strange to say that the moral force of someone's preference about (for instance) events on the other side of the world just vanishes at the instant they die. On the other hand, it seems even more bizarre if the most moral thing to do were to enforce the outdated moral preferences of billions of dead people who hated homosexuality, masturbation, etc. This is a fairly standard conflict, though, between the social intuitions behind respect and friendship, and consequentialist intuitions. The solution is to respect the dead in social and societal contexts, but not to base policy around what they'd want. The same goes for those in comas (assuming they will never wake up).

2e. What about people who simply don't have strong preferences, or else don't have emotions about those preferences?

It may seem cruel to weight these people less, but I don't think it's that bad. Consider: given the choice, whose life would you save - someone who really cared about and valued being alive, or someone who was rationally suicidal, i.e. someone who correctly believed that the rest of their life would be barely worth living, or not worth living at all, and so had no particular preference for staying alive? Most likely the former. The question of whether preferences without emotions are possible is another interesting point which was brought to my attention quite recently. If we imagine someone who acted towards certain ends, but felt no frustration when they failed, and no happiness when they succeeded, then we may well question whether they truly have preferences, or whether those preferences should be given any moral weight.

2f. The most moral thing to do would be to instill very strong, easily-satisfied preferences wherever possible.

This is the key objection to preference utilitarianism, which parallels the 'wireheading' objection to hedonic utilitarianism. Preference utilitarianism has no good way of dealing with the possibility of changing preferences. If our theory is agnostic between preferences, then we have a moral imperative to find preferences which can be trivially satisfied - the preference to have one's books ordered alphabetically, for instance - and ensure that as many people care about these as much as possible. Even if the moral violation of forcing these on existing people would be too great, it is difficult to consistently object to instilling them in the next generation as much as we can (since we already strive to teach our children certain things). In the hypothetical case of possessing brainwashing technology, this would mean absolute control - and absolute meaninglessness. Of course, just like with wireheading, you could argue against this on the grounds that it would inhibit our ability to spread humanity and flourish on a larger scale - but it still seems unacceptable to believe that the most desirable end state would be a form of catatonia.



I've run through these objections to some standard forms of utilitarianism to emphasise that there is no reason we should believe there is a 'best' quantity to maximise. In fact, whatever function we choose to optimise, it is likely that its global maximum will be at some extreme point. But human intuitions shun such extremes; we are built for normal conditions. So let's get back to the real world. We have pretty strong convictions that some people have better lives than others, and that improving people's lives is morally worthwhile. Let's define 'welfare' to be the thing which exists more in good lives than in bad lives. These convictions don't require an exact specification of welfare; it is enough for most purposes to believe that it has something to do with happiness, and something to do with satisfied preferences, but in a way which more closely resembles our current lives than extreme attempts to maximise either of those two quantities. Note that such a theory explains why we should care greatly about animal suffering (because there are so many animals whose welfare humans could easily increase) but still value individual humans more than individual animals (because we have preferences about our lives in a more sophisticated way than any animals, and the violation of these preferences is an additional harm on top of whatever suffering we experience).

Some typical objections to a theory which advocates maximising expected welfare (W) are as follows:
3a. W advocates significantly decreasing the welfare of some for significant increases in the welfare of others, e.g. in trolley problems.
3b. W advocates significantly decreasing the welfare of a few to give many others small gains; e.g. it would be moral under W to force gladiators to fight to the death if enough people watched and enjoyed it.

The deontological intuitions behind these objections are strong ones, but we can reason around them when we focus on high-stakes cases. Almost everyone accepts that there is a moral imperative to kill innocents if the stakes are literally millions of lives - for example, military interventions against genocides are moral despite the fact that children often become collateral damage. Then it's simply an issue of how much lower the actual ratio should be. Note that this is entirely consistent with everyone thinking that in a societal context, murder is absolutely wrong: we only want these trade-offs to be made when reasoning about policy cases. Similarly, the principle of accepting significant harms to a few in exchange for small benefits to many is why rollercoasters and extreme sports are permissible even though there is a predictably non-zero death rate. However, this doesn't necessarily mean we should endorse the very counterintuitive duty to stage gladiatorial shows: it's very difficult to imagine any world in which the harms of people becoming more callous and bloodthirsty don't outweigh the marginal gain in pleasure compared with a less horrific alternative.

3c. W ignores the act-omission distinction.

The defenses above also rely on there being little difference between acts and omissions. Again, though, this becomes more plausible in large-scale cases. There is no way to hold people responsible for every omission in their daily lives without fundamentally changing our social or societal intuitions - but when it comes to policy decisions, it's reasonable to think that choosing not to act is essentially an act itself. In this light, one way of framing the core claim of Effective Altruism is that we should apply policy intuitions, not everyday intuitions, when deciding how and where to donate to charity.


3d. W is not risk-averse in the same way that humans are.

We should distinguish three things: risk-aversion about financial outcomes, which is perfectly sensible (marginal utility per dollar decreases sharply the more money you have); risk-aversion about your own utility, which doesn't make sense given the standard definition of utility; and risk-aversion about aggregations of many people's utilities. I don't have a strong opinion on whether the last is a reasonable preference or not. However, the disparities between different outcomes are often so large that no reasonable amount of risk aversion would substantially change our moral goals - and in fact the goal of existential risk reduction, which has so far primarily been adopted by utilitarians, is even more compelling to risk-averse utilitarians.
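
To make the first of these distinctions concrete, here is a toy calculation; the logarithmic utility function and the dollar amounts are illustrative assumptions of mine, not anything the argument depends on.

```latex
% Toy example (assumed log utility of money): a 50/50 gamble between
% $50,000 and $150,000, versus a guaranteed $100,000.
u(x) = \ln x
\mathbb{E}[u(\text{gamble})] = \tfrac{1}{2}\ln(50{,}000) + \tfrac{1}{2}\ln(150{,}000) \approx 11.37
u(\text{sure thing}) = \ln(100{,}000) \approx 11.51
% Both options have the same expected dollar value, but the sure thing has
% higher expected utility: "risk aversion about money" is just diminishing
% marginal utility. Once outcomes are measured in utility itself, expected
% utility already encodes these attitudes, so a further layer of risk
% aversion about utility is hard to make sense of.
```
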
3e. W ignores the idea of retribution; it claims that we should be indifferent between increasing the welfare of Stalin and increasing the welfare of one of his victims, all else being equal.

The notion of retribution is also very deeply ingrained; it seems utterly abhorrent for a mass murderer to be allowed to enjoy, unrepentant, the rest of their life. The way I dissolve this intuition is by thinking about how much harm the concept of revenge has done throughout history as a justification for moral atrocities. Of course we will always require a functional justice system to prevent crime, but I think that treating punishment as a regrettable evil rather than "just deserts" is a better approach (as we see evidence for in Scandinavian penal systems). This is an insight which is common to a number of religious traditions. Note that I am not just defending our policy intuitions, but also advocating that we change our societal intuitions, based on utilitarian reasoning.

3f. W treats people as "mere means to an end" and doesn't value them individually.

Lastly, there's this regrettable notion that utilitarianism doesn't value individuals in the right way. I have exactly the opposite view: that welfare utilitarianism is the only ethical position which properly values others. Virtue ethics and deontology are very self-focused: they are all about your actions, and your mindset, when you're deciding what action to take. But the victims of an atrocity don't care about your particular moral hangups; they only care that they weren't saved. It's true that we may need to sacrifice a few in order to save the many - but if we refuse to do so then it is the many who are being undervalued, despite the difference between them and the few being morally arbitrary. Indeed, in some thought experiments, all the possible victims actually want you to kill one of them. To uphold deontological laws in those situations is simply to put moral squeamishness above the lives and choices of the people actually affected.

I hope that the prospect of using W to decide policy questions now seems a reasonable one. However, there are still some standard objections to using utilitarianism in practice:

4a. We can never know what the most moral acts actually are; therefore we cannot decide what to do.

The first half of this objection is correct, the second incorrect. We cannot know the exact consequences of most of our actions, but we can have pretty good ideas, backed up by solid evidence, in the same way that we usually draw conclusions about hypotheses in psychology, economics, sociology and so on. Only extreme skeptics would dispute that close personal relationships on average increase happiness, for example. Of course, these conclusions have enough uncertainty in them that we couldn't ever know we had identified the exact best action - but this doesn't mean that we should be paralysed by indecision. Instead, we should treat "spending more time to find better options" as one of our possible actions, which is very valuable at first, but becomes less valuable the more confident we are that our current ideas are pretty good.

4b. W gives us very counterintuitive judgements about what makes a "good person".

As I discussed near the beginning of this essay, we shouldn't equate "causing good outcomes" with "being a good person", because our intuitive notions of being a good or bad person are very much based on virtue ethics, whereas consequentialism is instead based around evaluating the moral status of outcomes. It’s clear that unvirtuous people can bring about good outcomes (e.g. greedy businesspeople who provide important services), and virtuous people can bring about bad outcomes (e.g. some Catholics who promote the sincere belief that using condoms is a sin). A more extreme example would be if it were the case that the only possible worlds in which humanity wasn't wiped out by nuclear war in the 20th century were those in which World War 2 occurred and showed us the horrors of nuclear weapons. If so, then Hitler's actions led to billions of lives being saved, and therefore were consequentially incredibly good. But Hitler is an archetypal example of a “bad person”. So we need to divorce the way we evaluate people from the way we evaluate the outcomes of their actions.


Our current evaluations are mostly based on possession of virtuous traits, and work pretty well in social situations. The problem is that if you’re trying to be a ‘good person’ under this definition, you may end up making policy choices based on consequentially irrelevant factors. So if we want to bring about better outcomes, then we need an alternative conception of what it means to be a good person which is close enough to virtue ethics to be intuitively acceptable, but also better than virtue ethics at guiding people towards good choices in policy situations. I propose the following: you are a better person, under consequentialism, the more highly you value the welfare of other people in general (not just your own family and friends) compared with the value you place on your own, and the more you try to improve their lives. Note that "more" includes both putting in a greater percentage of your effort and resources, and focusing those on whatever you identify as the more important priorities. I am not saying that people who do this are the best people by current standards of goodness. Rather, I am saying that it would be instrumentally useful to adopt this definition as something for people to aim towards, and it is close enough to our current implicit definition that it's not unrealistically difficult for people to do so.

4c. W requires an unbounded commitment; the moral duties it gives us are potentially infinite.

This depends on what you define as a "duty". Again, this is a concept which is not particularly natural in consequentialist reasoning: under the definition of personal goodness I just gave, the more you do, the better you are. This makes sense, since everyone should strive to be a better person than they currently are. However, many people think of morality in terms of meeting their duties, with those who do so being "good people". Therefore it is instrumentally useful for consequentialists to set some threshold of personal goodness, and say that when you meet that threshold you have fulfilled your moral duty. This threshold should be low enough to be realistic, but high enough that meeting it makes a significant difference. Spending at least 10% of your efforts and resources to benefit others meets those criteria, and is a useful Schelling point, which is why Giving What We Can uses it for their pledge. Although it's always better to do more, I think of that as the point at which it's reasonable or understandable to stop. Since the global poor would get orders of magnitude more benefit out of a given amount of money than most of us would, this still corresponds to each of us valuing our own welfare perhaps a thousand times more highly than the welfare of a stranger - which is unfortunately still pretty high, but the best goal we can realistically set right now.
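
To spell out the rough arithmetic behind that last sentence: the specific multiplier below is an illustrative assumption of mine (in the spirit of common order-of-magnitude estimates about transfers to the global poor), not a figure from this essay.

```latex
% Back-of-the-envelope sketch only. Assume a donated dollar produces roughly
% m = 100 times as much welfare for a very poor recipient as a kept dollar
% produces for the donor, and that the donor gives 10% and keeps 90%.
\text{welfare generated for others by the 10\% given} \;\propto\; 0.1 \times 100 = 10
\text{welfare generated for yourself by the 90\% kept} \;\propto\; 0.9 \times 1 = 0.9
% Keeping that 90% rather than giving it forgoes roughly 90 units of strangers'
% welfare in exchange for about 0.9 units of your own - i.e. it treats your own
% welfare as worth on the order of a hundred times a stranger's. With less
% conservative multipliers the implied ratio approaches the "thousand times"
% figure mentioned above.
```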

4d. W is self-defeating because in practice it may be best if most people don't believe in it.

For a thorough refutation of this point, I recommend Part I of Parfit's Reasons and Persons. His core argument is that consequentialist theories tell us what to value, not what to believe. Also, it's unlikely that the consequentially best moral beliefs would be entirely non-consequentialist - rather, it's probably best if everyone has intuitions operating at different levels, which they apply in different types of situation, as I have been describing throughout this essay.

4e. W claims that not having children is roughly equivalent to murdering children, because both lead to worlds with fewer happy lives.

This is a tough bullet to bite, and any attempt to do so is unconvincing, so I won't. It's very clear to most of us that choosing not to have a child is morally fine. But it's also clear to most of us that a world in which there are billions of people having happy and fulfilling lives is much better than a world in which there are only a few thousand doing so. I think the best we can do here is again to separate our everyday and our policy intuitions, so that we endorse policies which lead to many more happy lives, without trying to enforce any actions towards this on an individual level.



Lastly, we run into problems when we try to apply W to extreme cases:
5a. W tells us that the moral weight of (potentially) trillions of people who might exist in the far future massively outweighs the moral weight of everyone alive right now, and so we should devote most of our resources to ensuring that they have good lives.
5b. Should we do so even if we know that they will have entirely different values to ours? What about the moral weight of aliens whose conscious experience barely resembles ours at all?
5c. Even if we accept the importance of the far future, we can never be very confident in how our actions will affect it. In the extreme case, this leads to the problem of Pascal's mugging.
5d. How do we deal with the Repugnant Conclusion - that under W, for any population of people living happy lives there is a better, larger population with lives that are barely worth living? (The arithmetic behind this is sketched after this list.)
5e. What if the universe is infinite, so that nothing makes a difference to total utility? Similarly, in many-worlds quantum theory a world exists corresponding to every choice we could possibly make, so why does it matter what we choose? (Some of these worlds have less probability amplitude than others, but does that really mean we should give them less moral weight, if all their inhabitants exist in the same way we do?)
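
To see concretely how 5d arises for any version of W that sums welfare across people, here is the standard arithmetic; the particular numbers are illustrative assumptions, not claims from this essay.

```latex
% Standard totalist arithmetic behind the Repugnant Conclusion.
% Population A: N people, each with high welfare w.
% Population Z: M people, each with welfare \epsilon, barely above zero.
\text{Total}(A) = N w, \qquad \text{Total}(Z) = M \epsilon
\text{Total}(Z) > \text{Total}(A) \iff M > \frac{N w}{\epsilon}
% e.g. with N = 10^{10}, w = 100 and \epsilon = 1, any population Z of more
% than 10^{12} people whose lives are barely worth living counts as better
% than A.
```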

I don't have very good solutions to these problems, and this essay is already long enough. My current position is roughly as follows: there is no ethical theory which generalises to these extreme cases in a way which is fully consistent with our intuitions. (In fact, Arrhenius proved that, for a certain group of conclusions which most people find entirely unacceptable, there is no theory of population ethics which avoids all of them). However, we still need some way of reasoning about the moral value of extreme cases, because the future of humanity will probably be one of them. So we should agree that it might be worth accepting most of a theory even if it leads to some extreme conclusions which we cannot accept. In other words, we should treat ethics more like a form of science (where it's common for our best theories to contradict some observations) than a form of logic (where a single contradiction renders a framework useless). W tells us that we need to place significant moral weight on the experiences of conscious beings in the future. Even if attempts to specify this obligation precisely lead to the objections outlined above, it seems to me that denying it is even less tenable, and most people - myself included - should care about the future of humanity much more than we currently do.
