Thursday, 12 October 2017

Utilitarianism and its discontents

Ethics is complicated. Humans have evolved to feel a hodge-podge of different moral intuitions, each individually useful in facilitating coordination and cooperation in our ancestral environment. This means that the formulation of any ethical framework inherently involves a series of compromises between the relative importance of each of our intuitions, our desire for broad principles which can be consistently applied, and whatever value we place on those principles being widely shared by other people. I highly doubt that any ethical framework exists which doesn't violate at least one cherished moral intuition. Given this, our goal should be to find a compromise which uses consistent principles to align our actions with as many of our most important intuitions as we can. The first half of this essay attempts to reconcile several different ethical traditions, while maintaining that we should approach the most important decisions from a utilitarian perspective; the second consists of a taxonomy of objections to utilitarianism, and analysis of the extent to which each succeeds. This essay’s primary purpose is not persuasive but rather explicatory; I’m happy to defend each of the claims I make, but don’t have time to fully justify each of them here.

I find it convenient to divide moral intuitions into three groups. The first consists of social intuitions - those governing how we should engage with other people in relationships and day-to-day interactions. These intuitions are broadly based around being virtuous; they promote traits such as honesty, reciprocity, kindness, respect and so on. The second consists of societal intuitions, giving us deontological rules which facilitate interactions within society in general. These include prohibitions against theft, murder and most other illegal things, as well as some strong norms like the one against incest (this category also includes supererogatory commitments to doing good, but they are less of a priority). The third consists of what I'll call policy intuitions: those that apply to large-scale situations with particularly important stakes, which are outside the usual scope of societal rules. In these cases people tend towards consequentialist reasoning. The history of moral philosophy has largely consisted of these three types of intuitions being pitted against each other, for example using trolley problems. However, doing so largely misses the point: that each of these intuitions has a role to play within a limited domain. Hopefully from the explanation above it's fairly clear what these domains are, but since this point is so important, let me be a little more explicit. Social intuitions should govern low-stakes situations which are primarily about interpersonal relationships: engaging with other people as individuals. Societal intuitions should govern low- or high-stakes situations where we engage with the rest of society, and there are clear rules or norms to uphold. Policy intuitions should govern high-stakes situations where norms or rules fail to apply - including reasoning about which norms and rules to promote.

Using deontological or consequentialist principles to govern social interactions would be a disaster. Firstly, because our emotional responses to things which happen in our own lives are generally very strong, and it's difficult to abstract away from our natural instincts in order to follow other rules. Secondly, because figuring out the appropriate deontological/consequentialist response to every social nuance would be borderline impossible. But most importantly, because motives matter in friendships and relationships, and people are often quite transparent. If my partner made me breakfast in bed because of the benefits to society in general, not because they loved me, it wouldn't be nearly as nice. Note that while virtues are the main drivers of pro-social behaviour, we still modulate them based on deontological/consequentialist reasoning. For example, we despise certain types of discrimination and therefore promote the virtue of tolerance; we want the poor to be better off and therefore promote the virtue of generosity.

Using virtue ethics or consequentialism to run society would fail just as terribly. If people thought they could do anything as long as their intentions or outcomes were good, society would be a messy, unpredictable place splintered into factions each convinced of their own righteousness, while violating others' basic rights (alternatively, there could be de facto dictators entrusted with the power to decide the moral worth of each action - still not an attractive prospect). Having clear-cut rules is the best way to avoid this chaos.

Lastly, of course, using virtue ethics or deontology to decide policy issues would result in massive amounts of unnecessary suffering or lost potential human flourishing - so we're left with consequentialism as the overarching framework under which to decide the direction of our countries and the world. Note that I have just used consequentialist logic to justify not overruling our non-consequentialist intuitions (so long as they're exercised within their proper domains) - which is itself an example of consequentialism playing this overarching role. However, it may be psychologically difficult or impossible for consequentialists to act according to virtue ethics while not believing in it. So if we're good consequentialists, we may end up cultivating what Parfit calls a "disposition" to believe that virtues matter when it comes to social interactions. Similar reasoning applies to deontological rules: without people being inclined to obey the rules for their own sake, wider confidence in them will be undermined.


Of course, these dispositions should still be subject to consequentialist overrides in cases with high stakes. But if confronted with something like a trolley problem, I think it’s reasonable to say "Murdering one to save many leads to the best outcome in this case, by assumption. Overall, we should strive towards the best outcomes. However, the best way for me to contribute towards good outcomes is to subscribe to rules which include being very strongly anti-murder in all standard societal contexts, and so I would not commit this crime." If told that there would be absolutely no effect on the rest of society, then you could still respond "Unfortunately, human brains don't work like that; I cannot simply switch off my determination not to murder, even though I acknowledge it would lead to a better outcome in this case." It turns out that most people aren't quite this rigid when asked such questions, but I'd bet that many who say they would murder one to save many would find that they couldn't overcome their aversion to such an act if called upon to do so in practice - the disposition I described above having been ingrained into them by nature. The justification for this disposition is broadly based on what Ord calls "global consequentialism": the position that for any category of x's, the best x is simply whichever one leads to the best consequences. In this case the best disposition may be one which prevents us from carrying out the best action in some cases, but which makes up for it by leading to better outcomes in others.

The most important question now becomes one of axiology: exactly which consequences should we value and aim towards? It's not surprising, given the "bag-of-intuitions" view of ethics that I outlined above, that there are contradictory ideas here as well. For the sake of brevity I will ignore a number of proposals which place value on objective goods such as aesthetic beauty or scientific achievement, and focus on ones which assert that the only consequences which matter are effects on the lives of sentient beings. I will also assume that we should value these consequences linearly - that benefiting twice as many people is roughly twice as good - in which case we can reasonably group the theories we're talking about under the heading of 'utilitarianism'. In the rest of this essay I will discuss a few different versions of utilitarianism, as well as a number of objections to each. Where possible I will phrase the objections in a way which invites evaluation of the state of the world as a whole, rather than simply evoking social- or societal-level intuitions which usually clash with policy intuitions. I'll also try to avoid arguments which rely on the concepts of "good person" or "bad person"; these are inextricably linked to social intuitions and therefore also don't mesh well with utilitarianism.


~~~~~~~~~~~~

The most basic goals of traditional "hedonic utilitarianism" (H) are promoting happiness and avoiding suffering in conscious beings. Here we encounter the first objection to be discussed:

1a. "Happiness" and "suffering" are not well-defined states, and cannot be calculated precisely.

While conceding this, we note that utilitarian reasoning never aims to be particularly precise; we cannot predict the consequences of any (large-scale) decision without a significant amount of uncertainty. Adding another source of imprecision is therefore not so bad, since realistically utilitarianism prioritises actions where the benefit is so large that any reasonable definitions of happiness and suffering would give the same answer (e.g. alleviation of extreme poverty; colonisation of the galaxy). However:


1b. If the pursuits which make people most happy are "lower pleasures" like alcohol, gluttony and laziness, then H tells us to promote them instead of the finer things in life.

Mill's response to this objection was to claim that we should really be optimising for intellectual "higher pleasures" such as the enjoyment of poetry and classical music. However, enshrining upper-class Victorian standards makes our axiology far less objective. Can we do without this distinction? One approach is simply to bite the bullet and agree that there's nothing wrong with lower pleasures in principle. In practice, alcohol and drugs may shorten our lifespans and decrease societal wealth, but it's not implausible to claim that if we solved their negative side-effects then lower pleasures would be good goals. Unfortunately, the logical extension of this conclusion is challenged by:

1c. Society should, under H, systematically promote belief in falsehoods which make people happier (e.g. most religions).

This can be challenged by arguing that such beliefs inhibit the progress of scientific and technological innovations which will increase happiness much more. It is more difficult to oppose when we take it to extremes:

1d. If the technology existed, H would have us prefer that many people spent their lives in 'experience machines' which made them believe they were living very happy lives, despite the simulated world being entirely fake.
1e. In fact, ideal human happiness under H would go even further: people would be hooked up to machines which constantly stimulated their pleasure circuits (or, if we took Mill's view, their "higher pleasure" circuits). Such "wireheaders" would no longer have any need even for rational thought.
1f. A "utility monster" which felt happiness and suffering much more keenly than humans could account for more moral value in H than thousands or millions of humans.

Some people don't find anything wrong with experience machines, since most of the things we enjoy are good because they give us pleasure. I agree when it comes to food, sports, etc; but I also feel strongly that the main value of relationships comes from having a genuine connection to someone else. The love we feel for (by assumption, non-conscious) constructs in the experience machine would be no better than deep love for a stuffed doll, or for a partner who secretly doesn't care about you at all. In any case, even those who find nothing wrong with this possibility usually have some difficulty accepting utility monsters, and strongly object to wireheading; so we cannot accept pure hedonic utilitarianism.

The main alternative is preference (or desire-satisfaction) utilitarianism (P): the belief that we should try to fulfill as many people's preferences as possible, and minimise the number which go unfulfilled. This is motivated by the basic intuition that we each care deeply about some aspects of the world, to the extent that many of us would sacrifice our own happiness to make them come about (see, for instance, people who work very hard to provide for their families or to ensure their legacy). And if I value some outcomes more than my own happiness, then shouldn't they be more important in evaluating the quality of my life? We avoid the issue of utility monsters by specifying an upper limit to how much we should value any one person's preferences, no matter how strong their emotions about them. Further, forcing people into experience machines and wireheading would be immoral as long as they had strong preferences towards genuine interactions and living complete lives, which we almost all do. However, we still run into some analogous problems:

2a. Dodging the 'experience machine' argument requires P to value satisfying someone's preferences even if they will never know about it; this is absurd.

Although some find this bizarre, it seems plausible enough to me. When a parent loves their children and fervently wishes that they do well, it is not for their own peace of mind, but rather for the sake of the children themselves. In general we value many aspects of the world which we will never experience: this is why soldiers sacrifice themselves for their comrades, and why we strive for achievements which will last after our own deaths. It also seems reasonable to consider someone's life to have been less good if the goal they spent decades striving for is destroyed, without their knowing it, just before they die. If that's the case, then the fulfillment of such preferences should be considered a part of what makes our lives good.

2b. "Preferences" are a broad class of psychological states which include unconscious desires, very general life goals, addictions, etc, which cannot be measured or judged against each other.

Weighing preferences against each other is obviously tricky. The intuitive answer for me is that we should value someone's preferences roughly according to how they themselves would rank them, if they were in an ideal situation to do so. By "ideal situation" I mean that they were able to evaluate all the consequences of their preferences, weren't under mind-altering influences, and were not predictably irrational. However, it does seem incredibly difficult to compare the strength of preferences between different people, even in principle.

2c. Preferences may be based on incomplete information or falsehoods.

This is also a very difficult issue. For instance, if organised religions are false, then a very large number of very strong preferences are incoherent; further, a highly religious person, if put in the "ideal situation" above, might become such a different person that their preferences would only vaguely correspond to their previous self's preferences. If we fulfill these new preferences, then who are we actually benefiting? I see no alternative but to ignore preferences which are inextricably based on falsehoods (e.g. strong desires towards heaven and away from hell) while still salvaging the underlying motivations (desire for a better life; desire to do good for its own sake).

2d. Should we care about fulfilling the desires of the dead, or those in comas?

Again an issue where we have contradictory intuitions. On one hand, most people would be strongly inclined to keep deathbed promises; also, it seems strange to say that the moral force of someone's preference about (for instance) events on the other side of the world just vanishes at the instant they die. On the other hand, it seems even more bizarre for the most moral thing to do to be enforcing the outdated moral preferences of billions of dead people who hated homosexuality, masturbation, etc. This is a fairly standard conflict, though, between the social intuitions behind respect and friendship, and consequentialist intuitions. The solution is to respect the dead in social and societal contexts, but not to base policy around what they'd want. The same goes for those in comas (assuming they will never wake up).

2e. What about people who simply don't have strong preferences, or else don't have emotions about those preferences?

It may seem cruel to weight these people less, but I don't think it's that bad. Consider: given the choice, would you rather save the life of someone who really cared about and valued being alive, or of someone who was rationally suicidal - that is, someone who correctly believed that the rest of their life would be barely worth living, or not at all, and so had no particular preference for staying alive? Most likely the former. The question of whether preferences without emotions are possible is another interesting point which was brought to my attention quite recently. If we imagine someone who acted towards certain ends, but felt no frustration when they failed, and no happiness when they succeeded, then we may well question whether they truly have preferences, or whether those preferences should be given any moral weight.

2f. The most moral thing to do would be to instill very strong, easily-satisfied preferences wherever possible.

This is the key objection to preference utilitarianism, which parallels the 'wireheading' objection to hedonic utilitarianism. Preference utilitarianism has no good way of dealing with the possibility of changing preferences. If our theory is agnostic between preferences, then we have a moral imperative to find preferences which can be trivially satisfied - the preference to have one's books ordered alphabetically, for instance - and ensure that as many people care about these as much as possible. Even if the moral violation of forcing these on existing people would be too great, it is difficult to consistently object to instilling them in the next generation as much as we can (since we already strive to teach our children certain things). In the hypothetical case of possessing brainwashing technology, this would mean absolute control - and absolute meaninglessness. Of course, just like with wireheading, you could argue against this on the grounds that it would inhibit our ability to spread humanity and flourish on a larger scale - but it still seems unacceptable to believe that the most desirable end state would be a form of catatonia.



~~~~~~~~~~~~

I've run through these objections to some standard forms of utilitarianism to emphasise that there is no reason we should believe there is a 'best' quantity to maximise. In fact, whatever function we choose to optimise, it is likely that its global maximum will be at some extreme point. But human intuitions shun such extremes; we are built for normal conditions. So let's get back to the real world. We have pretty strong convictions that some people have better lives than others, and that improving people's lives is morally worthwhile. Let's define 'welfare' to be the thing which exists more in good lives than in bad lives. These convictions don't require an exact specification of welfare; it is enough for most purposes to believe that it has something to do with happiness, and something to do with satisfied preferences, but in a way which more closely resembles our current lives than extreme attempts to maximise either of those two quantities. Note that such a theory explains why we should care greatly about animal suffering (because there are so many animals whose welfare humans could easily increase) but still value individual humans more than individual animals (because we have preferences about our lives in a more sophisticated way than any animals, and the violation of these preferences is an additional harm on top of whatever suffering we experience).

Some typical objections to a theory which advocates maximising expected welfare (W) are as follows:
3a. W advocates significantly decreasing the welfare of some for significant increases in the welfare of others, e.g. in trolley problems.
3b. W advocates significantly decreasing the welfare of a few to give many others small gains; e.g. it would be moral under W to force gladiators to fight to the death if enough people watched and enjoyed it.

The deontological intuitions behind these objections are strong ones, but we can reason around them when we focus on high-stakes cases. Almost everyone accepts that there is a moral imperative to kill innocents if the stakes are literally millions of lives - for example, military interventions against genocides are moral despite the fact that children often become collateral damage. Then it's simply a question of how much lower the required ratio of lives saved to innocents killed actually is. Note that this is entirely consistent with everyone thinking that in a societal context, murder is absolutely wrong: we only want these trade-offs to be made when reasoning about policy cases. Similarly, the principle of accepting significant harms to a few in exchange for small benefits to many is why rollercoasters and extreme sports are permissible even though there is a predictably non-zero death rate. However, this doesn't necessarily mean we should endorse the very counterintuitive duty to stage gladiatorial shows: it's very difficult to imagine any world in which the harms of people becoming more callous and bloodthirsty don't outweigh the marginal gain in pleasure compared with a less horrific alternative.

3c. W ignores the act-omission distinction.

The defenses above also rely on there being little difference between acts and omissions. Again, though, this becomes more plausible in large-scale cases. There is no way to hold people responsible for every omission in their daily lives without fundamentally changing our social or societal intuitions - but when it comes to policy decisions, it's reasonable to think that choosing not to act is essentially an act itself. Using this argument, another way of thinking about the core claim of Effective Altruism is that we should apply policy intuitions, not everyday intuitions, to decide how and where to donate to charity.


3d. W is not risk-averse in the same way that humans are.

We should distinguish risk-aversion about financial outcomes, which is perfectly sensible (marginal utility per dollar decreases sharply the more money you have), from risk-aversion about your own utility (which doesn't make sense given the standard definition of utility), and from risk-aversion about aggregations of many people's utilities. I don't have a strong opinion on whether the last is a reasonable preference or not. However, the disparities between different outcomes are often so large that no reasonable amount of risk aversion would substantially change our moral goals - and in fact the goal of existential risk reduction, which has so far primarily been adopted by utilitarians, is even more compelling to risk-averse utilitarians.
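
To make the first of those distinctions concrete, here is a minimal worked example, assuming (purely for illustration) a logarithmic utility-of-money function:

$$u(w)=\ln w, \qquad \tfrac{1}{2}\ln(50{,}000)+\tfrac{1}{2}\ln(150{,}000)\approx 11.37 \;<\; 11.51 \approx \ln(100{,}000)$$

So an agent with diminishing marginal utility rationally prefers a certain $100,000 to a fair gamble between $50,000 and $150,000, even though the expected monetary value is the same; whereas being "risk-averse over utility itself" would mean rejecting a gamble whose expected utility equals that of the sure option, which the standard expected-utility definition rules out by construction.
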
3e. W ignores the idea of retribution; it claims that we should be indifferent between increasing the welfare of Stalin or one of his victims, all else being equal.

The notion of retribution is also very deeply ingrained; it seems utterly abhorrent for a mass murderer to be allowed to enjoy, unrepentant, the rest of their life. The way I dissolve this intuition is by thinking about how much harm the concept of revenge has done throughout history as a justification for moral atrocities. Of course we will always require a functional justice system to prevent crime, but I think that treating punishment as a regrettable evil rather than "just deserts" is a better approach (as we see evidence for in Scandinavian penal systems). This is an insight which is common to a number of religious traditions. Note that I am not just defending our policy intuitions, but also advocating we change our societal intuitions, based on utilitarian reasoning.

3f. W treats people as "mere means to an end" and doesn't value them individually.

Lastly, there's this regrettable notion that utilitarianism doesn't value individuals in the right way. I have exactly the opposite view: that welfare utilitarianism is the only ethical position which properly values others. Virtue ethics and deontology are very self-focused: they are all about your actions, and your mindset, when you're deciding what action to take. But the victims of an atrocity don't care about your particular moral hangups; they only care that they weren't saved. It's true that we may need to sacrifice a few in order to save the many - but if we refuse to do so then it is the many who are being undervalued, despite the difference between them and the few being morally arbitrary. Indeed, in some thought experiments, all the possible victims actually want you to kill one of them. To uphold deontological laws in those situations is simply to put moral squeamishness above the lives and choices of the people actually affected.


~~~~~~~~~~~~

I hope that the prospect of using W to decide policy questions now seems a reasonable one. However, there are still some standard objections to using utilitarianism in practice:

4a. We can never know what the most moral acts actually are; therefore we cannot decide what to do.

The first half of this objection is correct, the second incorrect. We cannot know the exact consequences of most of our actions, but we can have pretty good ideas, backed up by solid evidence, in the same way that we usually draw conclusions about hypotheses in psychology, economics, sociology and so on. Only extreme sceptics would dispute that close personal relationships on average increase happiness, for example. Of course, these conclusions have enough uncertainty in them that we couldn't ever know we had identified the exact best action - but this doesn't mean that we should be paralysed by indecision. Instead, we should treat "spending more time to find better options" as one of our possible actions, which is very valuable at first, but becomes less valuable the more confident we are that our current ideas are pretty good.

4b. W gives us very counterintuitive judgements about what makes a "good person".

As I discussed near the beginning of this essay, we shouldn't equate "causing good outcomes" with "being a good person", because our intuitive notions of being a good or bad person are very much based on virtue ethics, whereas consequentialism is instead based around evaluating the moral status of outcomes. It’s clear that unvirtuous people can bring about good outcomes (e.g. greedy businesspeople who provide important services), and virtuous people can bring about bad outcomes (e.g. some Catholics who promote the sincere belief that using condoms is a sin). A more extreme example would be if it were the case that the only possible worlds in which humanity wasn't wiped out by nuclear war in the 20th century were those in which World War 2 occurred and showed us the horrors of nuclear weapons. If so, then Hitler's actions led to billions of lives being saved, and therefore were consequentially incredibly good. But Hitler is an archetypal example of a “bad person”. So we need to divorce the way we evaluate people from the way we evaluate the outcomes of their actions.


Our current evaluations are mostly based on possession of virtuous traits, and work pretty well in social situations. The problem is that if you’re trying to be a ‘good person’ under this definition, you may end up making policy choices based on consequentially irrelevant factors. So if we want to bring about better outcomes, then we need an alternative conception of what it means to be a good person which is close enough to virtue ethics to be intuitively acceptable, but also better than virtue ethics at guiding people towards good choices in policy situations. I propose the following: you are a better person, under consequentialism, the more highly you value the welfare of other people in general (not just your own family and friends) compared with the value you place on your own, and the more you try to improve their lives. Note that "more" includes both putting in a greater percentage of your effort and resources, and focusing those on whatever you identify as the more important priorities. I am not saying that people who do this are the best people by current standards of goodness. Rather, I am saying that it would be instrumentally useful to adopt this definition as something for people to aim towards, and it is close enough to our current implicit definition that it's not unrealistically difficult for people to do so.

4c. W requires an unbounded commitment; the moral duties it gives us are potentially infinite.

This depends on what you define as a "duty". Again, this is a concept which is not particularly natural in consequentialist reasoning: in the definition of personal goodness I just gave, the more you do, the better you are. This makes sense, since everyone should strive to be a better person than they currently are. However, many people think of morality in terms of meeting their duties, with those who do so being "good people". Therefore it is instrumentally useful for consequentialists to set some threshold of personal goodness, and say that when you meet that threshold you have fulfilled your moral duty. This threshold should be low enough to be realistic, but high enough that people doing it makes a significant difference. Spending at least 10% of your efforts and resources to benefit others meets those criteria, and is a useful Schelling point, which is why Giving What We Can use it as the basis of their pledge. Although it's always better to do more, I think of that as the point at which it's reasonable or understandable to stop. Since the global poor would get orders of magnitude more benefit out of a given amount of money than most of us would, this still corresponds to each of us valuing our own welfare perhaps a thousand times more highly than the welfare of a stranger, which is unfortunately still pretty high, but the best goal we can realistically set right now.
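
That last figure can be made explicit with a rough back-of-the-envelope sketch; the 1000x effectiveness multiplier below is an illustrative assumption, not a measured number. If a marginal dollar produces w units of welfare for you but around 1000w for the recipient of an effective charity, then choosing to keep that dollar rather than donate it reveals something about the weight you place on a stranger's welfare:

$$\underbrace{1\cdot w}_{\text{keep the dollar}} \;\ge\; \underbrace{\lambda \cdot 1000\,w}_{\text{donate it}} \quad\Longrightarrow\quad \lambda \le \tfrac{1}{1000},$$

where $\lambda$ is the weight placed on a stranger's welfare relative to your own. In other words, stopping at the 10% threshold is consistent with valuing your own welfare roughly a thousand times more than a stranger's - which is the sense in which the pledge is a realistic rather than an ideal target.
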

4d. W is self-defeating because in practice it may be best if most people don't believe in it.

For a thorough refutation of this point, I recommend Part I of Parfit's Reasons and Persons. His core argument is that consequentialist theories tell us what to value, not what to believe. Also, it's unlikely that the consequentially best moral beliefs would be entirely non-consequentialist - rather, it's probably best if everyone has different levels of intuitions which they apply in different situations, as I have been describing throughout this essay.

4e. W claims that not having children is roughly equivalent to murdering children, because both lead to worlds with fewer happy lives.

This is a tough bullet to bite, and any attempt to do so is unconvincing, so I won't. It's very clear to most of us that choosing not to have a child is morally fine. But it's also clear to most of us that a world in which there are billions of people having happy and fulfilling lives is much better than a world in which there are only a few thousand doing so. I think the best we can do here is again to separate our everyday and our policy intuitions, so that we endorse policies which lead to many more happy lives, without trying to enforce any actions towards this on an individual level.



~~~~~~~~~~~~

Lastly, we run into problems when we try to apply W to extreme cases:
5a. W tells us that the moral weight of (potentially) trillions of people who might exist in the far future massively outweighs the moral weight of everyone alive right now, and so we should devote most of our resources to ensuring that they have good lives.
5b. Should we do so even if we know that they will have entirely different values to ours? What about the moral weight of aliens whose conscious experience barely resembles ours at all?
5c. Even if we accept the importance of the far future, we can never be very confident in how our actions will affect it. In the extreme case, this leads to the problem of Pascal's mugging.
5d. How do we deal with the Repugnant Conclusion - that under W, for any population of people living happy lives there is a better, larger population with lives that are barely worth living?
5e. What if the universe is infinite, so that nothing makes a difference to total utility? Similarly, in many-worlds quantum theory a world exists corresponding to every choice we could possibly make, so why does it matter what we choose? (Some of these worlds have less probability amplitude than others, but does that really mean we should give them less moral weight, if all their inhabitants exist in the same way we do?)

I don't have very good solutions to these problems, and this essay is already long enough. My current position is roughly as follows: there is no ethical theory which generalises to these extreme cases in a way which is fully consistent with our intuitions. (In fact, Arrhenius proved that, for a certain group of conclusions which most people find entirely unacceptable, there is no theory of population ethics which avoids all of them). However, we still need some way of reasoning about the moral value of extreme cases, because the future of humanity will probably be one of them. So we should agree that it might be worth accepting most of a theory even if it leads to some extreme conclusions which we cannot accept. In other words, we should treat ethics more like a form of science (where it's common for our best theories to contradict some observations) than a form of logic (where a single contradiction renders a framework useless). W tells us that we need to place significant moral weight on the experiences of conscious beings in the future. Even if attempts to specify this obligation precisely lead to the objections outlined above, it seems to me that denying it is even less tenable, and most people - myself included - should care about the future of humanity much more than we currently do.

Monday, 2 October 2017

In Search of All Souls

I recently sat the All Souls Fellowship exam, called by some the "world's hardest exam". It requires you to write twelve essays for four papers over two days; the breadth and novelty of the questions make it a fascinating experience. Two of the papers were "general papers" and two were in a humanities subject of your choice (in my case, philosophy); most papers had around 25 questions. I've summarised my answers below, as well as noting some of the other particularly interesting questions.

General paper I:

1. How should you prepare for the end of the world?
I started off with a somewhat emotional argument about the badness of death on an individual level. I referred to Epicurus' argument that we have nothing to fear from death because when we are dead, we will not have any preferences about it at all - but argued, in response, that having preferences about future states of the world is a foundation of our lives.

In terms of the end of the world, I identified some plausible ways in which humanity might go extinct in the near future, most notably the development of synthetic diseases or general AI. Then I discussed some cognitive biases which hold us back from addressing them: such risks are unpleasant to think about; we often ignore small probabilities of important outcomes; and we are insensitive to the scope of what's at stake (consider, for instance, those who believe that a child's death is a tragedy, but the obliteration of humanity is not so bad - even though the latter would entail millions of the former!). But we shouldn't allow such biases to dictate our thinking. Indeed, under any plausible theory of moral uncertainty (such as MacAskill's), we should place significant weight on the far future even if we don't personally think it matters. The original question is not hypothetical; the best preparation for the end of the world is to take action to prevent it.

2. Debunk a modern myth.
I identified the myth that companies have obligations first and foremost to maximise shareholder value. I talked about the duties of individuals, and then how they play out within the structure of a modern firm in a way which dilutes moral responsibility. I also talked about the cases in which businesses are exempted from moral duties which we usually expect from individuals. I won't say too much about this here as I'm working on an extensive essay about this.

3. Is the only good answer one which destroys the question?
No, I answered, but the best ones are. Why? Well, language has evolved partly to reflect our cognitive biases, but it also reinforces them - in particular, the bias which says that terms and categories which seem "obvious" to us reflect a real distinction in the world. This is what led, for instance, to centuries of debate over the Ship of Theseus and whether or not it remained the "same" ship. But these intuitions hold us back from the obvious conclusion of modern science: that although we perceive some everyday distinctions as fundamental (did x cause y? Is w the same person as z?), instead - as Democritus said - "in truth there are only atoms and the void". Of course such a reductionist viewpoint doesn't mean that it is pointless to talk about which distinctions to draw: some levels of explanation are more useful than others in a human context. But we should also recognise that many such discussions are "empty" and reflect only linguistic differences. The best answers help us realise that fact so that we are able to better evaluate the concepts which we are using, and perhaps discard them altogether.

Philosophy I:

1. Should we be Bayesians?
I explained Bayesianism as conditionalisation from a prior, also noting a slight modification which needs to be made in response to Arntzenius' objections to conditionalisation and reflection (in short, we use an ur-prior, which we conditionalise anew from total evidence whenever we receive more). I then identified three possible interpretations of the question:
a) Is Bayesianism the ideal of rationality? I argued that it is because of Dutch books, which lead non-Bayesians to contradictory probability judgments, or else suspension of belief. However, since it can't account for any revisions to the laws of logic, we also need a meta-theory which takes that possibility into account.
b) Does Bayesianism describe human thinking well? No, because of the computational impossibility of implementing it. In the limiting case where not much computation is required, we might expect that evolution has designed our brains to draw conclusions in an approximately Bayesian manner; but it's unlikely that this applies in many cases.
c) Should we use Bayesianism as a guide to improving our thinking? Sometimes. For example, explicitly Bayesian reasoning helps us to avoid the base rate fallacy. However, in general it's rational to reason in ways which are explicitly ruled out by ideal Bayesianism - for example, revising our beliefs when we think of a new theory or notice a new implication of old evidence.
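
To illustrate that base-rate point (a standard textbook example, not one from the exam answer): suppose a disease has 1% prevalence, and a test detects it 90% of the time while giving false positives 9% of the time. Then by Bayes' theorem,

$$P(\text{disease}\mid +)=\frac{0.9\times 0.01}{0.9\times 0.01+0.09\times 0.99}\approx 0.09,$$

so a positive result leaves only about a 9% chance of having the disease - far lower than the estimate people intuitively give when they neglect the 1% base rate.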

2. Were ancient slave-holders bad people?
I claimed that there is no good way of integrating the idea of being a "bad person" with consequentialism. If we base our evaluation of people on outcomes of acts, then in the hypothetical case where knowledge of the horrors of World War 2 prevented a full-scale nuclear war, Hitler would have been a good person; also, in general unpredictable 'butterfly effects' into the far future will dominate. If we base it on subjective judgments of the utility of acts, then those who deludedly believe that they are certainly saving the world will dominate. If we base it on rationally permissible subjective judgments, then some people simply could not be very good people if they were born in a time or place that became insignificant in the long term.

In general, since our evaluations of people and our evaluations of outcomes are driven by fundamentally different mental processes, it's unlikely we'll ever get a good definition of the former in terms of the latter. So I moved on to the question of which definition would be most useful. That depends on at least three properties: whether it is intuitively acceptable enough to change people's behaviour; whether such behavioural changes will lead to very good outcomes; and whether it's consistent with other principles we value. In the case of slavery, since there are still millions of slaves across the world, making excuses for past slave-holders will probably lessen the strength of our impulses to help them. Also, we might lose faith in the idea of moral progress, which has motivated many important progressive movements. So we should be very clear that ancient slave-holders were bad people.

(This argument relies on the idea that we can and should try to change our moral intuitions in some ways; I didn't have time to discuss the meta-ethical arguments for this).


3. Do scientific explanations raise the probabilities of the evidence they explain?
I noted that the question was weird, because we usually think that we are certain of the data we already have. I then described what I thought we should mean by subjective probabilities and scientific explanations. I argued that the quality of the latter should be evaluated based on simplicity, coherence, and also subjective human interests - because the way we decide which levels of explanation we want is based on what we want a science to achieve. For example, we can explain the fact that almost all animal species have roughly equal numbers of male and female births either in terms of its evolutionary benefit, or the mathematical model of genetics which brings it about, or (very impracticably) the underlying atomic movements which implement animal life.

I then claimed that under the Duhem-Quine thesis - that every scientific observation must be taken to assume a large number of auxiliary hypotheses - we might doubt that one of the auxiliary hypotheses is true, and therefore that our observation is correct. We would be more likely to doubt this if there is no good explanation for that observation which is consistent with all the auxiliary hypotheses; therefore, a good explanation can help raise our credence that the event we observed actually happened. If "evidence" is taken to be such observed events, that answers the question. If "evidence" is taken to be merely the fact that we made an observation at all, then it becomes a little more difficult - can we really doubt, for instance, that we are seeing the colour red? But we can certainly doubt that we saw red at a point in the past, because our memories are fallible - so in this sense too a good explanation can raise the subjective probability that we actually had the experience we think we did.

General paper II:

1. 'All lives matter.' Do they?
I noted that I would take the question entirely literally at first, but eventually move on to its political context. So, what is a life? Noting the difficulty of defining it in biological terms, I moved on to the question of which lives might matter, and argued for consciousness as a key criterion. I claimed that we can distinguish three levels on which we value human lives: intrinsic (simply because a life is human), welfare (based on how good a life is for the person leading it), and outcome (how good the effects of a life are). For simplicity's sake, I assumed that all effects could be described in terms of changes to welfare, collapsing the third level into the second. Also, while most of us have clear intuitions that even humans with very little welfare - late-stage embryos, people in comas, etc - do matter intrinsically, such value is difficult to reason about consequentially due to the virtue ethical/deontological intuitions behind it.

Zooming in on welfare, I explained the problems with hedonic, preference and objective list accounts of welfare, since each can be taken to unacceptable extremes. (I also explained how our intuitions about individual lives cannot consistently be extended to whole populations because of Arrhenius' impossibility theorems). I then noted that under any reasonable fusion of these three accounts, there are some people - those who will have and cause zero or negative welfare over the rest of their lives - about whose continued survival we should be agnostic or even opposed. This is the first sense in which a life can "matter". Secondly, when deciding where to invest resources, there are some people who we can help very much for very little, and others whom we cannot help much, or at all. In that sense, the former lives matter more. However, even lives we cannot improve still matter in the sense that if we could increase their welfare without any trade-offs, we should do it. These three definitions help explain the clash between "Black lives matter" and "All lives matter": all lives matter in the third sense, since if we could hypothetically increase their welfare, we should. But black lives matter more in the second sense, since there are ways we as a society could help them very much very easily, such as cracking down on police violence.

2. Are there any questions that should be beyond the pale of philosophical discussion?
I noted that there are two contexts for this question. The first is the entirely theoretical context; the second is the current context of ideological conflict over offense and censorship. In theory, it is clear that there are some discussions which should not be had: truth is not the only good which philosophers do or should value. Human flourishing is another, and certain truths - such as a true report to a genocidal dictator listing all the crimes committed by minorities - lead to the exact opposite.

In the current context, however, the question is whether we can identify those cases. I noted that from an outside view, people with the power of censorship have been very bad at identifying what to censor over the last few centuries. However, it's difficult to dispute that we should be very confident in censoring claims that certain ethnic groups, religious groups, women, and so on, are normatively inferior. The difficulty lies in distinguishing normative claims from descriptive claims - and I argued that in practice, social justice movements have often failed to do so. For example, the claim that sexual orientation is a choice, although mostly used by conservatives to argue against gay marriage, is only harmful to LGBT+ individuals given the further claim that intrinsic identities matter more than chosen ones. I think that there are very good reasons to consider the latter claim false.

Of course, many do think that it is true. The argument that sexuality is intrinsic helps to counter their arguments without needing to challenge their fundamental normative claims. But it also has the potential to massively backfire - for instance if scientific evidence to the contrary emerges, then gay marriage advocates will be discredited as unscientific. I claimed that another example of a potentially harmful lack of clarity has been the continued conflation of "women are, on average, less interested in mathematics than men, due to intrinsic factors" with "we shouldn't give equal treatment to the women who are interested in mathematics." Of course you cannot derive either statement from the other. But since there are a number of respected psychologists who agree with the former statement, making it taboo is a bad strategy for feminism. Instead, we should all work to clearly separate normative and descriptive claims, so that arguments for equal rights do not need to depend on scientific findings.

3. 'When everyone is somebody, then no one's anybody' (W.S. GILBERT). Discuss.
I pointed out that depression and mental health issues are incredibly prevalent among millennials. We are in an uncanny valley: we have developed technology which replaces many of our social interactions, but not yet technology good enough to replace most of the valuable aspects of social interaction. This harms our friendships; meanwhile, time spent online is generally unproductive and unsatisfying. Over time we will, I think, become better at dealing with the distractions of the internet. But right now it's not only distracting, it's also individualistic, since using technology is currently a very solitary activity.

The growth of individualism is not new; in fact, there have been many contributing factors over the last century or more. We've seen declines in organised religion, massive migration to cities (and, indeed, between countries), no-fault divorce, the normalisation of frequent job-switching, smaller families, widespread acceptance of individualist economic assumptions, and loss of local social structures (see the classic book Bowling Alone). We now focus on finding the ideal career without realising that it's communities, and relationships within them, that make us happy. Today everyone is 'somebody' - an individual - but few of us are anybody in the communal sense which matters the most.

Philosophy II:

1. Is scepticism irrefutable?
I framed philosophical scepticism as a problem of underdetermination: all evidence will always be consistent with multiple possible worlds. As long as we do not rule any of them out a priori, we will always have multiple options to which we assign non-zero probability. But can we refute scepticism in the sense of rationally assigning it very low probability? I presented some attempts to do so from Blackburn, Harrod, Stove, etc, but argued that they all fail. Then I turned to Solomonoff Induction. While it doesn't solve the traditional problem of induction, since it relies on a number of assumptions, I argued that it represents important progress. Further, when we think of hypotheses as descriptions in a certain language, then on average shorter descriptions must be more probable; this helps to independently motivate Occam's Razor. (I hope to write a post soon to explore this last point further).
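
A minimal sketch of that last claim (my gloss here, not part of the exam answer, and assuming hypotheses are encoded as binary strings whose probabilities sum to at most 1): there are only 2^n descriptions of length n, so the average probability assigned to a length-n description satisfies

$$\frac{1}{2^n}\sum_{|h|=n}P(h)\;\le\;\frac{1}{2^n},$$

which shrinks exponentially as n grows. So whatever prior we pick, longer descriptions must on average receive less probability than shorter ones - one way of motivating Occam's Razor without any appeal to intuitions about simplicity.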

2. Should we be effective altruists?
I felt like I wrote fairly standard things here - but then again, they are standard things which many philosophers don't accept. I said that EA was based on two normative claims and a factual claim. The factual claim: doing a great deal of good costs us very little. The normative claims: 1. if we can do significant amounts of good without significant sacrifices, then we have a duty to do so; 2. fulfilling this duty requires us to focus on the most effective charities. The basic intuition behind 1 is evoked most strongly in Singer's Drowning Child thought experiment; however, it also relies on the factual claim above, and the further claim that the difference between saving a drowning child near us and a starving child in a faraway country is not morally significant. I think it's difficult to make strong arguments for this latter claim, as people either find it intuitively compelling, or not; hopefully, though, as the world gets "smaller", future generations will find it more convincing. In support of 2 I argued that the duty was never phrased in terms of "donating money" but rather "doing good"; replacing a donation to an effective charity with one to an ineffective charity is thus a choice to do less good, which means that you're not fulfilling that duty as well.

I then addressed some objections. Firstly, that we might not believe the factual claim, because the net effect of donations might include, for instance, propping up corrupt governments. But it would be strange if, in a world of such inequality, there are no ways we can significantly benefit others for relatively little money - and if there are, we can find out about them in the same way that we gain other scientific and economic knowledge. Secondly, that EA is too demanding. But you don't need to accept an infinite duty, only that you have a moral obligation to help others until the point that it causes you significant sacrifices. Different people might have different ideas of what that means, but it's difficult to believe that giving up a slightly fancier car or a new TV is really such a significant sacrifice - especially given that psychological research indicates that donating that same money will probably actually make you happier! Thirdly, that far future concerns will end up dominating. But they probably should dominate, because of the huge amounts at stake; and anyway, EAs who believe that the far future doesn't matter can simply avoid contributing to those causes. Being an EA isn't defined by participating in the movement, but rather by acting on the beliefs above.

3. Does it make sense to say that one false theory is closer to the truth than another?
I claimed that the idea of being "closer to the truth" is an inherently vague one which doesn't generalise well to scientific theories. An intuitive definition would include at least two criteria: closeness to observations, and closeness to fundamental composition (i.e. ontology). It's far easier to measure the former (although Kuhn disagrees, claiming that observations made in different paradigms are incomparable). When it comes to everyday phenomena, we only really care about the former - for instance, most of us agree that cats are made out of atoms, but if it turns out that the science was wrong and atoms just don't exist, we wouldn't say that all our statements about cats had been wrong this whole time. (I claim therefore that Putnam drew incorrect conclusions from his 'Twin Earth' thought experiment). On the other hand, we care about fundamental composition more in a scientific context: if atoms didn't exist then it's probably reasonable to say that we've been wrong about molecules this whole time. Van Fraassen thinks that there's a clear distinction between these two cases, based on what's "observable" and "unobservable", but the line he draws just doesn't make sense. So the best we can do is to accept some combination of the two criteria, with the understanding that it will always be vague and impossible to formalise. In fact, we can interpret 'structural realism' as such a compromise.

Other interesting questions from the general papers:
1. Why are there still Communists?
Obviously because real Communism hasn't been tried yet. /s
2. If you were a dictator, what would you ban?
It was surprisingly difficult to think of an answer to this one - perhaps because bans, as very blunt tools, are seldom the best option. Maybe smoking, but that would've been a pretty boring essay.
3. Account for the rise of internet memes.
4. What is the use of magic?
5. Can a deal be art?
6. Invent a new idiom - then spill the beans.
7. For what should we ask forgiveness?
8. Why is the price of housing so high?
Supply-side stupidity!
9. Is incompetence the most underrated force in human history?
10. 'The first thing we do, let's kill all the lawyers' (WILLIAM SHAKESPEARE). Discuss.
11. What if there were no hypothetical questions?

From the philosophy papers:
1. "I have tried to show that what matters in the continued existence of a person are, for the most part, relations of degree" (PARFIT). Discuss.
Awesome question, wish I knew enough to properly answer it. It seems difficult to deny the force of Parfit's argument that our future selves are different people in the same way that everyone else is, especially because of its thoroughly reductionist grounding. But in practice it may well be psychologically impossible to fully accept it.

2. Is the golden rule a good guide to morality?
3. What is distinctive about moral disagreement?
Two fairly standard ethics questions, although they touch on important issues.

4. Is it possible to refute those who deny the Law of Non-Contradiction?
5. Is there any more reason to doubt the existence of root(-1) than to doubt the existence of root(2)?
6. Does the Generalised Continuum Hypothesis have a truth-value?
7. Are there any infinitesimal quantities?
Some fascinating questions in philosophy of mathematics. Again, I wanted to answer them but simply haven't read up enough on different theories of what it means for numbers or logical proofs to 'exist' or 'be true'.

8. Does music have meaning?
9. What, if anything, unifies surprise, anger, sorrow, disgust, guilt, contempt, amusement and wonder as emotions?
Two thought-provoking questions touching on everyday life.

10. Is quantum non-locality consistent with the Special Theory of Relativity?
11. What is the best interpretation of probabilities in quantum mechanics?

And from the Economics papers, which I had a chance to look at, but didn't sit:
1. Does the experience of China undermine the proposition that democracy is good for economic growth?
Not at all; rather, it supports the idea that democracy is low-variance for economic growth, while autocracies can be very good for it (China in the last few decades) or very, very bad (China right before that). Since potential downside far outweighs potential upside, democracy is still superior.

2. Is there a tension between global and national equality?
Without better redistribution, yes there is: one of the best things for global equality is the outsourcing of jobs to developing countries, which obviously hits blue-collar workers hard.

3. Can a price be put on human life?
Empirically, yes: a few thousand dollars in some African countries; a few million dollars in most Western ones. But don't confuse price with value! Water, for example, is the most valuable thing to most people, but its price is very cheap.