Posts

Moral strategies at different capability levels

Let’s consider three ways you can be altruistic towards another agent:

You care about their welfare: some metric of how good their life is (as defined by you). I’ll call this care-morality - it endorses things like promoting their happiness, reducing their suffering, and hedonic utilitarian behavior (if you care about many agents).

You care about their agency: their ability to achieve their goals (as defined by them). I’ll call this cooperation-morality - it endorses things like honesty, fairness, deontological behavior towards others, and some virtues (like honor).

You care about obedience to them. I’ll call this deference-morality - it endorses things like loyalty, humility, and respect for authority.

I think a lot of unresolved tensions in ethics come from seeing these types of morality as in opposition to each other, when they’re actually complementary: Care-morality mainly makes sense as an attitude towards agents who are much less capable than you, and/or can't make decisions…

Which values are stable under ontology shifts?

Here's a rough argument which I've been thinking about lately: We have coherence theorems which say that, if you’re not acting like you’re maximizing expected utility over outcomes, you’ll make payments which predictably lose you money. But in general I don't see any principled distinction between “predictably losing money” (which we view as incoherent) and “predictably spending money” (to fulfill your values): it depends on the space of outcomes over which you define utilities, which seems pretty arbitrary. You could interpret an agent being money-pumped as a type of incoherence, or as an indication that it enjoys betting and is willing to pay to do so; similarly you could interpret an agent passing up a “sure thing” bet as incoherence, or just a preference for not betting which it’s willing to forgo money to satisfy. Many humans have one of these preferences!

Now, these preferences are somewhat odd ones, because you can think of every action under uncertainty as a type of bet…
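To make the money-pump argument concrete, here's a minimal sketch (my own toy illustration, not code from the post) of an agent with cyclic preferences A ≺ B ≺ C ≺ A: it pays a small fee for each trade it regards as an upgrade, cycles forever, and predictably ends up poorer without ever holding a better outcome.

```python
# Illustrative money pump: an agent with cyclic preferences pays a small
# fee for each trade it views as an upgrade, and ends up strictly poorer.
# (Toy example of my own; the post itself contains no code.)

FEE = 1  # price the agent is willing to pay per "upgrade"

# Cyclic preferences: A < B, B < C, C < A (no consistent utility exists).
prefers = {("A", "B"): True, ("B", "C"): True, ("C", "A"): True}

def accepts_trade(current, offered):
    """The agent accepts any trade it strictly prefers."""
    return prefers.get((current, offered), False)

holding, money = "A", 0
for offered in ["B", "C", "A"] * 3:  # three full cycles of offers
    if accepts_trade(holding, offered):
        holding, money = offered, money - FEE

print(holding, money)  # -> A -9: back where it started, 9 units poorer
```

Whether this counts as "incoherence" or as "an agent that enjoys trading and pays to do so" is exactly the interpretive question above: it depends on how you carve up the outcome space.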

Making decisions using multiple worldviews

Tl;dr: the problem of how to make decisions using multiple (potentially incompatible) worldviews (which I'll call the problem of meta-rationality) comes up in a range of contexts, such as epistemic deference. Applying a policy-oriented approach to meta-rationality, and evaluating worldviews by the quality of their advice, dissolves several undesirable consequences of the standard "epistemic" approach to deference.

Meta-rationality as the limiting case of separate worldviews

When thinking about the world, we’d ideally like to be able to integrate all our beliefs into a single coherent worldview, with clearly-demarcated uncertainties, and use that to make decisions. Unfortunately, in complex domains, this can be very difficult. Updating our beliefs about the world often looks less like filling in blank parts of our map, and more like finding a new worldview which reframes many of the things we previously believed. Uncertainty often looks less like a probability distribution…
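One way to make "evaluate worldviews by the quality of their advice" concrete (my own formalization sketch; the worldview names, loss scores, and learning rate below are assumptions, not from the post) is the classic multiplicative-weights experts setup: each worldview recommends an action, is scored in hindsight on how well its advice worked out, and gains or loses influence over future decisions accordingly.

```python
# A toy "policy-oriented" approach to deference: weight each worldview by
# the track record of its advice, not by credence that it is true.
# (Sketch of my own; all names and numbers are illustrative assumptions.)

worldviews = {"mainstream_econ": 1.0, "longtermism": 1.0, "common_sense": 1.0}
ETA = 0.5  # learning rate: how quickly bad advice loses influence

def update_weights(weights, losses, eta=ETA):
    """Multiplicative-weights update: advice that worked out badly
    (high loss in [0, 1]) shrinks that worldview's future influence."""
    return {w: weights[w] * (1 - eta * losses[w]) for w in weights}

def blended_decision(weights, advice):
    """Follow whichever recommendation has the most total weight behind it."""
    totals = {}
    for w, rec in advice.items():
        totals[rec] = totals.get(rec, 0.0) + weights[w]
    return max(totals, key=totals.get)

# One round: each worldview recommends an action, then is scored in hindsight.
advice = {"mainstream_econ": "diversify", "longtermism": "donate", "common_sense": "diversify"}
print(blended_decision(worldviews, advice))  # -> diversify
worldviews = update_weights(worldviews, {"mainstream_econ": 0.2, "longtermism": 0.8, "common_sense": 0.2})
```

Note that nothing here requires the worldviews to share an ontology or assign each other probabilities; they are compared only on the outcomes of following their advice, which is the point of the policy-oriented framing.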

Science-informed normativity

The debate over moral realism is often framed in terms of a binary question: are there ever objective facts about what’s moral to do in a given situation? The broader question of normative realism is framed in a similar way: are there ever objective facts about what’s rational to do in a given situation? But I think we can understand these topics better by reframing them in terms of the question: how much do normative beliefs converge or diverge as ontologies improve? In other words: let’s stop thinking about whether we can derive normativity from nothing, and start thinking about how much normativity we can derive from how little, given that we continue to improve our understanding of the world. The core intuition behind this approach is that, even if a better understanding of science and mathematics can’t directly tell us what we should value, it can heavily influence how our values develop over time.

Values under ontology improvements

By “ontology” I mean the set of concepts which…

Three intuitions about effective altruism: responsibility, scale, self-improvement

This is a post about three intuitions for how to think about the effective altruism community.

Part 1: responsibility

The first intuition is that, in a global sense, there are no “adults in the room”. Before covid I harboured a hope that, despite the incessant political squabbling we see worldwide, in the face of a major crisis with global implications there were serious people who would come out of the woodwork to ensure that it went well. There weren’t. And that’s not just a national phenomenon, that’s a global phenomenon. Even countries like New Zealand, which handled covid incredibly well, weren’t taking responsibility in the global way I’m thinking about - they looked after their own citizens, but didn’t try to speed up vaccine distribution overall (e.g. by allowing human challenge trials), or fix everyone else’s misunderstandings. Others developed the same “no adults in the room” intuition by observing failures on different issues. For some, AI risk; for others, climate change…

Book review: Very Important People

New York’s nightclubs are the particle accelerators of sociology: reliably creating the precise conditions under which exotic extremes of status-seeking behaviour can be observed. Ashley Mears documents it all in her excellent book Very Important People: Status and Beauty in the Global Party Circuit. A model turned sociology professor, Mears spent hundreds of nights researching the book in New York’s most exclusive nightclubs, as well as at similar parties across the world. The book abounds with fascinating details; in this post I summarise it and highlight a few aspects which I found most interesting.

Here’s the core dynamic. There are some activities which are often fun: dancing, drinking, socialising. But they become much more fun when they’re associated with feelings of high status. So wealthy men want to use their money to buy the feeling of having high-status fun, by doing those activities while associated with (and ideally while popular amongst) other high-status people, particularly…

Beyond micromarriages

tl;dr: micromarriages aren't fully analogous to micromorts, which makes it tricky to define them satisfactorily. I introduce an alternative unit: QAWYs (Quality-Adjusted Wife Years), where 1 QAWY is an additional year of happy marriage.

I once compiled a list of concepts which I’d discovered were much less well-defined than I originally thought. I’m sad to say that I now have to add Chris Olah’s micromarriages to the list. In his words: “Micromarriages are essentially micromorts, but for marriage instead of death. A micromarriage is a one in a million chance that an action will lead to you getting married, relative to your default policy.” It’s a fun idea, and helpful in making small probabilities feel more compelling. But upon thinking about it more, I’ve realised that the analogy doesn’t quite work.

The key difference is that micromorts are a measure of acute risk - i.e. immediate death. For activities like skydiving, this is the main thing to worry about, so it’s a pretty good…
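To illustrate the unit with a back-of-envelope sketch (made-up numbers; the conversion formula is my own gloss, not from the post): where micromorts price a single acute event, an action's expected QAWYs depend both on the probability that it leads to marriage and on the quality-adjusted years of marriage that would follow.

```python
# Back-of-envelope QAWY arithmetic (illustrative numbers only).
# Expected QAWYs = P(action leads to marriage) * expected years married
#                  * average quality, with quality in [0, 1]
#                  (1 = a fully happy year of marriage).

def expected_qawys(micromarriages, years_married, avg_quality):
    """Convert an action's micromarriages into expected QAWYs."""
    p_marriage = micromarriages / 1_000_000  # 1 micromarriage = 1-in-a-million
    return p_marriage * years_married * avg_quality

# E.g. an action worth 500 micromarriages, assuming a resulting marriage
# would last 30 years at average quality 0.8:
print(expected_qawys(500, 30, 0.8))  # -> 0.012 expected QAWYs
```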