Three intuitions about effective altruism: responsibility, scale, self-improvement

This is a post about three intuitions for how to think about the effective altruism community.

Part 1: responsibility

The first intuition is that, in a global sense, there are no “adults in the room”. Before covid, I harboured a hope that, despite the incessant political squabbling we see worldwide, serious people would come out of the woodwork in the face of a major crisis with global implications and ensure that it went well. There weren’t any. And that’s not just a national phenomenon; it’s a global one. Even countries like New Zealand, which handled covid incredibly well, weren’t taking responsibility in the global way I’m thinking about - they looked after their own citizens, but didn’t try to speed up vaccine distribution overall (e.g. by allowing human challenge trials) or fix everyone else’s misunderstandings.

Others developed the same “no adults in the room” intuition by observing failures on different issues. For some, AI risk; for others, climate change; for others, policies like immigration or housing reform. I don’t think covid is a bigger failure than any of these, but I think it comes much closer to creating common knowledge that the systems we have in place aren’t capable of steering through global crises. This naturally points us towards a long-term goal for the EA community: to become the adults in the room, the people who are responsible enough and capable enough to steer humanity towards good outcomes.

By this I mean something different from just “being in charge” or “having a lot of power”. There are many large power structures, containing many competent people, which try to keep the world on track in a range of ways. What those power structures lack is the ability to absorb novel ideas and take novel actions in response. In other words, the wider world solves large problems via OODA loops (observe, orient, decide, act) that take decades. In the case of climate change, decades of advocacy led to public awareness, which led to large-scale policies, plus a significant reallocation of talent. I think this will be enough to avoid catastrophic outcomes, but more from luck than skill. In the case of covid, the OODA loop on substantially changing vaccine regulations was far too long to make a difference (although maybe it’ll make a difference to the next pandemic).

The rest of the world has long OODA loops because people inside power structures don’t have strong incentives to fix problems, and because people outside them can’t mobilise people, ideas and money quickly. But EA can. I don’t think there’s any other group in the world which can allocate as much talent as quickly as EA can; I don’t think there’s any other group which can identify and propagate important new ideas as quickly; and there are few groups which can mobilise as much money as flexibly.

Having said all that, I don’t think we’re currently the adults in the room, or else we would have made much more of a difference during covid. While covid wasn’t itself a central EA concern, it’s closely related to one of our central concerns (pandemic preparedness), and would have been worth addressing for reputational reasons alone. But I do think we were closer to being the adults in the room than almost any other group - particularly in terms of long-term warnings about pandemics, short-term warnings about covid in particular, and converging quickly towards accurate beliefs. We should reflect on what it would have taken to convert those advantages into much more concrete impact.

I want to emphasise, though, that being the adults in the room doesn’t require each individual to take on a feeling of responsibility towards the world. Perhaps a better way to think about it: every individual EA should take responsibility for the EA community functioning well, and the EA community should take responsibility for the world functioning well. (I’ve written a little about the first part of that claim in point four of this post.)

Part 2: scale, not marginalism

Historically, EA has thought primarily about the marginalist question of how to do the most good per unit of resources. An alternative, which is particularly natural in light of part 1, is simply to ask: how can we do the most good overall? In some sense the two are tautologically equivalent, given finite resources. But a marginalist mindset makes it harder to be very ambitious - it cuts against thinking at scale. For the most exciting projects, the question is not “how effectively are we using our resources?” but rather “can we make it work at all?” - where, if it does work, the returns will dwarf any realistic amount of investment we might muster. This is basically the startup investor mindset, and the mindset behind megaprojects.

Marginalism has historically focused on evaluating possible projects to find the best one. Being scale-focused should nudge us towards focusing more on generating possible projects, since on a scale-focused view the hardest part is finding any lever which will have a big impact on the world. Think of a scientist noticing an anomaly which doesn’t fit their existing theories. If they tried to evaluate up front whether the effects of understanding the anomaly would be good or bad, they’d find it very difficult to make progress, and might stop looking. But if they approach it with curiosity, they’re much more likely to discover levers on the world which nobody else knows about - and only then are they in a position to figure out what to do with them.

There are downsides of scaling, though. Right now, EA has short OODA loops because we have a very high concentration of talent, a very high-trust environment, and a small enough community that coordination costs are low. As we try to do more large-scale things, these advantages will slowly diminish; how can we maintain short OODA loops regardless? I’m very uncertain; this is something we should think more about. (One wild guess: we might be the one group best-placed to leverage AI to solve internal coordination problems.)

Part 3: self-improvement and growth mindset

In order to do these ambitious things, we need great people. Broadly speaking, there are two ways to get great people: recruit them, or create them. The tradeoff between these two can be difficult - focusing too much on the former can create a culture of competition and insecurity; focusing too much on the latter can be inefficient and soak up a lot of effort.

In the short term, it seems like there is still low-hanging fruit when it comes to recruitment. But in the longer term, my guess is that EA will need to focus on teaching the skillsets we’re looking for - especially when recruiting high school students or early undergrads. Fortunately, I think there’s a lot of room to do better than existing education pipelines. Part of that involves designing specific programs (like MLAB or AGI safety fundamentals), but probably the more important part involves EA’s culture prioritising learning and growth.

One model for how to do this is the entrepreneurship community. That’s another place where returns are very heavy-tailed and people are trying to pick extreme winners - and yet it’s surprisingly non-judgemental. The implicit message I get from it is that anyone can be a great entrepreneur if they try hard enough. That creates a virtuous cycle: it’s not just a good way to push people to upskill, it also builds the sort of community that attracts ambitious and growth-minded people. I do think learning to be a highly impactful EA is harder in some ways than learning to be a great entrepreneur - we don’t get feedback anywhere near as quickly as entrepreneurs do, so the strategy of trying fast and failing fast is much less helpful. But there are plenty of other ways to gain skills, especially in a community which gives you the support and motivation to continually improve.
