EA - On focusing resources more on particular fields vs. EA per se - considerations and takes by Ardenlk

The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On focusing resources more on particular fields vs. EA per se - considerations and takes, published by Ardenlk on June 24, 2023 on The Effective Altruism Forum.

Epistemic status: This post is an edited version of an informal memo I wrote several months ago. I adapted it for the forum at the prompting of EA strategy fortnight. At the time of writing, I saw its value as mostly lying in laying out considerations and trying to structure a conversation that felt a bit messy to me at the time, though I do give some of my personal takes too.

I went back and forth a decent amount about whether to post this - I'm not sure about a lot of it. But some people I showed it to thought it would be good to post, and it feels in the spirit of EA strategy fortnight to have a lower bar for posting, so I'm going for it.

Overall take

Some people argue that the effective altruism community should focus more of its resources on building cause-specific fields (such as AI safety, biosecurity, global health, and farmed animal welfare), and less on effective altruism community building per se. I take the latter to mean something like: community building around the basic ideas/principles, which invests in particular causes always with a more tentative attitude of "we're doing this only insofar as, and while, we're convinced this is actually the way to do the most good." (I'll call this "EA per se" for the rest of the post.)

I think there are reasons for some shift in this direction.
But I also have some resistance to some of the arguments I think people have for it. My guess is that:

- Allocating some resources from "EA per se" to field-specific development will be an overall good thing, but
- My best guess (not confident) is that only a modest reallocation is warranted, and
- I worry some reasons for reallocation are overrated.

In this post I'll:

- Articulate the reasons I think people have for favouring shifting resources in this way (just below), and give my takes on them (this will doubtless miss some reasons).
- Explain some reasons in favour of continuing (substantial) support for EA per se.

Reasons I think people might have for a shift away from EA per se, and my quick takes on them

1. The reason: The EA brand is (maybe) heavily damaged post-FTX, making building EA per se less tractable and less valuable, because getting involved in EA per se now carries bigger costs.

My take: I think how strong this is basically depends on how people perceive EA now, post-FTX, and I'm not convinced that the public feels as badly about it as some other people seem to think. It's hard to infer how people think about EA just by looking at headlines or Twitter coverage over the course of a few months. My impression is that lots of people are still learning about EA and finding it intuitively appealing, and it's unclear how much this has changed on net post-FTX.

Also, EA per se has a lot to contribute to the conversation about AI risk - and was talking about it before AI concern became mainstream - so it's not clear it makes sense to pull back from the label and community now.

I'd want someone to look at and aggregate systematic measures like subscribers to blogs, advising applications at 80,000 Hours, applications to EA Globals, people interested in joining local EA groups, etc. (As far as I know, as of quickly revising this in June, these systematic measures are actually going fairly strong, but I have not really tried to assess this.
These survey responses seem like a mild positive update on public perceptions.)

Overall, I think this is probably some reason in favour of a shift, but not a strong one.

2. The reason: Maybe building EA per se is dangerous because it attracts/boosts actors like SBF. (See Holden's last bullet here.)

My take: My guess is that this is a weak-ish reason - though I...
