EA - Large epistemological concerns I should maybe have about EA a priori by Luise

The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Large epistemological concerns I should maybe have about EA a priori, published by Luise on June 7, 2023 on The Effective Altruism Forum.

Note

I originally wrote this as a private doc, but I thought maybe it's valuable to publish. I've only minimally edited it.

Also, I now think the epistemological concerns listed below aren't super clearly carved and have a lot of overlap. The list was never meant to be a perfect carving, just to motion at the shape of my overall concerns, but even so, I'd write it differently if I were writing it today.

Motivation

For some time now, I’ve wanted nothing more than to finish university and just work on EA projects I love. I’m about to finish my third year of university and could do just that. A likely thing I would work on is alignment field-building, e.g., helping to run the SERI MATS program again. (In this doc, I will use alignment field-building as the representative of all the community building/operations-y projects I’d like to work on, for simplicity.)

However, in recent months, I have become more careful about how I form opinions. I am more truthseeking and more epistemically modest (but also more hopeful that I can do more than blind deferral in complex domains). I now no longer endorse the epistemics (used here broadly as “ways of forming beliefs”) that led me to alignment field-building in the first place. For example, I think this in part looked like “chasing cool, weird ideas that feel right to me” and “believing whatever high-status EAs believe”.

I am now deeply unsure about many assumptions underpinning the plan to do alignment field-building. I think I need to take some months to re-evaluate these assumptions. In particular, here are the questions I feel I need to re-evaluate:

1. What should my particular takes about particular cause areas (chiefly alignment) and about community building be?

My current takes often feel immodest and/or copied from specific high-status people. For example, my takes on which alignment agendas are good are entirely copied from a specific Berkeley bubble. My takes on the size of the “community building multiplier” are largely based on quite immodest personal calculations, disregarding that many “experts” think the multiplier is lower. I don’t know what the right amount of immodesty and copying from high-status people is, but I’d like to at least try to get closer.

2. Is the “EA viewpoint” on empirical issues (e.g., on AI risk) correct (because we are so smart)?

Up until recently, I just assumed (a part of) EA is right about large empirical questions like “How effectively-altruistic is ‘Systemic Change’?”, “How high are x-risks?”, and “Is AI an x-risk?”. (“Empirical” as opposed to “moral”.) At first, this was maybe a naïve kind of tribalistic support; later it was because of the “superior epistemics” of EAs. The poster version of this is “Just believe whatever Open Phil says”.

Here’s my concern: in general, people adopt stories they like on big questions, e.g., the capitalism-is-cancer-and-we-need-to-overhaul-the-system story or the AI-will-change-everything-tech-utopia story. People don’t seek out all the cruxy information and form credences to actually get closer to the truth. I used to be fine just backing “a plausible story of how things are”, as I suspect many EAs are. But now I want to back the correct story of how things are.

I’m wondering if the EA/Open Phil worldview is just a plausible story. This story probably contains a lot of truthseeking and truth on lower-level questions, such as “How effective is deworming?”. But on high-level questions such as “How big a deal is AGI?”, maybe it is close to impossible not just to believe in a story and instead do the hard truthseeking thing. Maybe that would be holding EA/Open Phil to an impossible standard. I simply don’t know currently if EA/Open Phil ep...
