EA - “My Model Of EA Burnout” (Logan Strohl) by will

The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund



Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: “My Model Of EA Burnout” (Logan Strohl), published by will on February 3, 2023 on The Effective Altruism Forum.

(Linkposting with permission from the author, Logan Strohl. Below, my - Will's - excerpted summary of the post precedes the full text. The first-person speaker is Logan.)

Summary

I think that EA burnout usually results from prolonged dedication to satisfying the values you think you should have, while neglecting the values you actually have.

Perhaps your true values just happen to exactly match the central set of EA values, and that is why you are an EA. However, I think it’s much more common for people to be EAs because their true values have some overlap with the EA values; and I think it’s also common for EAs to dramatically overestimate the magnitude of that overlap. According to my model, this is why “EA burnout” is a thing.

If I am wrong about what I value, then I will mismanage my motivational resources. Chronic mismanagement of motivational resources results in some really bad stuff.

Over a couple of years, I change my career, my friend group, and my hobbies to reflect my new values. I spend as little time as possible on Things That Don’t Matter, because now I care about Impact. I’ve oriented my whole life around The Should Values for my longtermist EA strategy [...] while neglecting my True Values. As a result, my engines of motivation are hardly ever receiving any fuel.

It seems awfully important to me that EAs put fuel into their gas tanks, rather than dumping that fuel onto the pavement where fictional cars sit in their imaginations.

It is probably possible to recover even from severe cases of EA burnout. I think I’ve done a decent job of it myself, though there’s certainly room for improvement. But it takes years.

My advice to my past self would be: First, know who you are. If you’re in this for the long haul, build a life in which the real you can thrive. And then, from the abundance of that thriving, put the excess toward Impact.

Full Text

(Probably somebody else has said most of this. But I personally haven't read it, and felt like writing it down myself, so here we go.)

I think that EA burnout usually results from prolonged dedication to satisfying the values you think you should have, while neglecting the values you actually have.

Setting aside for the moment what “values” are and what it means to “actually” have one, suppose that I actually value these things (among others):

True Values:
Abundance
Power
Novelty
Social Harmony
Beauty
Growth
Comfort
The Wellbeing Of Others
Excitement
Personal Longevity
Accuracy

One day I learn about “global catastrophic risk”: Perhaps we’ll all die in a nuclear war, or an AI apocalypse, or a bioengineered global pandemic, and perhaps one of these things will happen quite soon.

I recognize that GCR is a direct threat to The Wellbeing Of Others and to Personal Longevity, and as I do, I get scared. I get scared in a way I have never been scared before, because I’ve never before taken seriously the possibility that everyone might die, leaving nobody to continue the species or even to remember that we ever existed—and because this new perspective on the future of humanity has caused my own personal mortality to hit me harder than the lingering perspective of my Christian upbringing ever allowed.
For the first time in my life, I’m really aware that I, and everyone I will ever care about, may die.

My fear has me very focused on just two of my values: The Wellbeing Of Others and Personal Longevity. But as I read, think, and process, I realize that pretty much regardless of what my other values might be, they cannot possibly be satisfied if the entire species—or the planet, or the lightcone—is destroyed.

[This is, of course, a version of EA that’s especially focused on the far future; but I think it’s common for a ...
