EA - University EA Groups Need Fixing by Dave Banerjee

The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: University EA Groups Need Fixing, published by Dave Banerjee on August 3, 2023 on The Effective Altruism Forum.

(Cross-posted from my website.)

I recently resigned as Columbia EA President and have stepped away from the EA community. This post aims to explain my EA experience and some reasons why I am leaving EA. I will discuss poor epistemic norms in university groups, why retreats can be manipulative, and why paying university group organizers may be harmful. Most of my views on university group dynamics are informed by my experience with Columbia EA. My knowledge of other university groups comes from conversations with other organizers from selective US universities, but I don't claim to have a complete picture of the university group ecosystem.

Disclaimer: I've written this piece in a more aggressive tone than I initially intended. I suppose the writing style reflects my feelings of EA disillusionment and betrayal.

My EA Experience

During my freshman year, I heard about a club called Columbia Effective Altruism. Rumor on the street told me it was a cult, but I was intrigued. Every week, my friend would return from the fellowship and share what he learned. I was fascinated. Once spring rolled around, I applied for the spring Arete (Introductory) Fellowship.

After enrolling in the fellowship, I quickly fell in love with effective altruism. Everything about EA seemed just right - it was the perfect club for me. EAs were talking about the biggest and most important ideas of our time. The EA community was everything I hoped college would be. I felt like I found my people. I found people who actually cared about improving the world. I found people who strived to tear down the sellout culture at Columbia.

After completing the Arete Fellowship, I reached out to the organizers asking how I could get more involved. They told me about EA Global San Francisco (EAG SF) and a longtermist community builder retreat. Excited, I applied to both and was accepted. Just three months after getting involved with EA, I was flown out to San Francisco for a fancy conference and a seemingly exclusive retreat.

EAG SF was a lovely experience. I met many people who inspired me to be more ambitious. My love for EA further cemented itself. I felt psychologically safe and welcomed. After about thirty one-on-ones, the conference was over, and I was on my way to an ~exclusive~ retreat.

I like to think I can navigate social situations elegantly, but at this retreat, I felt totally lost. All these people around me were talking about so many weird ideas I knew nothing about. When I'd hear these ideas, I didn't really know what to do besides nod my head and occasionally say "that makes sense." After each one-on-one, I knew that I shouldn't update my beliefs too much, but after hearing almost every person talk about how AI safety is the most important cause area, I couldn't help but be convinced. By the end of the retreat, I went home a self-proclaimed longtermist who prioritized AI safety.

It took several months to sober up. After rereading some notable EA criticisms (Bad Omens, Doing EA Better, etc.), I realized I got duped. My poor epistemics led me astray, but weirdly enough, my poor epistemics gained me some social points in EA circles. While at the retreat and at EA events afterwards, I was socially rewarded for telling people that I was a longtermist who cared about AI safety. Nowadays, when I tell people I might not be a longtermist and don't prioritize AI safety, the burden of proof is on me to explain why I "dissent" from EA. If you're a longtermist AI safety person, there's no need to offer evidence to defend your view.

(I would be really excited if more experienced EAs asked EA newbies more often why they take AI safety seriously. I think what normally happens is that the experienced EA gets su...
