EA should blurt, by RobBensinger
The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA should blurt, published by RobBensinger on November 22, 2022 on The Effective Altruism Forum.

A lot of EAs are reporting that some things seem like early signs of character or judgment flaws in SBF — an argument that seems wrong, an action that seems unjustified, etc. — now that they can reexamine those data points with the benefit of hindsight.

But the mental motions involved in "revisit the past and do a mental search for warning signs confirming that a Bad Person is bad" are pretty different from the mental motions involved in noticing and responding to problems before the person seems Bad at all. "Noticing red flags" often isn't what it feels like from the inside to properly notice, respond to, and propagate warning signs that someone you respect is fucking up in a surprising way. Things usually feel like "red flags" after you're suspicious, rather than before.

You're hopefully learning some real-world patterns via this "reinterpret old data points in a new light" process. But you aren't necessarily training the relevant skills and habits by doing this.

From my perspective, the whole idea that the relevant skillset is specifically about spotting Bad Actors is itself sort of confused. Like, EAs might indeed have too low a prior on bad actors existing; but also, the idea that the world is sharply divided into Fully Good Actors and Fully Bad Actors is part of what protected SBF in the first place! It kept us from doing mundane epistemic accounting before he seemed Bad.

If you're discouraged from just raising a minor local criticism or objection for its own sake — if you need some larger thesis or agenda or axe to grind before it's OK to say "hey wait, I don't get X" — then it will be a lot harder to update incrementally and spot problems early.

(And, incidentally, a lot harder to trust your information sources! EA will inevitably make slower intellectual progress insofar as we don't trust each other to just say what's on our minds like an ordinary group of acquaintances working on a project together, and instead have to try to correct for various agendas or strategies we think the other party might be implementing.)

(Even if nobody's lying, we have to worry about filtered evidence, where people are willing to say X if they believe X but unwilling to say not-X if they believe not-X.)

Suppose that I say: "The mental motions needed to spot SBF's issues early are mostly the same as the mental motions needed to notice when Eliezer's saying something that doesn't seem to make sense, to casually update at least a little against Eliezer's judgment in this domain, and to naively blurt out 'wait, that doesn't currently make sense to me, what about objection X?'"

(Or if you don't have much respect for Eliezer, pick someone you do have respect for — Holden Karnofsky, or Paul Graham, or Peter Singer, or whoever.)

I imagine some people's reaction to that being: "But wait! Are you saying that Eliezer/Holden/whoever is a bad actor?? That seems totally wrong, what about evidence A, B, C, X, Y, Z..."

Which seems to me to be missing the point:

1. The processes required to catch bad actors reliably are often (though not always) similar to the processes required to correct innocent errors by good actors. You do need to also have "bad actor" in your hypothesis space, or you'll be fooled forever even as you keep noting weird data points.
(More concretely, since "bad actor" is vague verbiage: you need to have probability mass on people being liars, promise-breakers, Machiavellian manipulators, etc.)

But in practice, I think most of the problem lies in people not noticing or sharing the data points in the first place. Certainly in SBF's case, I (and I think most EAs) had never even heard any of the red flags about SBF, as opposed to us hearing a ton of flags and trying to explain them away.

So...
