EA - Correctly Calibrated Trust by ChanaMessinger
The Nonlinear Library: EA Forum - A Podcast by The Nonlinear Fund

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Correctly Calibrated Trust, published by ChanaMessinger on June 24, 2023 on The Effective Altruism Forum.

This post comes from finding out that Asya Bergal was having thoughts about this and was maybe going to write a post, thoughts I was having along similar lines, and a decision to combine energy and use the strategy fortnight as an excuse to get something out the door. A lot of this is written out of notes I took from a call with her, so she gets credit for a lot of the concrete examples and the impetus for writing a post shaped like this.

Interested in whether this resonates with people's experience!

Short version:

[Just read the bold to get a really short version]

There's a lot of "social sense of trust" in EA, in my experience. There's a feeling that people, organizations and projects are broadly good and reasonable (often true!) that's based on a combination of general vibes, EA branding and a few other specific signals of approval, as well as an absence of negative signals. I think that it's likely common to overweight those signals of approval and the absence of disapproval.

Especially post-FTX, I'd like us to be well calibrated on what the vague intuition we download from the social web is telling us, and place trust wisely.

["Trust" here is a fuzzy and under-defined thing that I'm not going to nail down - I mean here something like a general sense that things are fine and going well]

Things like getting funding, being highly upvoted on the forum, being on podcasts, being high status and being EA-branded are fuzzy and often poor proxies for trustworthiness and for relevant people's views on the people, projects and organizations in question.

Negative opinions (anywhere from "that person not so great" to "that organization potentially quite sketch, but I don't have any details") are not necessarily that likely to find their way to any given person for a bunch of reasons, and we don't have great solutions to collecting and acting on character evidence that doesn't come along with specific bad actions. It's easy to overestimate what you would know if there's a bad thing to know.

If it's decision relevant or otherwise important to know how much to trust a person or organization, I think it's a mistake to rely heavily on the above indicators, or on the "general feeling" in EA. Instead, get data if you can, and ask relevant people their actual thoughts - you might find them surprisingly out of step with what the vibe would indicate.

I'm pretty unsure what we can or should do as a community about this, but I have a few thoughts at the bottom, and having a post about it as something to point to might help.

Longer version:

I think you'll get plenty out of this if you read the headings and read more under each heading if something piques your curiosity.

Part 1: What fuzzy proxies are people using and why would they be systematically overweighted?

(I don't know how common these mistakes are, or that they apply to you, the specific reader of the post. I expect them to bite harder if you're newer or less connected, but I also expect that it's easy to be somewhat biased in the same directions even if you have a lot of context.
I'm hoping this serves as contextualization for the former and a reminder / nudge for the latter.)

Getting funding from OP and LTFF

Seems easy to expect that if someone got funding from Open Phil or the Long Term Future Fund, that's a reasonable signal about the value of their work or the competence or trustworthiness or other virtues of the person running it. It obviously is Bayesian evidence, but I expect this to be extremely noisy.

These organisations engage in hits-based philanthropy - as I understand it, they don't expect most of the grants they make to be especially valuable (but the amount and way this is true varies by funder - Linch describes...