EA - Correctly Calibrated Trust by ChanaMessinger

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Correctly Calibrated Trust, published by ChanaMessinger on June 24, 2023 on The Effective Altruism Forum.

This post comes from finding out that Asya Bergal was having thoughts about this and was maybe going to write a post, thoughts I was having along similar lines, and a decision to combine energy and use the strategy fortnight as an excuse to get something out the door. A lot of this is written out of notes I took from a call with her, so she gets credit for a lot of the concrete examples and the impetus for writing a post shaped like this. Interested in whether this resonates with people's experience!

Short version:

[Just read the bold to get a really short version]

There's a lot of "social sense of trust" in EA, in my experience. There's a feeling that people, organizations and projects are broadly good and reasonable (often true!) that's based on a combination of general vibes, EA branding and a few other specific signals of approval, as well as an absence of negative signals. I think it's likely common to overweight those signals of approval and the absence of disapproval.

Especially post-FTX, I'd like us to be well calibrated on what the vague intuition we download from the social web is telling us, and to place trust wisely. ["Trust" here is a fuzzy and under-defined thing that I'm not going to nail down - I mean something like a general sense that things are fine and going well.]

Things like getting funding, being highly upvoted on the forum, being on podcasts, being high status and being EA-branded are fuzzy and often poor proxies for trustworthiness and for relevant people's views on the people, projects and organizations in question.

Negative opinions (anywhere from "that person is not so great" to "that organization is potentially quite sketch, but I don't have any details") are not necessarily that likely to find their way to any given person, for a bunch of reasons, and we don't have great solutions for collecting and acting on character evidence that doesn't come along with specific bad actions. It's easy to overestimate what you would know if there were a bad thing to know.

If it's decision-relevant or otherwise important to know how much to trust a person or organization, I think it's a mistake to rely heavily on the above indicators, or on the "general feeling" in EA. Instead, get data if you can, and ask relevant people their actual thoughts - you might find them surprisingly out of step with what the vibe would indicate.

I'm pretty unsure what we can or should do as a community about this, but I have a few thoughts at the bottom, and having a post about it as something to point to might help.

Longer version:

I think you'll get plenty out of this if you read the headings, and read more under each heading if something piques your curiosity.

Part 1: What fuzzy proxies are people using and why would they be systematically overweighted?

(I don't know how common these mistakes are, or whether they apply to you, the specific reader of this post. I expect them to bite harder if you're newer or less connected, but I also expect that it's easy to be somewhat biased in the same directions even if you have a lot of context. I'm hoping this serves as contextualization for the former and a reminder / nudge for the latter.)

Getting funding from OP and LTFF

It seems easy to expect that if someone got funding from Open Phil or the Long Term Future Fund, that's a reasonable signal about the value of their work or the competence, trustworthiness or other virtues of the person running it. It obviously is Bayesian evidence, but I expect this to be extremely noisy. These organisations engage in hits-based philanthropy - as I understand it, they don't expect most of the grants they make to be especially valuable (but the amount and way this is true varies by funder - Linch describes...