EA - Do better, please ... by Rohit is a Strange Loop

The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Do better, please ..., published by Rohit is a Strange Loop on January 15, 2023, on The Effective Altruism Forum.

I am not a card-carrying member of EA. I am not particularly A, much less E, in that context. But the past few months have been exhausting: watching a community I like in turmoil again and again, while it clearly fumbles basic aspects of how it is seen in the wider world. I like having EA in the world; I think it does a lot of good. And I think you guys are literally throwing it away over the aesthetics of misguided epistemic virtue signaling. But it's late, I've read more than a few articles, and this post is me begging you to please just stop.

The specific prompt here is, of course, the Bostrom incident, in which he clearly and highly legibly wrote that black people have lower intelligence than other races. And his apology was, to put it mildly, mealy-mouthed and without much substance. If anything, in the intervening 25 years since the offending email, all he seems to have learnt is to forget the one thing he said he wanted to do: to speak plainly.

I'm not here to litigate race science. There is plenty of well-reviewed science in the field demonstrating that, variously, there are problems with the measurement of both race and intelligence, to say nothing of how they evolve over time, catch-up speeds, and a truly dizzying array of confounders. I can easily imagine that if you're young and not particularly interested in this space you'd hold a variety of views; what is silly is seeing someone so clearly in a position of authority, with a reputation for careful consideration and truth-seeking, maintain this kind of view.

And not only is this just wrong, it's counterproductive.

If EA wants to work on the most important problems in the world and make progress on them, it would be useful to have the world look upon you with trust. For anything more than turning money into malaria nets, you need people to trust you. And that includes trusting your intentions and your character.

If you believe there are racial differences in intelligence, and your work requires you to tackle the hard problems of resource allocation or longtermist societal evolution, nobody will trust you to make the right tradeoffs. History is filled with optimisation experiments gone horribly wrong when these beliefs sat at the bottom. The base rate of horrible outcomes is uncomfortably high.

This is human values misalignment. Unless you have overwhelming evidence (or any real evidence), this is just a dumb prior to hold and publicise if you're working on actively changing people's lives. I don't care what you think about the ethics of sentient digital life in the future if you can't figure this out today.

Again, all of which individually is fine. I'm an advocate of people holding crazy opinions should they want to. But when something like a third of the community seems to support him, and the defenses require contortions that agree, dismiss, and generally whine about the drama, that's ridiculous. While I appreciate posts like this, which speak about the importance of epistemic integrity, they seem to miss the fact that applauding someone for not lying is great, but not if the belief they're holding is bad.
And even if this blows over, it will remain a drag on EA unless it's addressed unequivocally.

Or this type of comment, which uses a lot of words but effectively seems to support the same thought: that no, our job is to differentiate QALYs, and therefore differences are part of life.

But guess what: epistemic integrity on something like this ("I believe something pretty reprehensible and am not being cowed by people telling me so") isn't going to help with shrimp welfare or AI risk prevention. Or even with malaria net provision. Do not mistake "sticking with your beliefs" for an overriding good, above believing w...
