EA - "No-one in my org puts money in their pension" by tobyj

The Nonlinear Library: EA Forum - Ein Podcast von The Nonlinear Fund

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "No-one in my org puts money in their pension", published by tobyj on February 16, 2024 on The Effective Altruism Forum.

Epistemic status: the stories here are all as true as possible from memory, but my memory is so-so.

This is going to be big

It's late summer 2017. I am on a walk in the Mendip Hills. It's warm and sunny and the air feels fresh. With me are around 20 other people from the Effective Altruism London community. We've travelled west for a retreat to discuss how to help others more effectively with our donations and careers. As we cross cow field after cow field, I get talking to one of the people from the group I don't know yet. He seems smart and cheerful. He tells me that he is an AI researcher at Google DeepMind. He explains how he is thinking about how to make sure that any powerful AI system actually does what we want it to.

I ask him if we are going to build artificial intelligence that can do anything that a human can do. "Yes, and soon," he says. "And it will be the most important thing that humanity has ever done."

I find this surprising. It would be very weird if humanity was on the cusp of the most important world-changing invention ever, and so few people were seriously talking about it. I don't really believe him.

This is going to be bad

It is mid-summer 2018 and I am cycling around Richmond Park in South West London. It's very hot and I am a little concerned that I am sweating off all my sun cream. After having many other surprising conversations about AI, like the one I had in the Mendips, I have decided to read more about it. I am listening to an audiobook of Superintelligence by Nick Bostrom. As I cycle in loops around the park, I listen to Bostrom describe a world in which we have created superintelligent AI. He seems to think the risk that this will go wrong is very high.
He explains how scarily counterintuitive the power of an entity that is vastly more intelligent than a human is. He talks about the concept of "orthogonality": the idea that there is no intrinsic reason that the intelligence of a system is related to its motivation to do things we want (e.g. not kill us). He talks about how power-seeking is useful for a very wide range of possible goals. He also talks through a long list of ways we might try to avoid it going very wrong. He then spends a lot of time describing why many of these ideas won't work.

I wonder if this is all true. It sounds like science fiction, so while I notice some vague discomfort with the ideas, I don't feel that concerned. I am still sweating, and am quite worried about getting sunburnt.

It's a long way off though

It's still summer 2018 and I am in an Italian restaurant in West London. I am at an event for people working in policy who want to have more impact. I am talking to two other attendees about AI. Bostrom's arguments have now been swimming around my mind for several weeks. The book's subtitle is "Paths, Dangers, Strategies" and I have increasingly been feeling the weight of the middle one. The danger feels like a storm. It started as vague clouds on the horizon and is now closing in. I am looking for shelter.

"I just don't understand how we are going to set policy to manage these things," I explain. I feel confused and a little frightened. No-one seems to have any concrete policy ideas. But my friend chimes in to say that while yeah, there's a risk, it's probably pretty small and far away at this point. "Experts think it'll take at least 40 more years to get really powerful AI," she explains. "There is plenty of time for us to figure this out." I am not totally reassured, but the clouds retreat a little.

This is fine

It is late January 2020 and I am at after-work drinks in a pub in Westminster. I am talking to a few colleagues about the news. One of my colleagues, an accomplished government ec...
