What is it like doing AI safety work? by Kat Woods
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What is it like doing AI safety work?, published by Kat Woods on February 21, 2023 on The Effective Altruism Forum.

How do you know if you'll like AI safety work? What's the day-to-day work like? What are the best parts of the job? What are the worst?

To better answer these questions, we talked to ten AI safety researchers across a variety of organizations, roles, and subfields. If you're interested in getting into AI safety research, we hope this helps you be better informed about what pursuing a career in the field might entail.

The first section covers what people do day-to-day; the second describes each person's favorite and least favorite aspects of the job.

Of note, the people we talked with are not a random sample of AI safety researchers, and it is also important to consider the effects of survivorship bias. However, we still think it's useful and informative to hear about their day-to-day lives and what they love and hate about their jobs.

Also, these interviews were done about a year ago, so they may no longer represent what the researchers are currently doing.

Reminder that you can listen to LessWrong and EA Forum posts like this on your podcast player using the Nonlinear Library.

This post is part of a project I've been working on at Nonlinear. You can see the first part of the project here, where I explain the different ways people got into the field.

What do people do all day?

John Wentworth

John describes a few different categories of days.

He sometimes spends a day writing a post; this usually takes about a day if the ideas are already developed.

He might spend a day responding to comments on posts or talking to people about ideas. This can be a bit of a chore, but it is also necessary and useful.

He might spend a day doing theoretical work. For example, if he's stuck on a particular problem, he can spend a day working in a notebook or at a whiteboard: going over ideas, trying out formulas and setups, and trying to make progress on that problem.

Over the past month he has started working with David Lorell. David is a more active version of the programmer's "rubber duck": as John thinks through the math on a whiteboard, he explains to David what's going on, and David asks for clarifications, examples, how things tie into the bigger picture, why X did or didn't work, and so on. John estimates that this has increased his productivity at theoretical work by a factor of somewhere between 2 and 5.

Ondrej Bajgar

Ondrej starts the day by cycling to the office. He has breakfast there and tries to spend as much time as possible at a whiteboard, away from his computer, getting into a deep-thinking mindset where the answers aren't all readily available. Ideally, mornings are completely free of meetings and reserved for this deep-thinking work.

Deep thinking involves a lot of zooming in and out: working on sub-goals while zooming out every half hour or so to check them against the higher-level goal. He alternates between trying to make progress and reflecting on how it is actually going, which helps him avoid getting derailed by work that is cognitively demanding but unproductive.

Once an idea is mostly formed, he'll try to implement it in code. Seeing things in action can reveal aspects you wouldn't get from the theory alone.
But he also says it's important not to get caught in the trap of writing code, which can feel fun and productive even when it isn't that useful.

Scott Emmons

Scott talked about a few different categories of day-to-day work:
- Research, which involves brainstorming, programming, writing and communicating, and collaborating with people
- Reading papers to stay up to date with the literature
- Administrative work
- Service, such as giving advice to undergrads, talking about AI safety, and reviewing other...
