EA - An appeal to people who are smarter than me: please help me clarify my thinking about AI by bethhw
The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An appeal to people who are smarter than me: please help me clarify my thinking about AI, published by bethhw on August 6, 2023 on The Effective Altruism Forum.

Hi,

As a disclaimer, this will not be as eloquent or well-informed as most of the other posts on this forum. I'm something of an EA lurker who has a casual interest in philosophy but is wildly out of her intellectual depth on this forum 90% of the time. I'm also somewhat prone to existential anxiety and have a tendency to become hyper-fixated on certain topics - and recently had the misfortune of falling down the AI safety internet rabbit hole.

It all started when I used ChatGPT for the first time and began to worry that I might lose my (content writing) job to a chatbot. My company then convened a meeting where they reassured us all that despite recent advances in AI, they would continue taking a human-led approach to content creation 'for now' (which wasn't as comforting as they probably intended).

In a move I now somewhat regret, I decided my best bet would be to find out as much about the topic as I could. This was around the time that Geoffrey Hinton stepped down from Google, so the first thing I encountered was one of his media appearances. This quickly updated me from 'what if AI takes my job' to 'what if AI kills me'. I was vaguely familiar with the existential-risk-from-AI scenarios already, but had considered them far enough in the future not to really worry about.

In looking for less bleak perspectives than Hinton's, I managed to find the exact opposite (i.e. that Bankless episode with Eliezer Yudkowsky). From there I was introduced to a whole cast of similarly pessimistic AI researchers predicting the imminent extinction of humanity with all the confidence of fundamentalist Christians awaiting the rapture (I'm sure I don't have to name them here - also, I apologise if any of you reading this are the aforementioned researchers; I don't mean this to be disparaging in any way - this was just my first impression as one of the uninitiated).

I'll be honest and say that I initially thought I'd stumbled across some kind of doomsday cult. I assumed there must be some more moderate expert consensus that the more extreme doomers were diverging from. I spent a good month hunting for the well-established body of evidence projecting a more mundane, steady improvement of technology, where everything in 10 years would be kinda like now but with more sophisticated LLMs and an untold amount of AI-generated spam clogging up the internet. Hours were spent scanning think-pieces and news reports for the magic words 'while a minority of researchers expect worst-case scenarios, most experts believe...'. But 'most experts' were nowhere to be found.

The closest I could find to a reasonably large sample size was that 2022 (?) survey that gave rise to the much-repeated statistic about half of ML researchers placing a >10% chance on extinction from AI. If anything, that survey seemed reassuring, because the median probability was something around 5%, as opposed to the >50% estimated by the most prominent safety experts.
There was also the recent XPT forecasting contest, which, again, produced generally low p(doom) estimates and seemed to leave most people quibbling over the fact that domain experts were assigning single-digit probabilities to AI extinction, while superforecasters thought the odds were below 1%. I couldn't help but think that these seemed like strange differences of opinion to be focused on, when you don't need to look far to find seasoned experts who are convinced that AI doom is all but inevitable within the next few years.

I now find myself in a place where I spend every free second scouring the internet for the AGI timelines and p(doom) estimates of anyone who sounds vaguely credible. I'm not ashamed t...