EA - Why some people disagree with the CAIS statement on AI by David Moss

The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why some people disagree with the CAIS statement on AI, published by David Moss on August 15, 2023 on The Effective Altruism Forum.

Summary

Previous research from Rethink Priorities found that a majority of the US population (59%) agreed with a statement from the Center for AI Safety (CAIS): "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." 26% of the population disagreed with this statement. This piece reports further qualitative research analyzing that opposition in more depth.

The most commonly mentioned theme among those who disagreed with the CAIS statement was that other priorities were more important (mentioned by 36% of disagreeing respondents), with climate change mentioned particularly often. This theme occurred markedly more often among younger disagreeing respondents (43.3%) than among older disagreeing respondents (27.8%).

The next most commonly mentioned theme was rejection of the idea that AI would cause extinction (23.4%), though some of these respondents agreed that AI may pose other risks.

Another common theme was the idea that AI is not yet a threat, though it might become one in the future. This theme frequently co-occurred with the 'Other priorities' theme, with many respondents arguing that other threats were more imminent.

Less commonly mentioned themes included the idea that AI would be under our control (8.8%) and so would not pose a threat, and the idea that AI is not capable of causing harm because it is not sentient, sophisticated, or autonomous (5%).

Introduction

Our previous survey on US public perception of the CAIS statement on AI risk found that a majority of Americans agreed with the statement (59%), while a minority (26%) disagreed. To better understand why individuals might disagree with the statement, we ran an additional survey in which we asked a new sample of respondents whether they agreed or disagreed with the statement, and then asked them to explain why. We then coded the responses of those who disagreed to identify major recurring themes in people's comments. We did not formally analyze comments from those who did not disagree with the statement, though we may do so in a future report.

Since responses to this question might reflect reactions to the specifics of the statement, rather than more general reactions to the idea of AI risk, it may be useful to review the statement before reading the results: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Common themes

This section outlines the most commonly recurring themes. In a later section of this report we discuss each theme in more detail and provide examples of each. When interpreting these percentages, it is important to remember that they are percentages of the 31.2% of respondents in this survey who disagreed with the statement, not of all respondents (for example, the 36% who cited other priorities amount to roughly 0.36 × 31.2% ≈ 11% of the full sample).

The dominant theme, by quite a wide margin, was the claim that 'Other priorities' were more important, mentioned by 36% of disagreeing respondents. The next most common theme was 'Not extinction', mentioned in 23.4% of responses, in which respondents simply asserted that they did not believe AI would cause extinction.
The third most commonly mentioned theme was 'Not yet', in which respondents claimed that AI is not yet a threat or something to worry about. The 'Other priorities' and 'Not yet' themes frequently co-occurred, mentioned together by 7.9% of respondents, more than any other combination. Among the less commonly mentioned themes was 'Control', the idea that AI could not be a threat because it would inevitably be under our...
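Throughout, theme percentages are shares of disagreeing respondents, and co-occurrence is the share whose response was coded with both themes. The report does not describe its tabulation in code; purely as an illustrative sketch, not the authors' actual analysis, the kind of counting involved could be done as follows, with entirely hypothetical coded data and theme labels:

from collections import Counter
from itertools import combinations

# Hypothetical toy data: each disagreeing respondent's set of coded themes.
coded_responses = [
    {"other_priorities"},
    {"other_priorities", "not_yet"},
    {"not_extinction"},
    {"control"},
    {"not_yet", "other_priorities"},
    {"not_extinction", "not_capable"},
]

n = len(coded_responses)

# Percentage of disagreeing respondents mentioning each theme.
theme_counts = Counter(theme for themes in coded_responses for theme in themes)
for theme, count in theme_counts.most_common():
    print(f"{theme}: {100 * count / n:.1f}%")

# Percentage mentioning each pair of themes together (co-occurrence).
pair_counts = Counter(
    pair
    for themes in coded_responses
    for pair in combinations(sorted(themes), 2)
)
for pair, count in pair_counts.most_common():
    print(f"{pair[0]} + {pair[1]}: {100 * count / n:.1f}%")

Because each respondent's themes are stored as a set, a respondent who mentions a theme repeatedly is still counted once, which matches how per-respondent percentages like 36% or 7.9% are reported.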
