EA - US public perception of CAIS statement and the risk of extinction by Jamie Elsey
The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: US public perception of CAIS statement and the risk of extinction, published by Jamie Elsey on June 22, 2023 on The Effective Altruism Forum.

Summary

On June 2nd-June 3rd 2023, Rethink Priorities conducted an online poll of US adults to assess their views regarding a recent open statement from the Center for AI Safety (CAIS). The statement, which has been signed by a number of prominent figures in the AI industry and AI research communities, as well as other public figures, was:

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

The goal of this poll was to determine the degree to which the American public's views of AI risk align with this statement. The poll covered opinions regarding:

- Agreement/disagreement with the CAIS open statement
- Support for/opposition to the CAIS open statement
- Worry about negative effects of AI
- Perceived likelihood of human extinction from AI by the year 2100

Our population estimates reflect the responses of 2,407 US adults, poststratified to be representative of the US population. See the Methodology section of the Appendix for more information on sampling and estimation procedures.

Key findings

Attitudes towards the CAIS statement were largely positive. A majority of the population supports (58%) and agrees with (59%) the CAIS statement, relative to 22% opposition and 26% disagreement.

Worry about AI remains low. We estimate that most (68%) US adults would say that, at most, they only worry a little bit in their daily lives about the possible negative effects of AI on their lives or society more broadly.
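The excerpt does not include the survey's actual weighting code or strata, but the idea behind poststratification can be sketched simply: each demographic cell's observed response rate is weighted by that cell's known share of the population. The age groups and numbers below are hypothetical, purely for illustration.

```python
# Minimal sketch of a poststratified estimate. All strata, shares, and
# sample means here are hypothetical -- not the survey's actual data.

# Known population share of each stratum (e.g. from census data).
population_shares = {"18-24": 0.12, "25-44": 0.34, "45-64": 0.32, "65+": 0.22}

# Observed proportion supporting the statement within each sampled stratum.
sample_support = {"18-24": 0.48, "25-44": 0.55, "45-64": 0.62, "65+": 0.64}

# Weight each stratum's sample mean by its population share and sum.
estimate = sum(population_shares[g] * sample_support[g] for g in population_shares)
print(round(estimate, 4))  # 0.5838
```

This corrects for a sample whose demographic mix differs from the population's: strata that are over-represented in the raw sample count for less, and under-represented strata count for more.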
This is similar to our estimate in April, when 71% were estimated to have this level of worry.

Public estimates of the chance of extinction from AI are highly skewed, with the most common estimate around 1% but substantially higher medians and means. The median estimate is 15%: half the population would give a probability below 15%, and half above it. The most common response is expected to be around 1%, with 13% of people saying there is no chance. However, the mean estimate for the chance of extinction from AI by 2100 is quite high, at 26%, owing to a long tail of people giving higher ratings. It should be noted that just because respondents provided ratings in the form of probabilities, it does not mean they have a full grasp of the exact likelihoods their ratings imply.

Attitudes towards the CAIS statement

Respondents were presented with the CAIS statement on AI risk and asked to indicate both the extent to which they agreed/disagreed with it and the extent to which they supported/opposed it. We estimate that the US population broadly agrees with (59%) and supports (58%) the statement. Disagreement (26%) and opposition (22%) were relatively low, and sizable proportions of people remained neutral (12% and 18% for the agreement and support formats, respectively).

It is important to note that agreement with or support of this statement may not translate into agreement with or support of more specific policies geared towards actually making AI risk a comparable priority to pandemics or nuclear weapons. People may also support certain concrete actions that serve to mitigate AI risk despite not agreeing that it is of comparable concern to pandemics or nuclear security.

The level of agreement/support appears to vary with age: the youngest age bracket, 18-24, is expected to show the most disagreement with/opposition to the statement.
However, all ages were still expected to show majority support for the statement.

Perceived likelihood of human extinction from AI

We were interested in understanding how likely the public believed the risk of extinction from AI to be. In our previous survey of AI-related attitudes and beliefs, we asked...
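The gap between the report's mode (~1%), median (15%), and mean (26%) is the signature of a right-skewed distribution: a long tail of high answers pulls the mean well above the median. A toy example with hypothetical responses (not the survey's raw data) shows the same pattern:

```python
# Hypothetical extinction-probability responses, in percent -- illustrative
# only, not the survey's actual data. A few very high answers form a long
# right tail.
estimates = [0, 1, 1, 1, 10, 15, 20, 30, 40, 60, 90]

estimates.sort()
median = estimates[len(estimates) // 2]            # middle value of 11 responses
mean = sum(estimates) / len(estimates)             # pulled up by the tail
mode = max(set(estimates), key=estimates.count)    # most common response

print(mode, median, round(mean, 1))  # 1 15 24.4
```

As in the survey, the typical (modal) respondent gives a low number, yet the mean lands far higher, which is why the report cites all three summary statistics rather than one.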