EA - Partial Transcript of Recent Senate Hearing Discussing AI X-Risk by Daniel Eth

The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Partial Transcript of Recent Senate Hearing Discussing AI X-Risk, published by Daniel Eth on July 27, 2023 on The Effective Altruism Forum.

On Tuesday, the US Senate Judiciary Subcommittee on Privacy, Technology, and the Law held a hearing on AI. The hearing involved 3 witnesses - Dario Amodei (CEO of Anthropic), Yoshua Bengio (Turing Award winner, and the second-most cited AI researcher in the world), and Stuart Russell (Professor of CS at Berkeley, and co-author of the standard textbook for AI).

The hearing wound up focusing a surprising amount on AI X-risk and related topics. I originally planned on jotting down all the quotes related to these topics, thinking it would make for a short post of a handful of quotes, which is something I did for a similar hearing by the same subcommittee 2 months ago. Instead, this hearing focused so much on these topics that I wound up with something that's better described as a partial transcript.

All the quotes below are verbatim. Text that is bolded is simply stuff I thought readers might find particularly interesting. If you want to listen to the hearing, you can do so here (it's around 2.5 hours). You might also find it interesting to compare this post to the one from 2 months ago, to see how the discourse has progressed.

Opening remarks

Senator Blumenthal:

What I have heard [from the public after the last AI hearing] again and again and again, and the word that has been used so repeatedly is 'scary.' 'Scary.' What rivets [the public's] attention is the science-fiction image of an intelligence device, out of control, autonomous, self-replicating, potentially creating diseases - pandemic-grade viruses, or other kinds of evils, purposely engineered by people or simply the result of mistakes. And, frankly, the nightmares are reinforced in a way by the testimony that I've read from each of you.

I think you have provided objective, fact-based views on what the dangers are, and the risks and potentially even human extinction - an existential threat which has been mentioned by many more than just the three of you, experts who know first hand the potential for harm. But these fears need to be addressed, and I think can be addressed through many of the suggestions that you are making to us and others as well.

I've come to the conclusion that we need some kind of regulatory agency, but not just a reactive body... actually investing proactively in research, so that we develop countermeasures against the kind of autonomous, out-of-control scenarios that are potential dangers: an artificial intelligence device that is in effect programmed to resist any turning off, a decision by AI to begin nuclear reaction to a nonexistent attack.

The White House certainly has recognized the urgency with a historic meeting of the seven major companies, which made eight profoundly significant commitments... but it's only a start. The urgency here demands action.

The future is not science fiction or fantasy - it's not even the future, it's here and now. And a number of you have put the timeline at 2 years before we see some of the most severe biological dangers. It may be shorter, because the pace of development is not only stunningly fast, it is also accelerating at a stunning pace, because of the quantity of chips, the speed of chips, the effectiveness of algorithms. It is an inexorable flow of development.

Building on our previous hearing, I think there are core standards that we are building bipartisan consensus around. And I welcome hearing from many others on these potential rules:

Establishing a licensing regime for companies that are engaged in high-risk AI development;

A testing and auditing regimen by objective 3rd parties, or preferably by the new entity that we will establish;

Imposing legal limits on certain uses related to elections, related to...
