EA - AI Safety Ideas: An Open AI Safety Research Platform by Apart Research
The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety Ideas: An Open AI Safety Research Platform, published by Apart Research on October 17, 2022 on The Effective Altruism Forum.

TL;DR: We present the AI safety ideas and research platform AI Safety Ideas in open alpha. Add and explore research ideas on the website here: aisafetyideas.com.

AI Safety Ideas has been accessible for a while in an alpha state (4 months of on-and-off development), and we now publish it in open alpha to receive feedback and develop it continuously with the community of researchers and students in AI safety. All of the projects are either from public sources (e.g. AlignmentForum posts) or posted on the website itself. The current website represents the first steps towards an accessible, crowdsourced research platform for easier research collaboration and hypothesis testing.

The gap in AI safety

Research prioritization & development

Research prioritization is hard, and even more so in a pre-paradigmatic field like AI safety. We can grok the highest-karma posts on the AlignmentForum, but is there another way? With AI Safety Ideas, we introduce a collaborative way to prioritize and work on specific agendas together through social features. We hope this can become a scalable research platform for AI safety. Successful examples of less systematized but similar collaborative, online, high-quality-output projects can be seen in Discord servers such as EleutherAI, CarperAI, Stability AI, and Yannic Kilcher's Discord, in hackathons, and in competitions such as the inverse scaling competition.

Additionally, we are missing an empirically driven impact evaluation of AI safety projects. With the next steps of development described further down, we hope to make this easier and more available while facilitating more iteration in AI safety research.
Systemized hypothesis testing with bounties can help funders directly fund specific results and enables open evaluation of agendas and research projects.

Mid-career & student newcomers

Novice and entrant participation in AI safety research is mostly present in two forms at the moment: 1) active or passive part-time course participation with a capstone project (AGISF, ML Safety) and 2) flying to London or Berkeley for three months to participate in full-time paid studies and research (MLAB, SERI MATS, PIBBSS, Refine). Both are highly valuable, but a third option seems to be missing: 3) an accessible, scalable, low-time-commitment, open research opportunity.

Very few people work in AI safety, and allowing decentralized, volunteer- or bounty-driven research will let many more contribute to this growing field. By offering this flexible research opportunity, we can attract people who cannot participate in option (2) because of visa, school / life / work commitments, location, rejection, or funding, while attracting a more senior and active audience than option (1).

Next steps

Oct: Releasing and building up the user base and crowdsourced content. Create an insider build to test beta features. Apply to join the insider build here.
Nov: Implementing hypothesis testing features: creating hypotheses, linking ideas and hypotheses, adding negative and positive results to hypotheses. Creating an email notification system.
Dec: Collaboration features: contact others interested in the same idea and mentor ideas. A better commenting system with a results comment that can indicate whether the project has been finished, what the results are, and by whom it was done.
Jan: Adding moderation features: accepting results, moderating hypotheses, admin users. Add bounty features for the hypotheses and a simple user karma system.
Feb: Share with ML researchers and academics in EleutherAI and CarperAI. Implement the ability to create special pages with specific private and public ideas curated for a specific purpose (title and desc...
