EA - Critiques of prominent AI safety labs: Conjecture by Omega

The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Critiques of prominent AI safety labs: Conjecture, published by Omega on June 12, 2023 on The Effective Altruism Forum. Crossposted to LessWrong.

In this series, we consider AI safety organizations that have received more than $10 million per year in funding. There have already been several conversations and critiques around MIRI (1) and OpenAI (1, 2, 3), so we will not be covering them. The authors include one technical AI safety researcher (>4 years of experience) and one non-technical community member with experience in the EA community. We would prefer to make our critiques non-anonymously, but we do not believe that would be wise professionally. We believe our criticisms stand on their own without appeal to our positions. Readers should not assume that we are completely unbiased or that we have nothing to gain personally or professionally from publishing these critiques. We have tried to weigh the benefits and drawbacks of the anonymous nature of our post seriously and carefully, and we are open to feedback on anything we might have done better.

This is the second post in this series, and it covers Conjecture. Conjecture is a for-profit alignment startup founded in late 2021 by Connor Leahy, Sid Black, and Gabriel Alfour, which aims to scale applied alignment research. Based in London, Conjecture has received $10 million in funding from venture capitalists (VCs) and recruits heavily from the EA movement. We shared a draft of this document with Conjecture for feedback prior to publication, and we include their response below. We also requested feedback on a draft from a small group of experienced alignment researchers from various organizations, and have invited them to share their views in the comments of this post.

We invite others to share their thoughts openly in the comments if they feel comfortable, or to contribute anonymously via this form. We will add inputs from the form to the comments section of this post, but we will likely not update the main body of the post as a result (unless comments catch errors in our writing).

Key Takeaways

For those with limited knowledge of and context on Conjecture, we recommend first reading or skimming the About Conjecture section. The core sections (Criticisms & Suggestions and Our views on Conjecture) take about 22 minutes to read.

Criticisms and Suggestions

- We think Conjecture's research is low quality (read more). Their posts don't always make their assumptions clear or state what evidence base they have for a given hypothesis, and evidence is frequently cherry-picked. We also think their bar for publishing is too low, which decreases the signal-to-noise ratio. Conjecture has acknowledged some of these criticisms, but not all (read more).
- We make specific critiques of examples of their research from their initial research agenda (read more).
- There is limited information available on their new research direction (cognitive emulation), but from the publicly available information it appears extremely challenging, so we are skeptical of its tractability (read more).
- We have some concerns about the CEO's character and trustworthiness because, in order of importance (read more):
  - The CEO and Conjecture have misrepresented themselves to external parties multiple times (read more);
  - The CEO's involvement in EleutherAI and Stability AI has contributed to race dynamics (read more);
  - The CEO previously overstated his accomplishments in 2019 (when he was an undergraduate) (read more);
  - The CEO has been inconsistent over time regarding his position on releasing LLMs (read more).
- We believe Conjecture has scaled too quickly, before demonstrating promising research results, and that this will make it harder for them to pivot in the future (read more).
- We are concerned that Conjecture does not have a clear plan for balancing profit an...
