EA - Longtermism Fund: August 2023 Grants Report by Michael Townsend
The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Longtermism Fund: August 2023 Grants Report, published by Michael Townsend on August 20, 2023 on The Effective Altruism Forum.

Introduction

In this grants report, the Longtermism Fund team is pleased to announce that the following grants have been recommended by Longview and are in the process of being disbursed:

Two grants promoting beneficial AI:
- Supporting AI interpretability work conducted by Martin Wattenberg and Fernanda Viegas at Harvard University ($110,000 USD)
- Funding AI governance work conducted by the evaluations project at the Alignment Research Center ($220,000 USD)

Two biosecurity and pandemic prevention grants:
- Supporting a specific project at NTI | Bio working to disincentivize biological weapons programmes ($100,000 USD)
- Partially funding the salary of a Director of Research and Administration for the Center for Communicable Disease Dynamics ($80,000 USD)

One grant improving nuclear security:
- Funding a project by the Carnegie Endowment for International Peace to better understand and advocate for policies that avoid escalation pathways to nuclear war ($52,000 USD)

This report provides information on what the grants will fund and why they were made. It was written by Giving What We Can, which is responsible for the Fund's communications. Longview Philanthropy is responsible for the Fund's research and grantmaking.

We would also like to acknowledge and apologise for this report being released two months later than we would have liked, in part due to delays in disbursing these grants. In future, we will aim to take potential delays into account so that we can better keep to our target of releasing a report once every six months.

Scope of the Fund

These grants were decided through the general grantmaking process outlined in our previous grants report and the Fund's launch announcement. As a quick summary, the Fund supports work that:
- Reduces existential and catastrophic risks, such as those from misaligned artificial intelligence, pandemics, and nuclear war.
- Promotes, improves, and implements key longtermist ideas.

In addition, the Fund focuses on organisations with a compelling and transparent case for their cost-effectiveness, and/or that will benefit from being funded by a large number of donors. Longview Philanthropy decides the grants and allocations based on its past and ongoing work evaluating organisations in this space.

Grantees

AI interpretability work at Harvard University - $110,000

This grant supports Martin Wattenberg and Fernanda Viegas in developing their AI interpretability work at Harvard University. It aims to fund research that enhances our understanding of how modern AI systems function; better understanding how these systems work is among the more straightforward ways we can ensure they are safe. Profs. Wattenberg and Viegas have a strong track record (both have excellent references from other experts), and their future plans are likely to advance the interpretability field.

Longview: "We recommended a grant of $110,000 to support Martin Wattenberg and Fernanda Viegas' interpretability work on the basis of excellent reviews of their prior work. These funds will go primarily towards setting up a compute cluster and hiring graduate students or possibly postdoctoral fellows."

Learn more about this grant.

ARC Evals - $220,000

The evaluations project at the Alignment Research Center ("ARC Evals") works on "assessing whether cutting-edge AI systems could pose catastrophic risks to civilization." ARC Evals is contributing to the following AI governance approach:
- Before a new large-scale system is released, assess whether it is capable of potentially catastrophic activities.
- If so, require strong guarantees that the system will not carry out such activities.

ARC Evals wor...