Update on the AISF Grantmaking and Upcoming Funding Opportunity
11 November 2024 | AISF Communications
Announcing the First Disbursements of the AI Safety Fund
The Artificial Intelligence Safety Fund (the AISF) is excited to announce that the first grant awards have been issued to AI safety researchers. The AISF invited grant proposals for research on novel methods to evaluate frontier models. Twelve grantees across four countries – the United States, the United Kingdom, South Africa, and Switzerland – received funding. Grants range from USD 150,000 to USD 400,000, with total disbursements of more than USD 3 million. Brief introductions to the grantees and their work are provided at the bottom of this announcement.
Solicitation, Evaluation, and Selection of First AISF Grants
The AISF awarded its first round of grants through a targeted solicitation, inviting selected applicants to submit proposals. The AISF partnered with a diverse group of expert reviewers, including both technical experts from industry and independent third-party researchers, to ensure a comprehensive evaluation of submitted proposals. Each proposal was rigorously assessed for the quality and feasibility of the proposed research, with careful attention given to any potential risks that could emerge from the results. Guided by expert advice, the AISF curated a well-rounded portfolio and selected twelve projects for funding, strategically targeting critical areas in AI safety such as biological safety evaluations. Grants were awarded in summer 2024, and research findings will be made publicly available, provided that public release does not introduce information hazards or pose related safety concerns.
The AISF Is Welcoming Proposals for the Next Round of Grants
In November 2024, the AISF will welcome a second round of research proposals from qualified researchers. The next round of funding will address priority research needs identified by AISF Funders, including methodologies to address biosecurity and cybersecurity risks; additional topics may be added in 2025. Requests for proposals will be posted to the AISF website in November 2024, applications will be due in January 2025, and successful applicants will be notified by April 2025. The AI Safety Fund supports research on state-of-the-art, general-purpose AI models, and funding will be awarded only to projects researching deployed versions of those models. We welcome researchers to share information about their work on the safety of frontier AI models here. If you would like to be added to our mailing list, please fill out the Get in Touch! form on our website.
About the AI Safety Fund
The AI Safety Fund (AISF) is an initiative of more than $10 million, born from a collaborative vision of leading AI developers and philanthropic partners. Initial funding for the AI Safety Fund came from leading technology companies Anthropic, Google, Microsoft, and OpenAI, as well as from philanthropic partners the Patrick J. McGovern Foundation, the David and Lucile Packard Foundation, Schmidt Sciences, and Jaan Tallinn. The AISF works in close collaboration with the Frontier Model Forum.
Administered independently by Meridian Prime, the AISF awards research grants to independent researchers to address some of the most critical safety risks associated with the increasingly widespread use of frontier AI systems. The AISF recognizes that industry leaders are uniquely positioned to identify high-priority research needs that promote the safe and secure deployment of AI. The Funders and partners have served as thoughtful advisors, guiding the fund toward the most compelling needs in AI safety research.
The purpose of the fund is to support and expand the field of AI safety research to promote the responsible development of frontier models, minimize risks, and enable independent, standardized evaluations of capabilities and safety. We seek to attract and support the brightest minds across the AI ecosystem to advance frontier models in alignment with human values.