SCHEDULE
Application period: November 18, 2024 – January 20, 2025
Question and Answer period: Questions due November 25, 2024; responses posted by December 9, 2024
Eligibility and Review period: January 21, 2025 – February 28, 2025
Award Notifications: April 2025
ELIGIBILITY
Specific qualifications vary between the Biosecurity and Cybersecurity opportunities; applicants should carefully review the qualification and eligibility requirements to ensure their application is eligible for review.
The AISF makes grants to independent researchers across the globe affiliated with academic institutions, research institutions, NGOs, and social enterprises who aim to promote the safe and responsible development of frontier models by testing, evaluating, and/or addressing safety and security risks. The AISF seeks to fund research that accelerates the identification of threats posed by the development and use of AI, in order to prevent widespread harm.
The AISF is unable to award grants in certain countries due to applicable U.S. sanctions. The U.S. Department of the Treasury’s Office of Foreign Assets Control (OFAC) imposes restrictions on services and transactions with individuals or entities located in countries subject to comprehensive U.S. sanctions. As a result, the AISF cannot provide grants to:
- Cuba
- Iran
- North Korea
- Syria
- Russia
- Regions of Ukraine: Crimea, Donetsk, Luhansk
- Belarus
Please note: The list of sanctioned countries and activities may change at any time without prior notice, in accordance with OFAC regulations.
EVALUATION CRITERIA
The AISF will engage third-party technical reviewers to evaluate proposals that meet the eligibility requirements. Some proposals may be reviewed, in part, by representatives of the Firms if there is a need to verify the feasibility of a proposal. Only information relevant to determining feasibility will be shared with industry partners.
All proposals will be reviewed against the eight criteria below. Specific considerations for Biosecurity and Cybersecurity are outlined in the RFPs.
1. Impact: the potential of the research to improve safety measures for deployed frontier AI models.
2. Feasibility of the proposed research.
3. Relevance of the proposed research to the field of AI safety.
4. Peer review: the research team’s approach to engaging the broader research community for feedback on the research findings.
5. Technical qualifications of the research team, and their relevance to the domain under which the application is submitted.
6. Ethics of the research, including the specific safety protocols that will be employed to manage immediate and long-term risk implications of the research findings.
7. Equity: how the research project will support advancing equity and diversity in the field.
8. Accessibility of the research findings to promote transparency in the field. Researchers should also consider potential risk or harm that could result from the research findings, and provide justification for limiting access to them.