
Funding Opportunities

The AISF is now accepting grant applications for research funding.

The AI Safety Fund (AISF) awards grants to accelerate research that identifies potential safety threats arising from the development and use of frontier AI models. As AI is increasingly applied across domains, we aim to avoid negative outcomes by funding research that identifies and assesses risks and improves the safe deployment of AI for the benefit of society.

To further these goals, the AISF also seeks to support technical research on AI agent identity verification systems and AI agent safety evaluations. This funding aims to promote the safe and responsible development of AI agents while establishing robust frameworks for agent authentication and verification, as well as synthetic media authentication.

In this funding round, we are prioritizing three critical research areas: Biosecurity, Cybersecurity, and AI Agent Evaluation & Synthetic Content. For more details on the research priorities in each domain, please review the RFP associated with this call for proposals.

CYBERSECURITY RFP

Schedule

Application period: November 18, 2024 – January 20, 2025
Question and Answer period: Questions due November 25, 2024; responses posted by December 9, 2024
Eligibility and Review period: January 21, 2025 – February 28, 2025
Award Notifications: April 2025

 

Update on Submitted Questions

The AISF team has reviewed the clarifying questions submitted before November 25, 2024, and provided detailed responses. You can access the answers here.

 

Eligibility

Specific qualifications vary for the Cybersecurity opportunity; applicants should carefully review the qualification and eligibility requirements to ensure their application is reviewed.

The AISF makes grants to independent researchers affiliated with academic institutions, research institutions, NGOs, and social enterprises across the globe whose work aims to promote the safe and responsible development of frontier models by testing, evaluating, and/or addressing safety and security risks. The AISF seeks to fund research that accelerates the identification of threats posed by the development and use of AI in order to prevent widespread harm.

The U.S. Department of the Treasury’s Office of Foreign Assets Control (OFAC) imposes restrictions on services and transactions with individuals or entities located in countries subject to comprehensive U.S. sanctions. As a result, the AISF is unable to award grants to applicants in the following countries and regions:

  • Cuba
  • Iran
  • North Korea
  • Syria
  • Russia
  • Regions of Ukraine: Crimea, Donetsk, Luhansk
  • Belarus 

Please note: The list of sanctioned countries and activities may change at any time without prior notice, in accordance with OFAC regulations. 

 

Evaluation Criteria

The AISF will engage third-party technical reviewers to evaluate proposals that meet the eligibility requirements. Some proposals may be reviewed, in part, by representatives of the Firms if there is a need to verify a proposal’s feasibility. Only information relevant to determining feasibility will be shared with industry partners.

All proposals will be reviewed against the following eight criteria. Specific considerations for Cybersecurity are outlined in the RFP.

  • Impact: potential to improve safety measures for deployed frontier AI models.
  • Feasibility of the proposed research.
  • Relevance of the proposed research to the field of AI safety.
  • Peer review: the research team’s approach to engaging the broader research community for feedback on the research findings.
  • Technical qualifications of the research team and their relevance to the domain under which the application is submitted.
  • Ethics of the research, including the specific safety protocols that will be employed to manage the immediate and long-term risk implications of the research findings.
  • Equity: how the research project will support advancing equity and diversity in the field.
  • Accessibility of the research findings, to promote transparency in the field. Researchers should also consider potential risks or harms that could result from the findings and provide justification for limiting access to them.

 

 

→ Submit your proposals for Cybersecurity here by January 20, 2025

BIOSECURITY RFP

Schedule

Application period: November 18, 2024 – January 20, 2025
Question and Answer period: Questions due November 25, 2024; responses posted by December 9, 2024
Eligibility and Review period: January 21, 2025 – February 28, 2025
Award Notifications: April 2025

 

Update on Submitted Questions

The AISF team has reviewed the clarifying questions submitted before November 25, 2024, and provided detailed responses. You can access the answers here.

 

Eligibility

Specific qualifications vary for the Biosecurity opportunity; applicants should carefully review the qualification and eligibility requirements to ensure their application is reviewed.

The AISF makes grants to independent researchers affiliated with academic institutions, research institutions, NGOs, and social enterprises across the globe whose work aims to promote the safe and responsible development of frontier models by testing, evaluating, and/or addressing safety and security risks. The AISF seeks to fund research that accelerates the identification of threats posed by the development and use of AI in order to prevent widespread harm.

The U.S. Department of the Treasury’s Office of Foreign Assets Control (OFAC) imposes restrictions on services and transactions with individuals or entities located in countries subject to comprehensive U.S. sanctions. As a result, the AISF is unable to award grants to applicants in the following countries and regions:

  • Cuba
  • Iran
  • North Korea
  • Syria
  • Russia
  • Regions of Ukraine: Crimea, Donetsk, Luhansk
  • Belarus 

Please note: The list of sanctioned countries and activities may change at any time without prior notice, in accordance with OFAC regulations. 

 

Evaluation Criteria

The AISF will engage third-party technical reviewers to evaluate proposals that meet the eligibility requirements. Some proposals may be reviewed, in part, by representatives of the Firms if there is a need to verify a proposal’s feasibility. Only information relevant to determining feasibility will be shared with industry partners.

All proposals will be reviewed against the following eight criteria. Specific considerations for Biosecurity are outlined in the RFP.

  • Impact: potential to improve safety measures for deployed frontier AI models.
  • Feasibility of the proposed research.
  • Relevance of the proposed research to the field of AI safety.
  • Peer review: the research team’s approach to engaging the broader research community for feedback on the research findings.
  • Technical qualifications of the research team and their relevance to the domain under which the application is submitted.
  • Ethics of the research, including the specific safety protocols that will be employed to manage the immediate and long-term risk implications of the research findings.
  • Equity: how the research project will support advancing equity and diversity in the field.
  • Accessibility of the research findings, to promote transparency in the field. Researchers should also consider potential risks or harms that could result from the findings and provide justification for limiting access to them.

 

 

→ Submit your proposals for Biosecurity here by January 20, 2025

AI AGENT EVALUATION & SYNTHETIC CONTENT RFP

Schedule

Application period: December 16, 2024 – January 31, 2025
Question and Answer period: The AISF will be accepting clarifying questions for the AI Agent Evaluation and Synthetic Content RFP here. Questions will be reviewed and answered on a rolling basis, with responses posted to the AISF website starting January 6, 2025.
Eligibility and Review period: January 31, 2025 – March 3, 2025
Award Notifications: April 2025

 

Update on Submitted Questions

The AISF team is currently reviewing submitted clarifying questions and providing detailed responses on a rolling basis. You can access the answers here.

 

Eligibility

Applicants should carefully review the qualification and eligibility requirements and AISF policies to ensure their application proceeds through review.

The AISF makes grants to independent researchers affiliated with academic institutions, research institutions, NGOs, and social enterprises across the globe whose work aims to promote the safe and responsible development of frontier models by testing, evaluating, and/or addressing safety and security risks. The AISF seeks to fund research that accelerates the identification of threats posed by the development and use of AI in order to prevent widespread harm.

The U.S. Department of the Treasury’s Office of Foreign Assets Control (OFAC) imposes restrictions on services and transactions with individuals or entities located in countries subject to comprehensive U.S. sanctions. As a result, the AISF is unable to award grants to applicants in the following countries and regions:

  • Cuba
  • Iran
  • North Korea
  • Syria
  • Russia
  • Regions of Ukraine: Crimea, Donetsk, Luhansk
  • Belarus 

Please note: The list of sanctioned countries and activities may change at any time without prior notice, in accordance with OFAC regulations. 

 

Evaluation Criteria

The AISF will engage third-party technical reviewers to evaluate proposals that meet the eligibility requirements. Some proposals may be reviewed, in part, by representatives of the Firms if there is a need to verify a proposal’s feasibility. Only information relevant to determining feasibility will be shared with industry partners.

All proposals will be reviewed against the following eight criteria. Specific considerations for AI Agent Evaluation and Synthetic Content are outlined in the RFP.

  • Impact: potential to improve safety measures for deployed frontier AI models.
  • Feasibility of the proposed research.
  • Relevance of the proposed research to the field of AI safety.
  • Peer review: the research team’s approach to engaging the broader research community for feedback on the research findings.
  • Technical qualifications of the research team and their relevance to the domain under which the application is submitted.
  • Ethics of the research, including the specific safety protocols that will be employed to manage the immediate and long-term risk implications of the research findings.
  • Equity: how the research project will support advancing equity and diversity in the field.
  • Accessibility of the research findings, to promote transparency in the field. Researchers should also consider potential risks or harms that could result from the findings and provide justification for limiting access to them.

 

 

→ Submit your proposals for AI Agent Evaluation and Synthetic Content here by January 31, 2025

Cybersecurity RFP

Explore research priorities, eligibility, and evaluation criteria for our Cybersecurity grant. Deadline: January 20, 2025

Biosecurity RFP

Explore research priorities, eligibility, and evaluation criteria for our Biosecurity grant. Deadline: January 20, 2025

AI Agent Evaluation & Synthetic Content RFP

Explore research priorities, eligibility, and evaluation criteria for our AI Agent Evaluation & Synthetic Content grant. Deadline: January 31, 2025