Call for Applications: AIxNuclear Research Fellowship

The Berkeley Risk and Security Lab, in partnership with the Council on Strategic Risks, invites early- to mid-career professionals with an AI or nuclear security and policy background to take part in the AIxNuclear Fellowship. The Fellowship is structured around three primary areas: research, professional development, and networking.

The AIxNuclear Fellowship, a joint initiative from the Berkeley Risk and Security Lab (BRSL) and the Council on Strategic Risks (CSR), is designed to develop and support the next generation of experts who understand the complex and critical intersection between artificial intelligence (AI) and nuclear security. As the risks related to both AI and nuclear technologies continue to grow, there is an urgent need for professionals who can navigate the technical, policy, and governance challenges posed by these fields, particularly when they converge. However, expertise in this niche is rare, and there is a notable gap in government engagement. The AIxNuclear Fellowship aims to fill this void by cultivating a talented cohort of early- to mid-career professionals who can contribute to risk reduction, governance, and strategic planning concerning AI and nuclear systems.

The program will select six fellows – three with AI expertise and three with nuclear security and policy backgrounds – who will undertake a 10-month, part-time fellowship combining mentored research, professional development, and networking opportunities. Fellows will be paired – one nuclear fellow with one AI fellow – to engage in unclassified, open-source research on AIxNuclear issues. Research questions may focus on the potential impacts of AI integration in military operations, how AI could affect nuclear strategy, and the feasibility of governance frameworks to manage these challenges. Each pair's research will culminate in a co-authored report, which will be reviewed and published by BRSL and CSR at the end of the fellowship. Fellows will also participate in workshops and briefings on policy writing, governance structures, and emerging technology, helping them develop the skills needed for future public service.

We thank Longview Philanthropy for their generous support in making this program happen.

Eligibility and Application Instructions

To be eligible for the AIxNuclear Fellowship, you must: 

  • Be a US citizen,
  • Come from either an AI or nuclear security and policy background, and
  • Have at least 5 years of full-time experience in your field or an advanced degree and 2 years of full-time experience. 

The deadline to apply is midnight PDT on Tuesday, September 30, 2025. To apply, you must submit via the application portal: (1) a completed application form, (2) a one-page statement of interest, (3) a current resume or curriculum vitae, and (4) a letter of recommendation. We intend to interview shortlisted candidates and announce selection decisions by mid-October.

Applications are now closed.

Program Activities

The program will focus on three main areas: research, professional development, and networking and community building. 

Research:

During the part-time fellowship, fellows will build their research credentials by conducting unclassified, open-source research, complemented by interviews with key stakeholders in academia, government, and industry, on a research question at the intersection of AI and nuclear security. Each pair of fellows will have regular meetings with BRSL, CSR, and their networks for brainstorming, mentoring, and guidance on policy writing. 

Examples of possible research questions include:

  • How might AI integration influence the probability of miscalculation or inadvertent escalation during a crisis involving nuclear powers?
  • How is AI being integrated into military operations that could spill over into nuclear weapons applications (e.g., conventional weapon targeting, early warning, and strategic decision-making)?
  • How does the application of AI alter the concepts of deterrence, escalation control, and crisis management among nuclear-armed states?
  • What are the primary risks associated with the military use of AI in nuclear-armed states, such as reduced decision time, automation bias, and asymmetric overreliance?
  • To what extent do current and proposed risk mitigation frameworks—such as the U.S. Political Declaration on Responsible Military Use of AI, the Group of Governmental Experts on Lethal Autonomous Weapon Systems, and AI Labs’ socio-technical measures—address the unique challenges posed by AI in nuclear strategy?
  • What are the limitations of existing US, foreign, and international governance measures in creating transparency, accountability, and crisis stability among adversarial states?
  • What unilateral measures (e.g., doctrine revision, military training, auditability of AI systems) could a nuclear-armed state adopt to improve the safety and stability of AI integration?
  • What feasible reciprocal or informal measures (such as AI codes of conduct or technical confidence-building mechanisms) could nuclear weapons states adopt to reduce AI-associated risks in the absence of formal arms control agreements?

This research will culminate in a co-authored report from each pair of fellows, jointly published by BRSL and CSR in a report series for this cohort. 

Fellows will also be encouraged to work with the BRSL and CSR communications teams and project mentors to identify shorter writing opportunities to disseminate their research findings, including, but not limited to, journal articles, op-eds, and blog posts. 

Professional Development:

As the fellows build up their research credentials and familiarity with topics within the AIxNuclear nexus, they will also engage in professional development activities, preparing them for their next career steps. These activities will include:

  • A policy writing workshop to prepare fellows for writing, reviewing, and commenting on policy documents. 
  • Sessions highlighting the different government entities involved in nuclear weapons and AI responsibilities, to build familiarity with the government ecosystem of AI and strategic stability issues.
  • Briefings on emerging tech and strategic stability issues from the BRSL and CSR research teams, to provide fellows with further context and the latest research in the space.
  • Mentoring sessions with BRSL and CSR senior staff to advise on research, career planning, and post-fellowship opportunities. Fellows will also have the chance to engage with current and former government officials through both institutions, to better shape their understanding of governmental processes and ecosystems.
  • The opportunity to brief research findings to key stakeholders in government and civil society, allowing fellows to practice their research communication and policy briefing skills. 

Networking and Community Building:

Throughout the program, fellows will have the opportunity to grow their networks and engage with both the AI and nuclear security and policy communities. These connections, and a deeper understanding of these ecosystems, should serve fellows far beyond their time in the program.

Fellows’ Trip to the San Francisco Bay Area in the first week of November: Fellows will travel to the Bay Area to engage with AI companies, national laboratories, defense technology companies, and academic researchers. The 4-day trip will provide the fellows a chance to immerse themselves in the Bay Area ecosystem, meet in person as a cohort for the first time, and build connections in Silicon Valley.

Fellows’ Trip to Washington, DC, in Spring: Fellows will travel to DC to engage with US and allied government entities, with a view to deepening their understanding of, and familiarity with, relevant government ecosystems. The 4-day trip will also introduce the fellows to potential government employers.

Research Exchanges and Showcase: Fellows will have the opportunity to present their research findings in a webinar launch event for the report series. Fellows will also participate in a research showcase, sharing their research findings with the BRSL and CSR teams and hearing from BRSL and CSR researchers about their latest work on nuclear and AI issues.

Career Support

At the core of the program is the idea of building a better pipeline for AIxNuclear experts to enter public service. As such, the latter part of the fellowship will focus on supporting interested fellows in transitioning to public service, drawing on the research credentials and connections they build throughout the fellowship. The DC trip, in particular, will expose fellows to relevant offices, bureaus, and departments, and help build networks around those potential future landing spots.

Program Calendar

September 4, 2025: Call for applications

October 1, 2025: End of application period, start of review period

Mid-October 2025: Selection and notification of 3 nuclear and 3 AI fellows

November 2025: Fellow onboarding

November 2025: 4-day Bay Area visit, focused on artificial intelligence

November 2025 – March 2026: Research phase for fellow projects, ending in research showcases with BRSL and CSR teams

March or April 2026: 4-day DC trip, focused on nuclear policy

March – July 2026: Career support

April – June 2026: Report drafting phase for fellow projects

June/July 2026: Research report finalization

August 2026: Program wrap-up, report publication

August 30, 2026: Program end