UC Berkeley Risk and Security Lab and the Council on Strategic Risks Launch Fellowship to Drive Leadership at the Intersection of AI and Nuclear
February 3, 2026
Berkeley, CA and Washington, DC – The Berkeley Risk and Security Lab (BRSL) and the Council on Strategic Risks (CSR) are delighted to announce the inaugural AIxNuclear Research Fellowship, supported by Longview Philanthropy and aimed at developing a pipeline of experts fluent in both nuclear and AI governance and security issues.
Despite growing risks across the AI and nuclear landscapes, there remains a shortage of talent that understands the technical and policy intricacies of each technology, their intersecting risks, governance challenges, and trajectories of development. This project seeks to develop expert talent and establish pathways to leadership by providing seven fellows with 10 months of mentored research, networking, professional development opportunities, and guidance.
We are excited to announce that the following fellows were selected for this inaugural cohort:
- Brandon Brown, Director, Transnational Threats Policy, US Government
- Rosemarie Frost, Principal Policy Analyst, SAIC
- Ted Fujimoto, Research Scientist, Pacific Northwest National Laboratory
- Garrett Hinck, PhD Candidate, Columbia University
- Eliana Johns, Senior Research Associate, Federation of American Scientists
- Barret Schlegelmilch, Technical Program Manager, Anthropic
- Parth V Shah, Principal Radio Frequency Engineer, Northrop Grumman
“Addressing catastrophic risks across the AI and nuclear landscape ultimately comes down to engaging and communicating with the relevant government stakeholders. With Longview’s support, we are excited to have the unique opportunity to establish a much-needed talent pipeline of experts with deep knowledge of the technical and policy intricacies, risks, and governance challenges at this intersection. This will enable the US government to plan strategically and reduce risks around these most consequential technologies.”
– Jessica Rogers (Senior Fellow, Nolan Center, Council on Strategic Risks), Leah Walker (Executive Director, Berkeley Risk and Security Lab), Prof. Andrew Reddie (Faculty Director, Berkeley Risk and Security Lab).
The fellowship kicked off in November 2025 with a trip to the Bay Area that included a two-day workshop on AI and early warning, a field trip to Lawrence Livermore National Laboratory, a nuclear nonproliferation roundtable with the Carnegie Corporation of New York, and an international AI governance roundtable with the Geneva-based Simon Institute.
The fellowship will continue by building the fellows’ research credentials: each fellow will conduct unclassified, open-source research, supplemented by interviews with academic, government, and industry stakeholders, on a research question at the intersection of AI and strategic stability, with regular guidance and mentoring from BRSL and CSR experts. The fellows will also participate in professional development activities, including instructional workshops on policy writing and delivering policy briefings. Trips and research exchanges will help the fellows build a robust network across both the AI and nuclear security and policy communities, providing them with a foundation that will serve them far beyond their time in the program.
The program will conclude with a trip to Washington, DC, in spring 2026, where fellows will engage with the offices, bureaus, and departments critical to AI and nuclear policymaking.
BRSL and CSR representatives are available for comment upon request.