AI Red-Teaming Bootcamp

The Berkeley Risk and Security Lab invites early-career professionals with expertise in CBRN, cybersecurity, machine learning, disinformation studies, or other fields relevant to AI safety to take part in UC Berkeley’s inaugural AI Red-Teaming Bootcamp class.

As commercial artificial intelligence models have become increasingly sophisticated, leading AI companies have turned to red teaming as a tool for improving model safety and identifying potential cases of misuse. As a result, AI red teaming is becoming a discipline in its own right within AI safety and security. To grow this discipline and help train the next generation of AI safety experts, the Berkeley Risk and Security Lab created the AI Red-Teaming Bootcamp with support from Open Philanthropy.

The bootcamp is hosted on UC Berkeley’s campus and features instructors and lecturers from the private sector, government (including the national labs), academia, and civil society.

Participants thus get the opportunity to engage with these experts and understand their different approaches and priorities with regard to AI safety. Participants will also learn about the history of red teaming, understand the current red-teaming landscape, and work through a series of red-teaming exercises. At the end of the week, we hope that all participants, both those within the AI safety world and those outside it, will leave with a better understanding of how AI systems work, how they fail, and how those failures can be detected.

Please contact cwagner@berkeley.edu with any questions. See below for more information about eligibility and the application process.

We thank Open Philanthropy for their support of this program. 

Eligibility

In order to be eligible for the AI Red-Teaming Bootcamp, you must: 

  • Be an early-career professional (PhD candidates, post-doctoral fellows, and professionals with 2-7 years of professional experience are eligible).
  • Have expertise in CBRN, cybersecurity, machine learning, disinformation studies, or other fields relevant to AI security.
  • Be available to participate in one week (six days) of full-day, in-person activities from July 27 (evening) through August 1.

BRSL will cover all travel and accommodation expenses unless prohibited by a participant’s organization.

Application

Interested individuals may apply at any time on the BRSL website once an application cycle has been opened for a bootcamp. To apply, you must submit:

  • Application Form
  • One-page statement about your background, reason for applying to the AI Red-Teaming Bootcamp, and how the bootcamp will benefit your career.
  • Current resume or curriculum vitae (two pages).

The application for the Summer 2025 bootcamp is currently open with materials due by March 24, 2025. To apply, please submit the Application for Admission with the required materials.

All applications received by the deadline will be reviewed. Late submissions will not be accepted. Admission decisions will be sent out by April 15, 2025.

    Summer 2025 Bootcamp

    In Summer 2025, BRSL will host its inaugural AI Red-Teaming Bootcamp. Participants are required to be available from July 27 to August 1 for one week of full-day, in-person workshops. The bootcamp will be hosted on UC Berkeley’s campus.

    The following are key dates for the Summer 2025 AI Red-Teaming Bootcamp:

    • February 18 – Bootcamp announced and application opened on BRSL website
    • March 24 – Application deadline
    • April 15 – Admission decisions sent out
    • Early July – Agenda and materials sent to participants
    • July 27 – August 1 – AI Red-Teaming Bootcamp