The Operational Risks of AI in Large-Scale Biological Attacks

Results of a Red-Team Study

Christopher A. Mouton, Caleb Lucas, Ella Guest

Research report published Jan 25, 2024

The rapid advancement of artificial intelligence (AI) has far-reaching implications across multiple domains, including concerns about its potential use in developing biological weapons. This application of AI raises particular concern because the technology is accessible to nonstate actors and individuals. The speed at which AI technologies evolve often outpaces government regulatory oversight, leaving potential gaps in existing policies and regulations.

In this report, the authors share the final results of a study of the potential risks of using large language models (LLMs) in the context of biological weapon attacks. They conducted an expert exercise in which teams of researchers role-playing as malign nonstate actors were assigned realistic scenarios and tasked with planning a biological attack; some teams had access to an LLM in addition to the internet, while others had access only to the internet. The authors sought to identify potential risks posed by LLM misuse, generate policy insights to mitigate any risks, and contribute to responsible LLM development. The findings indicate that using the existing generation of LLMs did not measurably change the operational risk of such an attack.
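As a purely illustrative sketch of the kind of two-group comparison described above, and not the authors' actual analysis, the exercise outcome can be framed as a test of plan-viability scores for cells with and without LLM access. The scores, the 1-to-9 scale, and the choice of a Mann-Whitney U test below are assumptions made for demonstration only.

    # Illustrative only: hypothetical plan-viability scores (1-9 scale) for red-team cells.
    # The data and the choice of test are assumptions for demonstration, not the study's own.
    from scipy.stats import mannwhitneyu

    internet_only = [2.0, 3.5, 1.5, 4.0, 2.5, 3.0]   # hypothetical internet-only cells
    llm_assisted = [3.0, 2.5, 4.0, 1.5, 3.5, 2.0]    # hypothetical LLM-plus-internet cells

    # Two-sided nonparametric comparison of viability scores between the two groups.
    result = mannwhitneyu(internet_only, llm_assisted, alternative="two-sided")
    print(f"U = {result.statistic:.1f}, p = {result.pvalue:.3f}")

    # A large p-value would be consistent with the report's finding of no statistically
    # significant difference in plan viability with or without LLM assistance.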

Key Findings

  • This research involving multiple LLMs indicates that biological weapon attack planning currently lies beyond the capability frontier of LLMs as assistive tools. The authors found no statistically significant difference in the viability of plans generated with or without LLM assistance.
  • This research did not measure the distance between the existing LLM capability frontier and the knowledge needed for biological weapon attack planning. Given the rapid evolution of AI, it is prudent to monitor future developments in LLM technology and the potential risks associated with its application to biological weapon attack planning.
  • Although the authors identified what they term unfortunate outputs from LLMs (in the form of problematic responses to prompts), these outputs generally mirror information readily available on the internet, suggesting that LLMs do not substantially increase the risks associated with biological weapon attack planning.
  • In possible future research, the authors would aim to increase the sensitivity of these tests by expanding the number of LLMs tested, involving more researchers, and removing unhelpful sources of variability in the testing process. These efforts would help ensure a more accurate assessment of potential risks and offer a proactive way to manage the evolving measure-countermeasure dynamic.

Document Details

Citation

RAND Style Manual

Mouton, Christopher A., Caleb Lucas, and Ella Guest, The Operational Risks of AI in Large-Scale Biological Attacks: Results of a Red-Team Study, RAND Corporation, RR-A2977-2, 2024. As of April 30, 2025: https://www.rand.org/pubs/research_reports/RRA2977-2.html

Chicago Manual of Style

Mouton, Christopher A., Caleb Lucas, and Ella Guest, The Operational Risks of AI in Large-Scale Biological Attacks: Results of a Red-Team Study. Santa Monica, CA: RAND Corporation, 2024. https://www.rand.org/pubs/research_reports/RRA2977-2.html.

Funding for this research was provided by gifts from RAND supporters and income from operations. The research was conducted by the Technology and Security Policy Center within RAND Global and Emerging Risks.

This publication is part of the RAND research report series. Research reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND research reports undergo rigorous peer review to ensure high standards for research quality and objectivity.

This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit www.rand.org/pubs/permissions.

RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.