The Operational Risks of AI in Large-Scale Biological Attacks
Results of a Red-Team Study
Research | Published Jan 25, 2024
The authors of this report share the final results of a study of the potential risks of artificial intelligence (AI) in the context of biological weapon attacks. The authors sought to identify potential risks posed by AI misuse, generate policy insights to mitigate those risks, and contribute to responsible AI development. The findings indicate that using the existing generation of large language models did not measurably change the risk of such an attack.
The rapid advancement of artificial intelligence (AI) has far-reaching implications across multiple domains, including concerns about its potential misuse in the development of biological weapons. This application of AI is particularly worrisome because the technology is accessible to nonstate actors and individuals. The speed at which AI technologies evolve often outpaces the capacity of government regulatory oversight, leaving potential gaps in existing policies and regulations.
In this report, the authors share the final results of a study of the potential risks of using large language models (LLMs) in the context of biological weapon attacks. They conducted an expert exercise in which teams of researchers, role-playing as malign nonstate actors, were assigned realistic scenarios and tasked with planning a biological attack; some teams had access to an LLM in addition to the internet, while others had access to the internet only. The authors sought to identify potential risks posed by LLM misuse, generate policy insights to mitigate those risks, and contribute to responsible LLM development. The findings indicate that using the existing generation of LLMs did not measurably change the operational risk of such an attack.
Funding for this research was provided by gifts from RAND supporters and income from operations. The research was conducted by the Technology and Security Policy Center within RAND Global and Emerging Risks.
This publication is part of the RAND research report series. Research reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND research reports undergo rigorous peer review to ensure high standards for research quality and objectivity.
RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.