Artificial General Intelligence's Five Hard National Security Problems
Expert Insights | Published Mar 25, 2025
Testimony presented before the U.S. Senate Committee on Armed Services, Cybersecurity Subcommittee on March 25, 2025.
Jim Mitre
Vice President and Director, RAND Global and Emerging Risks
What would the U.S. government do if, in the next few years, a leading AI company announced that its forthcoming model could produce, at the touch of a button, the equivalent of one million computer programmers, each as capable as the top 1 percent of human programmers? The national security implications are substantial: such a capability could significantly disrupt the current cyber offense-defense balance.
Our work has revealed that AGI presents five hard national security problems.
First, AGI might enable a significant first-mover advantage via the sudden emergence of a decisive wonder weapon: for example, a capability so proficient at identifying and exploiting vulnerabilities in enemy cyber defenses that it provides what might be called a splendid first cyber strike.
Second, AGI might cause a systemic shift in the instruments of national power that alters the balance of global power. As U.S., allied, and rival militaries gain access to AGI and adopt it at scale, it could upend military balances by affecting key building blocks of military competition, such as hiders versus finders, precision versus mass, or centralized versus decentralized command and control.
Third, AGI might serve as a "malicious mentor" that explains and contextualizes the specific steps that non-experts can take to develop dangerous weapons.
Fourth, AGI might achieve enough autonomy and behave with enough agency to be considered an independent actor on the global stage. Consider an AGI with advanced computer programming abilities that is able to "break out of the box" and engage with the world across cyberspace.
Fifth, the pursuit of AGI could foster a period of instability as nations and corporations race to achieve dominance in this transformative technology. This competition might lead to heightened tensions, reminiscent of the nuclear arms race, such that the quest for superiority risks triggering rather than deterring conflict.
As the U.S. Department of Defense embarks on developing the National Defense Strategy, it will have to grapple with how advanced AI will affect cyber, along with all other domains.
First, it's helpful for the U.S. government to understand the current state of the technology and to make sure that folks within the government, particularly those working in the national security community, understand what's happening with it. One of the challenges with this technology is that it's not being developed by government.

The second thing that government should be doing is looking for applications in the national security context. What are the specific use cases that can be applied? What are potential pathways to a wonder weapon, or ways in which AGI could be highly advantageous in a military competition? Answering those questions is critical, and it means having the AI in an environment with sufficient compute, the right networks, and so on, where you can actively experiment with it and get the technology into the hands of the operators to play around with.

The third thing is preparing for contingencies. There's a wide range of possible things that could happen: a loss-of-control scenario, for example, or a technological surprise in which the Chinese get ahead. What would the U.S. government do in such contingencies? We should think that through in advance and have plans ready to address it.
This publication is part of the RAND testimony series. RAND testimonies record testimony presented by RAND associates to federal, state, or local legislative committees; government-appointed commissions and panels; and private review and oversight bodies.
This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit www.rand.org/pubs/permissions.
RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.