Evaluating the Effectiveness of Artificial Intelligence Systems in Intelligence Analysis

Daniel Ish, Jared Ettinger, Christopher Ferris

Research | Published Aug 26, 2021

The U.S. military and intelligence community have shown interest in developing and deploying artificial intelligence (AI) systems to support intelligence analysis, both as an opportunity to leverage new technology and as a solution to an ever-growing glut of data. However, deploying AI systems in a national security context requires the ability to measure how well those systems will perform in the context of their mission.

To address this issue, the authors begin by introducing a taxonomy of the roles that AI systems can play in supporting intelligence (namely, automated analysis, collection support, evaluation support, and information prioritization) and qualitatively analyze, for each category, how system performance drives mission impact.

The authors then single out for quantitative analysis information prioritization systems, which direct intelligence analysts' attention to useful information and allow them to pass over the rest. Developing a simple mathematical model that captures the consequences of errors made by such systems, the authors show that a system's efficacy depends not just on its own properties but also on how it is used. Through this exercise, the authors show how both the calculated impact of an AI system and the metrics used to predict that impact can characterize the system's performance in a way that helps decisionmakers understand its actual value to the intelligence mission.
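
To give a concrete flavor of such a model, the following minimal sketch (in Python) computes the expected number of useful items an analyst finds under a hypothetical triage workflow. The workflow, function names, and parameters here are illustrative assumptions, not the report's actual model: a prioritization system flags items with a given recall and false-positive rate, and the analyst reviews a fixed budget of items, flagged items first.

    # Illustrative sketch only: a hypothetical triage model, not the
    # report's actual mathematics. An analyst reviews up to `budget`
    # items (0 <= budget <= n_items), flagged items before unflagged.

    def expected_useful_found(n_items, prevalence, recall, fpr, budget):
        """Expected number of useful items the analyst finds."""
        useful = n_items * prevalence
        flagged_useful = useful * recall
        flagged = flagged_useful + fpr * (n_items - useful)
        if budget < flagged:
            # The analyst never clears the flagged queue.
            return budget * flagged_useful / flagged
        # Flagged queue cleared; leftover budget samples unflagged items.
        unflagged_useful = useful - flagged_useful
        remaining = n_items - flagged
        if remaining == 0:
            return flagged_useful  # everything was flagged
        return flagged_useful + (budget - flagged) * unflagged_useful / remaining

    def baseline_useful_found(n_items, prevalence, budget):
        """Expected useful items found by unaided random review."""
        return budget * prevalence

Even this toy version exhibits the report's central point: the system's value is the gap between the two functions, and that gap depends on the review budget (how the system is used), not only on recall and false-positive rate (the system's properties).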

Key Findings

Using metrics not matched to actual priorities obscures system performance and impedes informed choice of the optimal system

  • Metric choice should take place before the system is built and should be guided by attempts to estimate the real impact of system deployment; the invented-numbers example below shows how a mismatched metric can favor the wrong system.
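
As a hedged illustration of this finding (the numbers are invented, not the report's data): below, hypothetical System A beats System B on overall accuracy, yet System B surfaces more than twice as many useful items within a realistic review budget.

    # Invented numbers: 10,000 items, 100 of them useful (1%).
    n, useful = 10_000, 100

    # System A: few false alarms, but misses most useful items.
    a_tp, a_fp = 40, 10
    a_accuracy = (a_tp + (n - useful - a_fp)) / n      # 0.993

    # System B: catches most useful items, at the cost of more false alarms.
    b_tp, b_fp = 90, 300
    b_accuracy = (b_tp + (n - useful - b_fp)) / n      # 0.969

    # With a 400-item review budget (flagged items reviewed first), both
    # systems' flagged queues fit within the budget, so the analyst sees
    # 40 useful items with System A versus 90 with System B.
    print(f"A: accuracy={a_accuracy:.3f}, useful items surfaced: {a_tp}")
    print(f"B: accuracy={b_accuracy:.3f}, useful items surfaced: {b_tp}")

Ranking by accuracy alone would select System A; ranking by useful items surfaced within the analysts' actual budget selects System B.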

Effectiveness, and therefore the metrics that measure it, can depend not just on system properties but also on how the system is used

  • A key consideration for decisionmakers is the amount of resources devoted to the mission beyond those devoted to building the system; the worked example below shows how the same system's payoff shifts with analyst capacity.
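
To illustrate this second finding with the same hypothetical triage logic (again with invented numbers rather than the report's model): an identical system delivers very different value depending on how many items analysts can review without it.

    # Invented numbers: one system (recall 0.9, false-positive rate 0.03)
    # triaging 10,000 items of which 1% are useful, at two staffing levels.
    n, prevalence = 10_000, 0.01
    useful = n * prevalence                          # 100 useful items
    flagged_useful = 0.9 * useful                    # 90
    flagged = flagged_useful + 0.03 * (n - useful)   # 387 flagged items

    for budget in (400, 5_000):
        if budget < flagged:
            found = budget * flagged_useful / flagged
        else:
            found = flagged_useful + (budget - flagged) * (useful - flagged_useful) / (n - flagged)
        baseline = budget * prevalence               # unaided random review
        print(f"budget={budget:>5}: ~{found:.0f} useful items with the "
              f"system vs. ~{baseline:.0f} without")

With a 400-item budget, the system multiplies the analysts' yield roughly twentyfold; with a 5,000-item budget, the same system improves yield by less than a factor of two. The system did not change; the resources around it did.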

Recommendations

  • Begin with the right metrics. This requires having a detailed understanding of the way an AI system will be used and choosing metrics that reflect success with respect to this utilization.
  • Reevaluate (and retune) regularly. Because the world around the system continues to evolve after deployment, system evaluation must continue as a portion of regular maintenance.
  • Speak the language. System designers have a well-established set of metrics for capturing the performance of AI systems, and being conversant in these traditional metrics will ease communication with experts when designing a new system or maintaining an existing one (a short sketch of the standard definitions follows this list).
  • Conduct further research into methods of evaluating AI system effectiveness.
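
For reference, the traditional metrics mentioned in the third recommendation are typically derived from a confusion matrix. The sketch below shows the standard definitions; the counts are invented for illustration.

    # Standard classification metrics from confusion-matrix counts.
    # tp/fp/fn/tn values are invented for illustration.
    tp, fp, fn, tn = 90, 300, 10, 9_600

    precision = tp / (tp + fp)                 # flagged items that were useful
    recall = tp / (tp + fn)                    # useful items that were flagged
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * precision * recall / (precision + recall)

    print(f"precision={precision:.3f} recall={recall:.3f} "
          f"accuracy={accuracy:.3f} f1={f1:.3f}")

As the earlier examples suggest, these standard metrics are the right vocabulary for talking with system designers, but none of them substitutes for an estimate of mission impact.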

Document Details

  • Availability: Available
  • Year: 2021
  • Print Format: Paperback
  • Paperback Pages: 108
  • Paperback Price: $23.00
  • Paperback ISBN/EAN: 978-1-9774-0725-2
  • DOI: https://doi.org/10.7249/RR-A464-1
  • Document Number: RR-A464-1

Citation

RAND Style Manual

Ish, Daniel, Jared Ettinger, and Christopher Ferris, Evaluating the Effectiveness of Artificial Intelligence Systems in Intelligence Analysis, RAND Corporation, RR-A464-1, 2021. As of April 8, 2025: https://www.rand.org/pubs/research_reports/RRA464-1.html

Chicago Manual of Style

Ish, Daniel, Jared Ettinger, and Christopher Ferris, Evaluating the Effectiveness of Artificial Intelligence Systems in Intelligence Analysis. Santa Monica, CA: RAND Corporation, 2021. https://www.rand.org/pubs/research_reports/RRA464-1.html. Also available in print form.

This research was sponsored by the Office of the Secretary of Defense and conducted within the Cyber and Intelligence Policy Center of the RAND National Security Research Division (NSRD).

This publication is part of the RAND research report series. Research reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND research reports undergo rigorous peer review to ensure high standards for research quality and objectivity.

This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit www.rand.org/pubs/permissions.

RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.