National Security Up Front

New Insights from the RAND National Security Research Division

December 16, 2024


Dr. Li Bicheng, or How China Learned to Stop Worrying and Love Social Media Manipulation

by Nathan Beauchamp-Mustafaga, Kieran Green, William Marcellino, Sale Lilly, and Jackson Smith

Detail from the 'The Sciences of the Artificial' lecture poster, 1960s, by Dietmar Winkler/MIT


China is leveraging social media and other platforms for overt propaganda and covert influence operations. The goal: influence domestic and foreign public opinion in Beijing's favor. While China's results have been limited so far, the rise of generative artificial intelligence (AI) could dramatically improve China's capabilities, posing a serious threat to democracies across the globe.

To shed light on China's evolving use of social media manipulation, recent RAND analysis examined the work of Dr. Li Bicheng, a career researcher affiliated with the Chinese military and one of China's leading military experts on generative AI. Li is also an apparent advocate of using AI to run large-scale networks of social bots to influence foreign public opinion.

The United States and other democracies should take action to prepare for China's AI-driven social media manipulation—and limit its impacts. This includes redoubling efforts to combat fake social media accounts; promoting media literacy and ways to distinguish authentic content online; and increasing diplomatic coordination, especially with Taiwan, to share best practices for countering China's influence operations.


What are the advantages of using AI to manipulate social media?


Prior RAND work characterized the implications of generative AI for social media manipulation. In the early years of social media manipulation (generation 1.0), operators used basic computer programs to run semiautomated bots that posted human-generated, nontailored content. Social media manipulation 2.0 added low-quality manipulated videos, computer-generated content at limited scale, and some distribution by procedural bots. Current social media manipulation 3.0, powered by AI, dramatically increases the plausibility of the messenger, blurring the line between reality and fabrication. A primary concern is astroturfing: a tactic designed to create the illusion of widespread social consensus on specific issues.


What AI-based social media manipulation initiatives has China already undertaken?


There is evidence that Chinese military researchers are already adapting open-source U.S. large language models (LLMs), such as Meta's Llama, for some military purposes, and some Chinese military researchers have explicitly expressed interest in using LLMs for social media manipulation. This evidence aligns with public reporting by Microsoft and OpenAI earlier this year that Chinese government-affiliated actors have already deployed generative AI content. Our analysis of Li Bicheng's scholarly output offers an opportunity to look more deeply into China's evolving embrace and deployment of AI and LLMs in support of influence operations.


How should the United States and other global democracies prepare for China's AI-driven social media manipulation?


Early experimentation with generative AI for social media manipulation highlights the low barrier to adoption and the high potential for performance improvement. The United States and other global democracies should prepare for AI-driven social media manipulation by pursuing these four lines of effort:

  • Adopt risk-reduction measures. The United States should establish protocols to identify and mitigate the risks associated with AI-driven manipulation of the information environment.
  • Promote media literacy and government trustworthiness. The United States and allies should enhance public understanding of media trustworthiness, ensuring citizens critically evaluate the media content they consume.
  • Increase public reporting of Chinese activities. Fostering more public reporting on Chinese activities in the information environment will support efforts to recognize and respond to manipulation efforts.
  • Strengthen diplomatic coordination. The United States can work closely with allies and partners to develop a unified response to the challenges of generative AI's increased use in information manipulation.

In addition, the U.S. government should conduct an independent, comprehensive evaluation of its own information efforts and ensure that the benefits outweigh the costs. Washington should also consider engaging with Beijing to discuss restricting AI-driven influence operations.

Li Bicheng's research offers a unique entry point for us to understand China's growing whole-of-government approach to public opinion manipulation. Our analysis indicates that the Chinese military and the broader Party-state apparatus are well-positioned to operationalize social media manipulation tactics. Generative AI appears poised to reshape the landscape of influence operations; it is imperative that democracies take immediate steps to prepare for an increasingly complex information environment.


This summary is derived from Dr. Li Bicheng, or How China Learned to Stop Worrying and Love Social Media Manipulation: Insights Into Chinese Use of Generative AI and Social Bots from the Career of a PLA Researcher (RR-A2679-1) by Nathan Beauchamp-Mustafaga, Kieran Green, William Marcellino, Sale Lilly, and Jackson Smith.

For more information about this study, reply to this message and we'll connect you to the authors.
