Why the Pentagon’s Disinformation Campaigns Crashed and Burned

A look at recent covert online operations suggests their effectiveness in influencing public opinion has been greatly exaggerated.

The Pentagon announced this week that it will conduct a full-scale evaluation of its psychological operations capabilities, following revelations that it has been conducting covert online disinformation campaigns. Alongside analyzing the legality of such operations, the review should seek to answer a more fundamental question: do these operations actually work?

The Washington Post, in its exclusive reporting on the story, states that roughly 150 U.S.-based social media accounts were identified and terminated as fakes by Facebook and Twitter over the past few years. Many of these accounts are suspected of having been created and managed by Department of Defense agencies or contractors, and the overwhelming majority gained little to no traction in what appears to have been their principal purpose: weakening support for U.S. adversaries by posting fictional accounts of atrocities and other falsehoods online.

This failure is striking. It runs counter to the commonly accepted view that organized disinformation campaigns on social media, sowing societal discontent at scale, are among the most important cyber weapons in the arsenal of sophisticated state actors. That dogma has arisen over the past dozen years, mostly in the context of Russia’s adventurism in its near abroad. The phenomenon began with Russia’s 2008 incursion into Georgia, where it seeded online discourse with fabricated stories of Georgian aggression, and continued with the waves of Kremlin disinformation about the supposed illegitimacy of the Ukrainian nation in the lead-up to Russia’s 2014 annexation of Crimea, a theme that continues unabated in today’s war in Ukraine.

The crown jewel of Russia’s online disinformation is, of course, commonly held to be its campaign to influence the 2016 U.S. presidential election (recall the Internet Research Agency and its call-center “trolls”). Six years later, there is a general consensus that Russia succeeded in sowing discontent across America in the lead-up to the election. Corners of the intelligence community, along with some particularly partisan pundits, assert that the influence was material to the outcome of several electoral districts and, by extension, the election itself.

Perhaps owing to the innate mystique of the narrative, anchored as it is in deception and intrigue, the possibility that these disinformation campaigns do not actually achieve meaningful outcomes for their perpetrators is often ignored. Moreover, the complex and continuously evolving nature of the social media platforms on which the campaigns run makes their impact genuinely difficult to measure.

Does Disinfo Move the Dial?

Social media platforms like Facebook and Twitter are assets of for-profit corporations and exist to generate revenue. That revenue is tied to continuously increasing user traffic and content, so platform functionality is ever-evolving: video posts displaced photo posts, which themselves displaced text posts. In tandem, a relentless focus on user experience makes creating and uploading content ever easier. The result is platforms carrying ever-growing streams of data in continuously morphing forms.

The first step in identifying disinformation buried in this massive ebb and flow is to create repositories into which large subsets of the data can be herded and interrogated. Most of this data, however, is unstructured: the string of characters in a text post is, from a data-management perspective, close to random. The most logical repository is therefore one that accepts a wide range of data types, such as a “data lake.” As data is herded into these repositories, uniform characteristics must be identified and tagged to make the data queryable. This begins with the metadata of each unique piece of data, such as the timestamp of a photo, but must also include more qualitative characteristics, such as the “tone” of a text post.
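
To make the idea concrete, here is a minimal sketch in Python of what such tagging might look like. The schema, the seed lexicon, and the “tone” rule are invented for illustration; they are not drawn from any actual platform or Pentagon tooling.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative seed lexicon; not a real analytic product.
NEGATIVE_WORDS = {"atrocity", "massacre", "corrupt", "illegitimate"}

@dataclass
class TaggedPost:
    post_id: str
    platform: str                 # e.g. "facebook", "twitter"
    posted_at: datetime           # metadata every post type shares
    text: str | None = None
    tags: dict = field(default_factory=dict)

def tag_post(post: TaggedPost) -> TaggedPost:
    """Attach uniform, queryable characteristics to an unstructured post."""
    post.tags["hour_utc"] = post.posted_at.astimezone(timezone.utc).hour
    if post.text:
        words = {w.strip(".,!?").lower() for w in post.text.split()}
        # Qualitative tag: a naive tone label from the seed lexicon.
        post.tags["tone"] = "negative" if words & NEGATIVE_WORDS else "neutral"
    return post

repository = [
    tag_post(TaggedPost("p1", "twitter", datetime(2022, 9, 1, 14, 5),
                        "Shocking reports of an atrocity near the border")),
    tag_post(TaggedPost("p2", "facebook", datetime(2022, 9, 1, 9, 30),
                        "Lovely weather in the capital today")),
]
print([p.post_id for p in repository if p.tags["tone"] == "negative"])  # ['p1']
```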

This latter, qualitative step is where most of the defense community’s attempts to bound the data fall short. Codifying qualitative traits is inexact, messy, and unscientific, and as such is often skipped. When analysts then attempt to understand whether a disinformation campaign has affected “hearts and minds,” they remain blind to these critical characteristics and rely instead on simple algorithmic tools, such as counting the number of reposts or positive affirmations (“likes”) a given piece of content received over a period of time.
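
The sketch below shows just how blunt that kind of counting is; the event data and field names are hypothetical.

```python
from datetime import datetime, timedelta

# Hypothetical engagement events: (post_id, kind, timestamp).
events = [
    ("p1", "like",   datetime(2022, 9, 1, 15, 0)),
    ("p1", "repost", datetime(2022, 9, 2, 11, 0)),
    ("p1", "like",   datetime(2022, 9, 9, 8, 0)),
]

def engagement_count(events, post_id, start, window=timedelta(days=7)):
    """Count likes and reposts for one post within a time window."""
    return sum(1 for pid, kind, ts in events
               if pid == post_id
               and kind in ("like", "repost")
               and start <= ts < start + window)

# Two of the three events fall inside the first week; nothing here
# says anything about whether anyone's opinion actually changed.
print(engagement_count(events, "p1", datetime(2022, 9, 1)))  # 2
```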

Borrowed archetypes can help remove this limitation. Defense analysts should take a page from the playbooks of marketing and consumer-behavior practitioners, who have used trial and error to produce commonly accepted “user sentiment” scores built on nuanced variables: descriptor words in text posts (adjectives, adverbs), keyword combinations, and even velocity scores that correlate the timing and pacing of posts with emotion.
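
The following sketch illustrates the flavor of these techniques, assuming an invented descriptor-word lexicon and a simple posts-per-hour velocity measure; real marketing models are considerably more sophisticated.

```python
from datetime import datetime

# Invented descriptor-word weights, for illustration only.
DESCRIPTOR_WEIGHTS = {"horrific": -2.0, "corrupt": -1.5, "brave": 1.0, "great": 1.0}

def sentiment_score(text: str) -> float:
    """Sum the weights of known descriptor words (adjectives/adverbs) in a post."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    return sum(DESCRIPTOR_WEIGHTS.get(w, 0.0) for w in words)

def velocity_score(timestamps: list[datetime]) -> float:
    """Posts per hour across a burst: a crude proxy for emotional intensity."""
    if len(timestamps) < 2:
        return 0.0
    span_hours = (max(timestamps) - min(timestamps)).total_seconds() / 3600
    return len(timestamps) / span_hours if span_hours else float("inf")

posts = [
    ("Horrific scenes, a corrupt regime at work", datetime(2022, 9, 1, 14, 0)),
    ("Horrific. Simply horrific.", datetime(2022, 9, 1, 14, 20)),
]
print(sum(sentiment_score(t) for t, _ in posts))          # -7.5
print(round(velocity_score([ts for _, ts in posts]), 1))  # 6.0
```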

Isolating a large sample of data from a popular platform like Facebook and ingesting it into a repository that can be queried across a spectrum of characteristics, including qualitative ones, would allow analysts to examine how known disinformation behaves within it. That comparison would yield a far more complete picture of the disinformation’s actual impact.
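
As a toy illustration of that comparison, assuming posts have already been labeled as coming from known disinformation accounts (the records and field names are made up):

```python
posts = [
    {"id": "p1", "known_disinfo": True,  "likes": 3,   "tone": "negative"},
    {"id": "p2", "known_disinfo": True,  "likes": 1,   "tone": "negative"},
    {"id": "p3", "known_disinfo": False, "likes": 240, "tone": "neutral"},
    {"id": "p4", "known_disinfo": False, "likes": 95,  "tone": "negative"},
]

def mean_likes(rows):
    """Average engagement across a set of post records."""
    rows = list(rows)
    return sum(r["likes"] for r in rows) / len(rows) if rows else 0.0

disinfo  = [p for p in posts if p["known_disinfo"]]
baseline = [p for p in posts if not p["known_disinfo"]]
# If planted content badly underperforms the organic baseline,
# that is evidence the campaign is not moving the dial.
print(mean_likes(disinfo), mean_likes(baseline))  # 2.0 167.5
```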

It is not at all clear that even the most heavily orchestrated disinformation campaigns have ever meaningfully moved the dial on their intended audience’s behavior. Given that the world’s largest and most sophisticated corporations spend hundreds of millions of dollars on social media campaigns to shift consumer behavior by even the tiniest of margins, one suspects that most, if not all, of the “covert campaigns” splashed across news headlines are much ado about nothing.

Tom Robertson’s writing on the intersection of cybersecurity, technology, and great power competition has appeared in The National Interest, Global Affairs, First Monday, and elsewhere.

Image: Reuters.