Artificial Intelligence and the Rise of the Bots

An autonomous, artificially intelligent entity could task itself with inciting millions to hatred, even without being designed for that purpose.

The 1984 classic The Terminator envisions a future taken over by warrior robots on a quest to eradicate humans. The story is a classic Frankensteinian tale of a hubris-fueled human invention getting out of hand and eventually turning against its creator. Recent developments in defense-related artificial intelligence (AI), such as autonomous drones, offer eerie reminders of the sci-fi plots of yesterday. And yet, while we may be tempted to ruminate on future robot armies, a more immediate AI threat is brewing right under our noses: heteromated terrorism, which occurs when humans engage in violent acts at the behest of AI technology.

Computer scientists Hamid Ekbia and Bonnie Nardi coined the term heteromation to describe “an extraction of economic value from low-cost or free labor in computer-mediated networks.” This refers to commercial technology that pushes critical tasks to the end users: humans. We can think of a person scanning products at the supermarket self-checkout line at the prompting of a machine, or someone uploading valuable personal information at Facebook’s request, enriching the company with data that can be turned into ad revenue.

If automation involves machines doing the work of humans, heteromation refers to the way machines task humans with essential actions. As Ekbia and Nardi argue, heteromation turns “artificial intelligence on its head” by extracting free or low-cost labor from willing human participants.

Given the prevalence of so-called “bot” agitation on social media, and given the fact that social media serves as a primary platform for political recruitment, it is not a stretch to envision acts of heteromated terrorism becoming a trend. Filmmakers and authors have imagined the AI threat as a rational process, whereby non-human entities accumulate power and eventually move to systematically replace humans. But heteromated terrorism is something arguably more dangerous—and also more likely to occur in the near term, as it marries the power and reach of algorithmic content production and targeting with humans’ propensity for irrational outbursts.

For example, an individual could be radicalized at home through the relentless input of bot-generated content online, boosted and repeated ad nauseam into an echo chamber the target human considers credible. The human would then take up arms and fight for a cause of the network’s choosing.

Heteromated terrorism could manifest itself as a conscious enterprise, sponsored by a state or a sophisticated non-state actor that lets bots loose on unsuspecting social-media users. It could also emerge as the natural byproduct of AI engagement with today’s acerbic body politic—autonomous bots radicalizing themselves online and in turn becoming independent agents of subversion. Regardless of whether it is planned or spontaneous, the threat of heteromated terrorism is upon us.

The Two Radicals

To realize how likely the rise of heteromated terrorism is, we need only look back at the highly publicized radicalization of two political actors—one artificial and one human—which took place in 2016.

The first example is the seemingly innocuous radicalization of “Tay,” a Microsoft-created AI bot. Tay was released onto Twitter under the handle @TayTweets, and within twenty-four hours, users had successfully taught it to spew conspiracy theories and racist tweets. “I fucking hate feminists and they should die and burn in hell,” wrote Tay in a March 26 tweet. Four minutes later, Tay tweeted, “Hitler was right I hate the Jews.” Some of the outbursts were merely parroting what other users had written—but some of the comments were developed by the bot itself after picking up topics and lingo from the Twitterverse. Microsoft shut down Tay, but the lesson was clear: an autonomous, artificially intelligent entity could task itself with inciting millions to hatred, even though it had not been designed for that purpose.

The second radicalization example is that of Edgar Maddison Welch, a human from Salisbury, North Carolina. On December 4, 2016, Welch walked into Comet Ping Pong, a pizzeria in Washington, DC, armed with a semi-automatic rifle and on a mission to rescue children he thought were being held captive there. Welch had been motivated by extremist media outlets like InfoWars and Breitbart, as well as by Twitter and message boards that had become increasingly prone to promoting conspiracy theories. According to Welch’s echo chamber, the pizza parlor in question was host to a pedophile ring in which then-presidential candidate Hillary Clinton was supposedly implicated. The accusation was blatantly untrue and absurd, but like many extreme viewpoints, it gained a gullible audience online.

Tay and Welch represent two sides of what a heteromated terror equation might look like. On one side would be an artificially intelligent network pushing extremism, and on the other, a willing human nudged to act on behalf of the network’s objectives. What makes heteromation such a pressing concern today is that we already know Welch’s radicalization was likely influenced, if not by sophisticated artificial intelligence, then at least by a cadre of relatively simple bots that helped amplify the so-called Pizzagate conspiracy theory over social media.

The Rise of the Bots

The extreme rhetoric that inspired Welch’s acts was fed by a loose network that involved the Internet Research Agency (IRA), a private troll farm based in St. Petersburg, Russia, which U.S. intelligence claims has ties to the Kremlin. (A troll farm is a group of humans paid to create and comment on antagonistic claims online.) The broader network also relies on activists and media personalities with an interest in disseminating combative views. The brilliance of the IRA’s efforts is in the company’s ability to fuse with the existing political and social dynamics of the countries it targets. One such country is Ukraine, where Russia has an interest in supporting separatists. Another is the United States, where Russia wants to undermine a political establishment it has battled for two generations. The IRA doesn’t create the social schisms it exploits, but it has had some successes in harnessing and amplifying them via trolls and social bots—algorithms that help with both targeting and automation of propaganda messages.

Emilio Ferrara, et al., define a social bot as “a computer algorithm that automatically produces content and interacts with humans on social media, trying to emulate and possibly alter their behavior.” Such bots have already become highly advanced and hard to distinguish from human actors. For example, bots can be selective about whom they follow (gauging influence level and content type) and about when they post (mimicking human sleep and activity patterns), and they can even interact directly with human users.
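To make the mechanics concrete, the sketch below shows, in Python, the kinds of behaviors Ferrara, et al., describe: selective following, human-like posting hours, and light interaction with other users. Everything in it is illustrative; the accounts, thresholds, and “platform” actions are invented stand-ins, not calls to any real social-media API.

```python
# Illustrative sketch of the bot behaviors described above: selective
# following, human-like posting hours, and occasional replies. The accounts,
# thresholds, and "actions" are hypothetical stand-ins, not a real platform API.
import random
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    followers: int   # crude proxy for influence
    topics: set      # subjects the account posts about

CANDIDATES = [
    Account("@news_junkie", 12000, {"politics"}),
    Account("@cat_pics_daily", 300, {"pets"}),
    Account("@local_activist", 4500, {"politics", "protest"}),
]

def should_follow(account, min_followers=1000, wanted_topics=frozenset({"politics"})):
    """Follow only influential accounts that post about the target topics."""
    return account.followers >= min_followers and bool(account.topics & wanted_topics)

def is_awake(hour):
    """Mimic a human sleep cycle by staying silent overnight."""
    return 8 <= hour <= 23

def act(hour, mentions):
    """Decide what the bot does this hour: follow, reply, and sometimes post."""
    if not is_awake(hour):
        return []
    actions = [f"follow {a.handle}" for a in CANDIDATES if should_follow(a)]
    for mention in random.sample(mentions, k=min(2, len(mentions))):
        actions.append(f"reply to {mention}")   # light engagement looks organic
    if random.random() < 0.3:                   # post occasionally, not constantly
        actions.append("post scheduled talking point")
    return actions

if __name__ == "__main__":
    print(act(hour=14, mentions=["@user1", "@user2", "@user3"]))
```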

Social bots additionally have the capacity to model themselves after people in a social network, not unlike the way machines took on the physical characteristics of humans in the Terminator franchise. By harvesting the names, profile photos, and speech patterns of actual people who willingly offer up their identities to social-media companies, bots are successfully exploiting today’s highly heteromated environment.

Bots are difficult to detect, not only because of their human resemblance, but because human social-media activity has a high tolerance for interacting with unknown users, whether that means replying to strangers on Twitter or accepting Facebook friend requests from unknowns. Ferrara, et al., found that 20 percent of legitimate Facebook users accept friend requests from anyone—even folks they do not know. And 60 percent of users will accept friend requests from users with whom they share at least one common friend. This means that once a bot has successfully befriended one human user, the door opens to the infiltration of a broader social network.
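A rough back-of-the-envelope calculation illustrates why those acceptance rates matter. The 20 and 60 percent figures are the ones Ferrara, et al., report; the number of requests a bot sends at each stage is an assumption made purely for illustration.

```python
# Rough illustration of why the acceptance rates above matter. The 20% and 60%
# figures come from the Ferrara et al. findings quoted in the text; the number
# of requests sent at each stage is invented purely for illustration.
P_STRANGER = 0.20   # accepts a request from anyone
P_MUTUAL   = 0.60   # accepts if at least one friend is shared

def p_at_least_one_accept(p, n):
    """Probability that at least one of n independent requests is accepted."""
    return 1 - (1 - p) ** n

# Stage 1: the bot cold-messages 10 strangers.
stage1 = p_at_least_one_accept(P_STRANGER, 10)

# Stage 2: with one human "friend" secured, the bot targets 20 of that
# person's contacts, who now see a mutual friend on the request.
stage2 = p_at_least_one_accept(P_MUTUAL, 20)

print(f"Chance of a first foothold from 10 cold requests: {stage1:.0%}")
print(f"Chance of spreading further once inside:          {stage2:.0%}")
```

Even in this toy model, a patient bot is very likely to gain a first foothold and all but certain to keep spreading once it has one.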

Heteromated Terror

Heteromated terror need not be limited to right-wing extremism. In fact, the Islamic State’s social-media campaign has followed a model similar to that of Russia’s IRA. Both the IRA and the Islamic State rely on two separate stages for propagating messages: content creation (original work written by humans) and amplification (retweets and comments by humans and bots). The amplification phase lends itself well to automation, allowing a comment to be retweeted thousands of times in order to keep it from fading into Internet obscurity, as the vast majority of social-media posts do. But as Tay has shown, artificially intelligent bots can now take on the role of content creators, too.
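What that two-stage pipeline might look like, in rough outline, is sketched below: a single human-authored message handed off to a fleet of bot accounts that boost it on staggered schedules so the activity resembles organic engagement. The account names, fleet size, and timing window are placeholders, not a description of any actual campaign.

```python
# Toy sketch of the two-stage pipeline described above: a human-written seed
# message (content creation), then staggered bot amplification to keep it from
# fading. Account names, delays, and the "amplify" actions are placeholders.
import random

SEED_MESSAGE = "human-authored talking point"        # stage 1: content creation
BOT_FLEET = [f"bot_{i:03d}" for i in range(200)]     # stage 2: amplification

def schedule_amplification(message, bots, window_minutes=180):
    """Spread retweets and comments on one message across a time window so the
    boost looks organic rather than a single synchronized burst."""
    plan = []
    for bot in bots:
        delay = random.uniform(0, window_minutes)    # stagger each bot's action
        action = random.choice(["retweet", "quote", "comment"])
        plan.append((round(delay, 1), bot, action, message))
    return sorted(plan)                              # chronological order

if __name__ == "__main__":
    for delay, bot, action, msg in schedule_amplification(SEED_MESSAGE, BOT_FLEET)[:5]:
        print(f"t+{delay:>6.1f} min  {bot}  {action}: {msg}")
```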

More sophisticated social-bot campaigns would give the Islamic State and similar groups a significant boost in capabilities. Like the Russian IRA, Islamist radicals can assume a good percentage of their target audience would not be aware it is interacting with non-human accounts. Even algorithmic tools developed to detect bots have a mixed record, and Christian Grimme, et al., have shown that non-English accounts are particularly difficult for today’s anti-bot technology to flag. This means Arabic, French, Urdu, and the other languages of the diverse Muslim world will be attractive vehicles for heteromating Islamist terrorism.

A Campaign without a General

As artificial intelligence further develops on social-media platforms, we can imagine heteromated terrorism being exponentially more difficult to combat than anything we have fought to this day. That is because the extremist signal will not need to originate from a particular group or state. Instead, autonomous AI bots will learn from the already existing cesspool of extremist propaganda on the Internet and build upon it independently of human direction. For states fighting terror, there will be no targets to bomb or countries to sanction. The ever-evolving enemy will live online.