Big Data and AI Can Defend Democracy—Or Destroy It


Data analysis is like any other tool; its impact on our lives depends upon its owners’ intentions.


Today’s world is full of sensors, and the higher your nation-state sits on the advanced-industrial food chain, the more likely you are to carry a sensor on your person every minute of every day (for many of us, even while asleep). That matters: the data those sensors collect can be stored, analyzed, and weaponized. It can be stolen. It can be subpoenaed. And although most of today’s data collectors are for-profit corporations, the potential for breakthroughs in areas such as medicine and the fight against global warming comes with dire risks. The collection, analysis, storage, and theft of information about you have lethal implications, both for you as an individual and for all of us, in the form of interstate war.

Data as Crude Oil?


In his 2018 book AI Superpowers, author and entrepreneur Kai-Fu Lee likened big data to crude oil and noted that, insofar as the analogy holds, the People’s Republic of China (PRC) would be the world’s data Saudi Arabia. The analogy is useful in that, like crude oil, data can be converted into many other valuable things. It can be used to train algorithms that run machines, make predictions, solve difficult problems, help cure diseases, yield water-resilient crops, design safer and more fuel-efficient cars, and so on. But data collected on humans can also be used to oppress, kill, and torture.

In other words, the political, economic, and social implications of data analysis depend directly on the aim of that analysis. That aim might be figuring out whether I need a new pair of shoes or shorts for the summer, or whether I’ll vote for a specific candidate.

In democratic states with relatively unregulated market economies, like the United States, we’ve moved into an era of what author Shoshana Zuboff calls “surveillance capitalism.” It begins with selling the idea that data analysis can predict what we will buy, and can therefore increase profits by taking the uncertainty out of the design, production, marketing, and sale of consumer goods. But Zuboff goes further: data analysis has reached the point where for-profit corporations can deploy algorithms not only to predict what we want but to shape our purchasing desires. And understand: while we may vaguely consent to the former, we have never been given the option of consenting to the latter.

The Anti-Democrats

It gets worse. In authoritarian states like Russia, the PRC, Iran, the Democratic People’s Republic of Korea (DPRK), and Venezuela, data analysis is primarily used to find and destroy political opposition. Here’s a bit of dialogue from Captain America: The Winter Soldier (Anthony and Joe Russo, 2014) that brilliantly captures the idea. The setting is a rooftop where two of our heroes, Steve Rogers and Natasha Romanoff, are interrogating Jasper Sitwell, a senior officer in the evil authoritarian HYDRA conspiracy, about how a lethal high-tech orbital weapon, the Insight helicarrier, will choose its targets:

Sitwell: Zola's algorithm is a program… for choosing Insight's targets!

Rogers: What targets?

Sitwell: You! A TV anchor in Cairo, the Undersecretary of Defense, a high school valedictorian in Iowa City. Bruce Banner, Stephen Strange, anyone who's a threat to HYDRA! Now, or in the future.

Rogers: The future? How could it know? [Sitwell laughs]


Sitwell: How could it not? The 21st century is a digital book. Zola taught HYDRA how to read it. [Rogers and Romanoff look at him in confusion] Your bank records, medical histories, voting patterns, e-mails, phone calls, your damn SAT scores. Zola's algorithm evaluates people's past to predict their future.

Rogers: And what then?

Sitwell: Then the Insight helicarriers scratch people off the list, a few million at a time.

If algorithms can be designed and deployed to predict whether you’ll prefer Coke or Pepsi, in other words, why couldn’t they be designed and deployed to predict whether you’ll oppose an attack on the U.S. Constitution? Few people know that what we might call “predictive policing” originated with Reinhard Tristan Eugen Heydrich, who, along with Heinrich Himmler, effectively founded the Third Reich’s secret police and the Order of the Death’s Head (SS). Heydrich’s rise within the Nazi Party in the 1930s, and later within the SS, was in fact due to his effectiveness in collecting and analyzing data to identify present and future opposition to Adolf Hitler and the Nazi Party. Most of those Heydrich so identified ended up executed by firing squad or sent to concentration camps to die more slowly.

The devastating implication is this: data analysis is like any other tool; its impact on our lives depends upon its owners’ intentions. It depends on the questions asked of it. If the intentions are for our benefit, and a lot of them are, then we want to support data collection and analysis, and the refinement and deployment of algorithms that can make living, loving, and working more secure, fair, fulfilling, and productive. Data analysis can help law enforcement catch serial killers and human traffickers and halt the sexual exploitation of children. But if the intentions shift to political oppression, as they already have in Russia, Turkey, Iran, the DPRK, Venezuela, and the PRC (where algorithms fed by the Party’s unrivaled data collection are said to be deployed to increase “social stability”), then the same data collected to assess your risk of prostate cancer, or to sell you a better-fitting pair of eyeglasses or jeans, can be redeployed to find, arrest, imprison, or execute you for nothing more than opposing politicians who seek to abolish democracy and the rule of law.

And this is already happening, even in democracies such as the United States. During the 2016 U.S. presidential campaign, Cambridge Analytica used harvested Facebook data to target potential Democratic voters whom its algorithms identified as persuadable not to vote. In 2019, as I considered joining the White House cybersecurity policy team in Washington, D.C., a colleague already on the team called to warn me: “Ivan, have you ever tweeted anything critical of the President?” My truthful answer, “yes” (honestly, I’ve tweeted criticisms of several U.S. presidents), drew a rueful response: regardless of my background and qualifications, I need not apply.

In June of this year, the potential for harm from big-data-fueled artificial intelligence (AI) became much worse, lethal even, because those same oceans of data held in trust by companies like Alphabet, Amazon, Twitter, and Meta can now be subpoenaed by law enforcement in states that have, since June 24, outlawed abortion. Did this person travel out of state for work, for pleasure, to visit family, or to visit a medical facility that provides abortions (among other services)? Data-analysis-supported AI is terrific at pattern-matching of this sort (it’s already in use for early detection of breast cancer, for example).
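To see how little sophistication this requires, here is a minimal, hypothetical sketch in Python. Everything in it is invented for illustration (the record layout, the facility list, the coordinates); it is no company’s actual code or data. The point is that a simple join between location pings and a list of points of interest is all a subpoena would need to make actionable:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical location record; real ad-tech telemetry carries far more
# detail (device IDs, dwell times, full movement trails).
@dataclass
class Ping:
    user_id: str
    timestamp: datetime
    lat: float
    lon: float

# Invented list of points of interest: (label, latitude, longitude).
CLINICS = [("clinic_a", 41.88, -87.63)]

def near(lat1, lon1, lat2, lon2, eps=0.01):
    """Crude proximity test (~1 km at mid-latitudes); a real system would
    use haversine distance plus a dwell-time threshold."""
    return abs(lat1 - lat2) < eps and abs(lon1 - lon2) < eps

def flag_out_of_state_visits(pings, home_bounds):
    """Return pings that fall outside a home-state bounding box AND land
    near a listed facility -- the kind of pattern-match described above."""
    lat_min, lat_max, lon_min, lon_max = home_bounds
    flagged = []
    for p in pings:
        in_state = lat_min <= p.lat <= lat_max and lon_min <= p.lon <= lon_max
        if not in_state and any(
            near(p.lat, p.lon, c_lat, c_lon) for _, c_lat, c_lon in CLINICS
        ):
            flagged.append(p)
    return flagged

# Toy data: one in-state ping, one out-of-state ping near the listed clinic.
pings = [
    Ping("u1", datetime(2022, 7, 1, 9, 0), 39.77, -86.16),
    Ping("u1", datetime(2022, 7, 2, 14, 0), 41.88, -87.63),
]
# Rough Indiana-like bounding box, for illustration only.
print(flag_out_of_state_visits(pings, (37.77, 41.76, -88.10, -84.78)))
```

Swap in a different facility list and bounding box, and the same dozen lines would flag visits to union halls, churches, or protest sites; once the data exist, the analytic step is trivial.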

Data Privacy, Data Security, and Interstate War?

This is why data privacy and data security are so critical, even as we continue to blithely surrender detailed information (far more detailed than anything available to Reinhard Heydrich) to private corporations that do their best, though not always successfully, to keep our private information safe from competitors and criminals.

I’ll close by connecting the dots between data privacy, AI, democracy, and interstate war. We’ve already seen that governments use big data, whether gathered on their own or extracted by forcing companies with employees inside their borders to surrender it. The more democratic the state, the more likely those data will be used to support security and prosperity by fighting crime, rooting out corruption, and so on. The less democratic the state, the more likely those same data will be deployed to harass, oppress, arrest, and even execute political opponents. Since 2016, Russia has effectively used U.S. social media to deepen Americans’ political divisions, to the point where the United States appears to be on the brink of civil war.

And this is where interstate war comes in. It’s well known that the more democratic states there are in a system of states (democratic defined simply as having mechanisms that enable citizens to have a say in their own domestic and foreign policies), the lower the likelihood of interstate war. But misuse of detailed personal data from multiple sources not only makes it possible for authoritarian governments to destroy an opposition, as happened in Hong Kong in 2019 and 2020, but enables conservative minorities within nominally democratic states to take control of the state: to shift a democratic state toward an authoritarian one. This is happening worldwide, resulting in a steady increase in the number of states sliding from democratic, to weakly democratic, to weakly authoritarian, to full-on dictatorship (this was Russia’s trajectory after 1991). That means interstate war, like Russia’s assault on Ukraine in February, is bound to become more common.

There is a solution to this problem, and it directly involves, yes, governance. When markets can’t constrain themselves, governments need to step in and regulate the collection, use, transit, and storage of data. Europe is leading the way, though its efforts continue to face serious headwinds. But regulating the collection and use of big data, and the algorithms they support, is the only way to let the companies that collect and control our data profit from innovations beneficial to humankind while preventing present-day Heydrichs and Himmlers (and they are out there) from using our data to oppress and persecute those whom their authoritarian governments consider enemies of the state.