AI Favors Autocracy, But Democracies Can Still Fight Back

AI will prove to be a powerful tool for autocrats, but that doesn't mean democracies can't gain the upper hand.

by Ciel Qi

As Ben Buchanan and Andrew Imbrie note in their recent book, “AI’s [artificial intelligence’s] new capabilities are both marvels and distractions.” That dichotomy helps explain why the question of whether advances in AI will favor autocracies or democracies has moved to the forefront of debates over technology and global power. On the one hand, AI has the potential to tackle some of the world’s most challenging social problems, such as those related to healthcare, the environment, and crisis response, leading some to believe that democracies will wield it to create a future for human good. On the other hand, some fear that AI-enabled surveillance, information campaigns, and cyber operations will empower existing tyrants and produce new ones, leading to a future in which autocracies thrive and democracies struggle.

By examining how advances in AI capabilities in the near future could benefit autocracies, democracies, or both, I conclude that AI is likely to favor autocracies in the near term, but only under one necessary condition: that democracies are negligent in their response to autocracies’ destructive use of AI.

How AI Benefits Autocracies

The most prominent way AI benefits autocracies is by providing them with powerful tools to strengthen social control. For example, AI-enabled surveillance systems are already in operation in China. Footage captured by surveillance cameras there is processed by AI, allowing the government to identify persons of interest and even people of specific ethnicities based on their facial features. Inside the western province of Xinjiang, AI surveillance is used to monitor the local Uyghur ethnic group. In addition to surveillance cameras linked to the local police, some Uyghurs are required to install surveillance applications on their phones. Such AI-empowered applications can monitor what the police believe to be signs indicating a Uyghur is an extremist—for example, Quran verses and Arabic script in chat logs, images, and memes. More recently, Chinese police have tested AI emotion-detection software, where the AI system “is trained to detect and analyze even minute changes in facial expressions and skin pores.”

China doesn’t limit its use of AI monitoring to specific populations. It engages in extensive AI-enabled surveillance of all 1.4 billion of its citizens, on a scale that might seem shocking to residents of democratic countries. Surveillance cameras monitor each citizen’s physical actions outdoors, while the various applications they use track their digital activities. When these AI-powered tools detect behaviors such as bad driving, buying too many video games, or posting “fake news,” the person’s social credit score decreases, which can mean being banned from purchasing plane tickets or accessing fast internet. As AI capabilities advance, these surveillance systems will likely grow more sophisticated and enable autocracies to consolidate even firmer control over their people.

Some doubt that AI clearly favors autocracies in the case of surveillance, arguing that these systems of control will be too costly to run; indeed, not all autocracies can afford a comprehensive AI-enabled surveillance system like China’s. However, China is not just a user but a major supplier of AI surveillance technology, and it frequently pitches its products with soft loans that allow less affluent autocracies and semi-autocracies, including Laos, Mongolia, Uganda, and Uzbekistan, to access and benefit from the technology. At a certain point, the mere existence of such AI capabilities can trigger self-censorship, which serves autocracies well: if people believe their physical and digital activities are constantly monitored and analyzed by AI, and that they can be punished for certain behaviors, they will mimic the conduct of those deemed “responsible” by their government to avoid punishment.

Surveillance is not the only advantage. AI also benefits autocracies by enabling greater manipulation of their domestic information environments and by letting them direct disinformation capabilities outward to undermine democracies.

There are a few ways AI can supercharge autocracies’ ability to conduct disinformation campaigns. For example, because machine learning algorithms excel at processing massive amounts of data and detecting patterns that often go unobserved by humans, AI could map and segment target audiences far more efficiently. Moreover, techniques such as natural language processing and generation and generative adversarial networks (known as GANs) could manufacture viral disinformation at scale, empower digital impersonation, and “enable social bots to mimic human online behavior and to troll humans with precisely tailored messages” when paired with human operators. Additionally, when autocracies target their domestic audiences, AI-powered censorship could instantaneously block any content deemed unfavorable, leaving no space for dissidents to speak up.
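To make the audience-segmentation step concrete, here is a minimal sketch of how off-the-shelf clustering can group users by the topics they post about. Everything in it is hypothetical: the posts are invented, the cluster count is arbitrary, and a real operation would draw on far richer behavioral data than raw text.

```python
# Minimal sketch of ML-based audience segmentation: represent posts as
# TF-IDF vectors, then cluster them so that each cluster approximates a
# "segment" sharing the same concerns. All posts here are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

posts = [
    "Gas prices are out of control again",
    "These gas prices keep climbing every week",
    "The new downtown stadium looks amazing",
    "Opening night at the new stadium was great",
    "No one should trust how the votes were counted",
    "I don't trust the votes in this election",
]

# Vectorize the text and group similar posts into three segments.
vectors = TfidfVectorizer(stop_words="english").fit_transform(posts)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

for label, post in sorted(zip(labels, posts)):
    print(label, post)  # posts grouped by shared vocabulary and topic
```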

Some may argue that if AI can censor content that upsets autocratic leaders, democracies could use the same techniques to detect and filter disinformation and misinformation circulating on the internet. This is true, and democracies are already doing so. In 2019, U.S. and European AI startups were already exploring how to use natural language processing and human intelligence to identify and block online disinformation. More recently, big technology companies, including Facebook, have employed a combination of AI and human fact-checkers to remove Covid-19-related misinformation.
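As a rough illustration of how such AI-assisted filtering can work, the sketch below trains a tiny text classifier whose scores are meant to route suspect posts to human reviewers, mirroring the AI-plus-fact-checker pairing described above. The labeled examples and the threshold are invented for illustration; production systems use far larger datasets and far more capable models.

```python
# Minimal sketch of the "AI plus human fact-checker" workflow: a simple
# classifier scores posts, and high-scoring posts go to human review.
# The tiny labeled dataset below is invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_posts = [
    "Vaccines contain microchips for government tracking",   # misinformation
    "Drinking bleach cures the virus",                       # misinformation
    "Health officials recommend washing your hands often",   # reliable
    "Clinical trials found the vaccine safe and effective",  # reliable
]
train_labels = [1, 1, 0, 0]  # 1 = likely misinformation

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_posts, train_labels)

new_post = "This household chemical cures the virus overnight"
score = model.predict_proba([new_post])[0][1]
print(f"misinformation score: {score:.2f}")
# A score above some threshold (say, 0.5) would route the post to a
# human fact-checker rather than trigger automatic deletion.
```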

Nevertheless, disinformation and misinformation remain pervasive on social media. The reason is simple, and China expert Yuen Yuen Ang has described it well: it is generally easier to control and divide than it is to connect and build. Spreading disinformation or misinformation takes only a single click by an individual, while countering it requires a whole-of-society effort in which “governments, technology platforms, AI researchers, the media, and individual information consumers each bear responsibility.” Moreover, as AI-empowered disinformation campaigns become widespread, combating them could grow technically harder, since increasingly sophisticated language models and deepfakes will make distinguishing real from fake an even more challenging task.

How AI Benefits Democracies (and Autocracies)

The section above showcases how AI’s new capabilities can be “distractions” and how autocracies can take advantage of them to achieve their objectives. Harkening back to the quote from Buchanan and Imbrie, though, AI’s new capabilities can also be “marvels” that democracies could utilize to benefit humankind. As mentioned previously, AI could contribute to solving some of the world’s most challenging social problems. In 2018, McKinsey identified AI use cases across ten domains of social good, including “equality and inclusion” and “security and justice.” Nevertheless, its report encouraged the cautious use of AI in these domains, laying out associated risks such as privacy breaches and the “black box” problem (that is, humans’ inability to fully know how AI makes its decisions). This suggests the way democracies benefit from AI may not be as straightforward as the way autocracies do: prudent and cautious application is necessary for AI to serve the common good.

Importantly, these AI use cases are not exclusive to democracies. In domains such as healthcare, environmental challenges, and disaster response, autocracies could benefit as well if they implement the technology the right way. However, because autocracies operate through centralized power, their leaders tend to focus on retaining and strengthening their own positions rather than serving the good of their people. Consequently, autocracies are generally less interested in using AI to benefit the broader social good.

Another potential benefit of AI, one it may already be delivering, is the critical role it can play in countering cyberattacks. Detecting malicious code upon its arrival on a network is one of the fundamental challenges of cyber defense, and because AI can examine massive amounts of data that most non-AI techniques cannot, it could substantially improve the speed of detection. AI could also be used to tackle another challenge: attribution. For example, unsupervised clustering algorithms could group attacks with similar characteristics, helping to identify who launched them. Additionally, natural language processing could enable linguistic attribution by identifying telltale grammatical errors or other linguistic habits.
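As a rough sketch of how clustering could support attribution, the example below groups incidents by a few observable features so that analysts can link similar attacks to a common actor. The features, values, and parameters are entirely invented; real attribution pipelines combine many more signals, and human analysts make the final call.

```python
# Minimal sketch of clustering-based attribution: incidents with similar
# observable characteristics are grouped, suggesting a common actor.
# All feature values below are invented for illustration.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import DBSCAN

# One row per incident: [hour of day (UTC), payload size (KB),
# reused code fragments, shared command-and-control IPs]
incidents = np.array([
    [3, 120, 14, 2],
    [4, 118, 15, 2],   # similar timing and tooling: likely the same actor
    [14, 880, 1, 0],
    [15, 902, 2, 0],   # a second, distinct pattern
    [9, 45, 0, 1],     # an outlier DBSCAN leaves unassigned
])

# Standardize the features, then cluster; -1 marks unattributed incidents.
features = StandardScaler().fit_transform(incidents)
labels = DBSCAN(eps=1.0, min_samples=2).fit_predict(features)
print(labels)  # [0 0 1 1 -1]: two likely actors plus one unattributed attack
```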

While these AI-enabled cyber capabilities could deter cyberattacks backed by autocracies, benefiting democracies strategically, autocracies could also benefit from most of them. For example, while autocracies could use AI to detect malicious code in their own networks, they could use similar techniques to detect and exploit vulnerabilities in others’ computer systems. And with AI-enabled attribution capabilities, autocracies could identify dissidents launching cyberattacks against government websites, stripping dissidents of a powerful tool. This does not mean democracies will never use AI to conduct these destructive activities. However, democratic leaders’ power is constrained by their populations, so they tend to be more restrained by ethics and norms; and because decisions in democracies are not made by one person, and controversial decisions are more likely to meet opposition, democratic governments are less prone to use AI in destructive ways.

The Necessary Condition

The analysis so far seems to suggest advances in AI capabilities are likely to favor autocracies more than democracies. However, there is one necessary condition for this to be the case: that democracies are negligent in their response to autocracies’ destructive use of AI.

Some argue that democracies should lead the effort to establish and commit to AI norms. As the OECD AI Principles report observed, such an effort is beneficial because it aligns with the goals of the organization’s participants and encourages countries outside the OECD to follow suit. However, norms alone will hardly prevent autocracies from using AI for destructive purposes; an autocratic country could simply refuse to commit to any of them. Some recognize this problem and advocate a Non-Proliferation Treaty-like agreement to ban certain uses of AI. But this approach has flaws too. Because verifying the development and use of certain AI capabilities is so difficult, autocratic countries would be highly unlikely to faithfully abide by such a treaty, even if they signed it.