The Threat of “AI Safety” to American AI Leadership

Instead of harnessing the positive potential of AI, a new regime of rules and regulations mandated in the name of “AI Safety” actively threatens the technology’s promise. 


The story of America is one of scientific and technological achievement. American innovation has improved the quality of life for billions worldwide, fostering a new era of economic prosperity. On the backs of American ambition, humanity has left the Earth’s atmosphere, walked on the Moon, and will soon travel to Mars. Today’s digital world relies on the Internet, itself a product of American innovation and leadership. Where there is progress, American technology is sure to be close by.

American innovation has once again delivered a “new” technology, Artificial Intelligence (AI), that has captured the world’s attention. Cutting-edge AI systems, built on American intellectual property, capital, infrastructure, and science, have demonstrated to the world AI’s positive transformative potential. What was once a fringe area of academic inquiry has become a central topic of global conversation.


Unfortunately, instead of harnessing AI’s positive potential, a new regime of rules and regulations, developed in response to a small, influential, and well-financed lobbying effort operating in the name of “AI Safety,” is actively preventing these benefits from materializing.

This is not hyperbole. Leading outlets have investigated how “elite schools like Stanford became fixated on the AI apocalypse,” covered how a “billionaire-backed network of AI advisers took over Washington,” and discussed how the United Kingdom’s AI ambitions were being shaped by “Silicon Valley doomers.” Despite reporting that clearly outlines this group’s outsized influence on AI policy, decision-makers continue to embrace its ideology.

This influence campaign is now bearing fruit. Regulatory initiatives shaped by the “AI Safety” lobby and its shared fear of AI are coming into effect. These include the creation of new AI Safety Institutes in the United States and the United Kingdom, as well as the recently issued Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which was influenced by an “AI Safety”-aligned think tank. These and other linked regulatory efforts aim to centralize control over the development and availability of AI-based systems, in the belief that doing so will deliver greater safety. This is a mistake.

As policymakers are bombarded with stories about AI’s potential to disrupt our democracy and electoral processes, upend the labor market, or pose an “existential” threat to our world, it is easy to understand why the AI Safety argument finds support. Yet while these fears dominate mainstream discussion, they are rarely grounded in reality; the opposite is usually the case.

Meta’s president of global affairs, Nick Clegg, recently stated that not only does AI not appear to be significantly undermining democratic processes, but it is also helping technology companies defend free and fair elections. This finding is in line with academic scholarship concluding that many concerns over AI’s ability to create misinformation are “overblown.”

While there are fears that AI may disrupt the labor market and automate jobs away, these, too, do not appear to be supported by empirical evidence. Instead, AI seems to have a positive and empowering effect, especially for workers at the lowest skill levels.

For the AI Safety community, the primary concern is the supposed existential risk posed by AI, a position maintained despite no evidence to date supporting these fears. Indeed, many leading AI scientists, such as Yann LeCun, have dismissed this risk as “ridiculous.”

Though not an existential risk per se, potential national security concerns arising from AI developments, such as aiding the creation of new bioweapons, have also drawn attention. Concerning as they sound, much of this discussion ignores research showing these risks to be unlikely.

Though its fears are supported by scant evidence, the AI Safety lobbying effort has been remarkably successful, achieving initial wins in slowing and limiting the development and diffusion of AI, to the detriment of the American people. Perhaps even more worrying is the risk posed to American technological leadership and, therefore, national security. As American AI innovation is stifled, our adversaries gain new opportunities to challenge or surpass our domestic capabilities. This must not be allowed to happen.


As AI continues to play a key role in defining our world and how it operates, it is essential to confront the flawed and dangerous arguments of the AI Safety lobby now, before it is too late. We must ensure that the foundations of American AI innovation support its ability to create value and prosperity for American society.

Dr. Keegan McBride is a lecturer in AI, Government, and Policy at the Oxford Internet Institute, University of Oxford. His research explores how new and emerging disruptive technologies are transforming our understanding of the state, government, and power.

Image: Shutterstock.