How to Strengthen America’s Artificial Intelligence Innovation
The United States' Artificial Intelligence Strategic Plan should focus on enabling a wide range of actors to play a role in strengthening American AI innovation.
Rapidly developing artificial intelligence (AI) technology is becoming increasingly critical for innovation and economic growth. To secure American leadership and competitiveness in this emerging field, policymakers should create an innovation-friendly environment for AI research. To do so, federal authorities should identify ways to engage the private sector and research institutions.
The National AI Research and Development (R&D) Strategic Plan, which will soon be updated by the Office of Science and Technology Policy (OSTP) and the National Science and Technology Council (NSTC), presents such an opportunity. However, the AI Strategic Plan needs several updates to allow the private sector and academic institutions to become more involved in developing AI technologies.
First, the OSTP should propose the creation of a federal AI regulatory sandbox that would allow companies and research institutions to test innovative AI systems under regulatory supervision for a limited period. An AI sandbox would not only benefit consumers and participating companies; it would also enable regulators to gain first-hand insight into emerging AI systems and help them craft market-friendly regulatory frameworks and technical standards. Regulators could also design sandbox programs that target specific issues the AI Strategic Plan identifies as priority areas in need of further research, such as human-machine interaction and probabilistic reasoning.
Second, the updated AI strategy should outline concrete steps to publish high-quality data sets drawn from the vast amount of non-sensitive and non-personally identifiable data that the federal government possesses. AI developers need high-quality data sets on which to train AI systems, but lack of access to such data remains a significant obstacle to developing novel AI technologies, especially for startups and businesses without the resources of “big tech” companies. The costs of creating, cleaning, and preparing such data sets are prohibitive for many businesses and academic institutions. For example, AlphaGo, a program developed by Google subsidiary DeepMind, made headlines in March 2016 when it defeated a world champion player of Go, an ancient Chinese strategy game; more than $25 million was reportedly spent on hardware alone to train the system.
Recognizing this challenge, the AI Strategic Plan recommended developing shared public data sets, but progress in this area appears to be slow. Under the Privacy Act of 1974, the U.S. government does not maintain a central data repository, an important safeguard given the privacy and cybersecurity risks that a repository of sensitive information would pose. However, different U.S. agencies have created a wide range of non-personally identifiable and non-sensitive data sets intended for public use. Two notable examples are the National Oceanic and Atmospheric Administration’s climate data and NASA’s non-confidential space-related data. Making such data readily available to the public can promote AI innovation in weather forecasting, transportation, astronomy, and other underexplored fields.
Therefore, the AI strategy should propose a framework that enables the OSTP and the NSTC to work with federal agencies to ensure that non-sensitive and non-personally identifiable data intended for public use are made available in formats suitable for AI research by the private sector and research institutions. To that end, the OSTP and the NSTC could use the federal government’s existing FedRAMP classifications of different data types to determine which data should be included in such data sets.
Finally, the AI Strategic Plan would benefit from a closer examination of other countries’ AI R&D strategies. While policymakers should exercise caution in making international comparisons, awareness of broader trends can help the United States build on other countries’ successes and avoid their regulatory mistakes. For example, the British and French governments recently spearheaded initiatives to promote high-level interdisciplinary AI research. The Chinese government has launched similar initiatives to encourage academic research at the intersection of artificial intelligence, economics, psychology, and other disciplines. Studying and evaluating these approaches could give American policymakers insight into which existing R&D resources should be devoted to interdisciplinary AI projects.
To maximize the benefit of this comparative approach, the AI Strategic Plan should propose mechanisms for annual reviews of the global AI research and regulatory landscape, along with evaluations of other countries’ successes and failures.
Ultimately, due to AI’s general-purpose nature and its diffusion across the economy, the AI Strategic Plan should focus on enabling a wide range of actors, from startups to academic and financial institutions, to play a role in strengthening American AI innovation. An innovation-friendly research environment and an adaptable, light-touch regulatory approach are vital to secure America’s global economic competitiveness and technological innovation in artificial intelligence.
Ryan Nabil is a Research Fellow at the Competitive Enterprise Institute in Washington, DC.
Image: Flickr/U.S. Air Force.