Since its founding, the United States has taken an interesting approach to new technologies and challenges: once they reach a certain critical mass, Americans create an agency or authority to deal with them exclusively, covering not only regulation and impact analysis but also the needs and requirements of the U.S. government and thus the taxpayers.
For example, the Environmental Protection Agency (EPA) was founded in 1970, one of the first of its kind in the world, after concerns about the impact of human activity on the environment first emerged in the 1950s. The corresponding German agency, by comparison, was not created until 1986. In addition to the roughly two dozen cabinet-level departments, there are about four dozen agencies and hundreds of smaller offices. While agencies in other countries are often subject to major changes and are merged with or separated from others – as the history of the Austrian Ministry of the Environment shows, for example – in the United States they prove remarkably durable. Once established, they remain committed to their tasks and provide continuity. This is not always an advantage, as was evident at the CDC during the pandemic, when the agency failed miserably in the United States at the one task for which it was created: disease control. The same holds for the Bureau of Alcohol, Tobacco, Firearms and Explosives, whose operations and scope have been constrained for years by the U.S. Congress and industry lobbying.
US AI Initiative
Since January 2021, the U.S. has had a new body that deals exclusively with artificial intelligence. The National Artificial Intelligence Initiative was created by an act of the U.S. Congress in 2020 and is tasked with implementing the American AI strategy. Like China, the U.S. sees AI as a technology of national importance, one in which the state must become involved in order to secure the prosperity of future generations.
Here, the approach differs enormously from that of Europe, where the basic outlines of an initial AI strategy also exist. Consider, for example, the mission statement of the NAII, which reads as follows:
providing for a coordinated program across the entire Federal government to accelerate AI research and application for the Nation’s economic prosperity and national security. The mission of the National AI Initiative is to ensure continued U.S. leadership in AI research and development, lead the world in the development and use of trustworthy AI in the public and private sectors, and prepare the present and future U.S. workforce for the integration of AI systems across all sectors of the economy and society.
Innovation, trustworthy AI, education and training, infrastructure, applications, and international cooperation are defined as the strategic pillars of the initiative.
EU Digital Strategy
If you compare these claims and goals with those the EU has presented in draft form, they hardly differ at first glance. According to the European digital strategy, AI should be given room in the EU to develop and to benefit people and the economy.
However, the first concrete proposals of the EU Commission already paint a clearly different picture. The first fully elaborated proposal addresses only trust in AI and the safety and risks of the technology, as the European proposal for a legal framework makes clear:
The Commission aims to address the risks generated by specific uses of AI through a set of complementary, proportionate and flexible rules. These rules will also provide Europe with a leading role in setting the global gold standard. This framework gives AI developers, deployers and users the clarity they need by intervening only in those cases that existing national and EU legislations do not cover. The legal framework for AI proposes a clear, easy to understand approach, based on four different levels of risk: unacceptable risk, high risk, limited risk, and minimal risk.
Not a single word refers to opportunities and possibilities; the legal framework is defined entirely in terms of risk.
While the U.S. wants to enable and empower the development and application of AI by all available means, for the benefit of people and the economy, and to retain its leading role, the EU wants to rein in AI first while setting the “global gold standard” of AI regulation. Another key component of the U.S. AI initiative is the participation of numerous federal authorities and agencies. AI is explicitly seen as a technology to be used within these federal bodies, which sends a very clear signal that every authority and every agency should think about the use of AI in its own environment. The EU does not waste a word on this in its own digital strategy.
Two Speeds
Thus, several fundamental differences can be identified between the U.S. and European approaches, and they will result in completely different dynamics: while the U.S. will pick up speed once again, European countries will put up more obstacles.
| USA | Europe |
| --- | --- |
| Enabling opportunities | Risk avoidance and hazard minimization |
| Own authority | A component of the digital strategy |
| Technology for use by state agencies | Technology to be regulated |