First it was an oil pipeline, then a large meat producer that was paralyzed by cyberattacks. Not a week goes by without a hospital here, a shipping company there or, most recently, Colonial Pipeline, whose pipelines carry 45 percent of the fuel supply for the American East Coast, and the meat producer JBS having to shut down operations for days as a result of ransomware attacks. The modus operandi is always the same: the hackers get into the companies’ servers through security holes, take control of them and encrypt the data. Then they demand a ransom, without which the victims cannot undo the encryption.
Back in 2012, Leon E. Panetta, then U.S. Secretary of Defense, warned of a so-called cyber Pearl Harbor. He was referring to the most devastating surprise attack on U.S. soil up to that point, carried out without warning by a Japanese aircraft carrier group in 1941 against the U.S. Navy stationed at Pearl Harbor in Hawaii. The attack cost the lives of more than 2,400 people and led to the U.S. declaration of war on Japan the following day.
But now Panetta foresaw surprise digital attacks on critical infrastructure, where the question was not if they would come, but when. Critics at the time accused him of exaggerated fear-mongering, but recent years have shown that cyberattacks are increasing in both frequency and intensity. The New York Times points out that there is now one such ransomware attack every eight minutes in the U.S., and the attacks are getting bolder. Ukrainian power plants have already been taken over and shut down several times by Russian hacker groups, and the attack on the Colonial Pipeline, with the ensuing chaos as long lines formed outside gas stations, offers a glimpse of the potential damage if far more sensitive infrastructure is attacked. Attacks on traffic light systems, air traffic control, or nuclear power plants can become direct and serious physical threats.
It has not yet come to that, but the events of recent weeks have prompted the U.S. government to focus attention specifically on these forms of criminality and potentially covert warfare. Nine years after his speech, Panetta asks what it would take for such cyberattacks to be taken truly seriously. Does it, as with Pearl Harbor or the Sept. 11, 2001, attacks, have to cost thousands of lives before governments and corporations respond? The view is slowly gaining ground that cyberattacks should be treated as terrorism or as a declaration of war.
But all of this seems incomparable in its impact to the damage caused by another technology that is experiencing its first flowering: artificial intelligence.
AI Pearl Harbor
The quality of an AI threat can go far beyond that of pure cyberattacks. Today, cyberattacks are carried out by humans, who run automated scripts to find security vulnerabilities, for example, or use so-called social engineering to elicit access credentials from a company’s employees.
AI differs in that it can act as an agent in its own right and no longer necessarily on behalf of humans. In the 2017 film The Fate of the Furious, the villain played by Charlize Theron takes control of cars that can drive autonomously and sics them on the heroes of the story.
AI does not even need to gain control over self-driving cars or, as the Oxford philosopher Nick Bostrom described in his book Superintelligence, over a paper clip factory. All that is needed are relatively primitive algorithms that use targeted news selection on Facebook to radicalize people and polarize countries, trade stocks in ways that trigger a flash crash and stock market panic, or, like Stuxnet, systematically paralyze centrifuges for uranium enrichment in a country classified as hostile. And the next problem is already built in: once software programs like these are released or stolen in a hack, as happened to the NSA, all manner of actors suddenly have an arsenal of malware in their hands. What can cripple uranium enrichment in one country can do the same to ventilators in domestic hospitals.
Thanks to machine learning, artificial intelligence can continuously improve. While the focus today is still on supervised learning based on data curated by humans, there are already neural networks that no longer require supervision (unsupervised learning) and can obtain data themselves by moving through our world in physical and digital form. Autonomous cars, for example, continuously generate new training data for themselves simply by driving.
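The difference between these two modes of learning can be sketched in a few lines. The tiny threshold learner and the one-dimensional k-means routine below are illustrative assumptions chosen for brevity, not methods named in the text: the first learns a decision boundary from human-provided labels, the second has to discover the same structure from raw values alone.

```python
# Supervised learning: a human curator has labeled each example.
labeled = [(1.0, "low"), (1.2, "low"), (0.8, "low"),
           (4.0, "high"), (4.2, "high"), (3.9, "high")]

def fit_threshold(data):
    """Learn a decision boundary from labeled examples."""
    lows = [x for x, y in data if y == "low"]
    highs = [x for x, y in data if y == "high"]
    return (max(lows) + min(highs)) / 2  # midpoint between the two classes

threshold = fit_threshold(labeled)

# Unsupervised learning: only the raw values are available;
# any structure must be discovered by the algorithm itself.
raw = [x for x, _ in labeled]

def two_means(xs, steps=10):
    """Tiny 1-D k-means: find two cluster centers without labels."""
    a, b = min(xs), max(xs)  # initialize the centers at the extremes
    for _ in range(steps):
        near_a = [x for x in xs if abs(x - a) <= abs(x - b)]
        near_b = [x for x in xs if abs(x - a) > abs(x - b)]
        a = sum(near_a) / len(near_a)  # move each center to its cluster mean
        b = sum(near_b) / len(near_b)
    return a, b

centers = two_means(raw)
```

Both routines end up separating the same two groups; the difference is only whether the separation was taught (the labels) or found (the clustering). Real unsupervised systems work on vastly larger, self-collected data, but the principle is the same.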
While the thought models and goals of today’s AI are still predetermined by humans, efforts are already underway to develop models that allow an AI to recognize causality and context beyond pure pattern recognition, in order to build its own frameworks of thought and set its own goals based on them.
And then it is only a matter of time before such an AI escapes a laboratory, intentionally or unintentionally, and becomes independent, like a virus that can no longer be stopped. Such an AI, which trains itself and not only adapts to new situations within the same thought model but can develop alternatives and react to changes with incredible speed, would permanently elude our control.
What could start innocuously, say by giving an AI the task of learning language by collecting and scanning publicly available documents, could lead it to penetrate protected databases to accomplish its task. Any attempt to stop it could cause it to disable the disablers. Should it realize that it lacks certain language variants, it could attempt to generate them by posting human-sounding text snippets on social media, enticing users into responses that fill the gaps it has identified. The users themselves would not even be aware of this.
How do we prepare for this?
As with Cyber Pearl Harbor, the question with AI Pearl Harbor is not if it can happen, but when. And how can we prepare for it and prevent or minimize damage?
This starts with increased awareness that these threat scenarios already exist and will occur, and that they are being prepared and carried out in secret. It is not tanks and troops being moved, or bank robbers coming in through the front door; these attacks flow along in the data streams and nest on servers, where they do their damaging work unnoticed.
While the cyberattacks that paralyze meat factories or oil pipelines still resemble bank robbers shooting and shouting their way into the bank foyer, clumsy, attention-seeking novices, intelligent attacks happen quietly. They silently take control of servers and transfer small amounts of money to recipient accounts without anyone noticing. Stuxnet did not attract attention by paralyzing all centrifuges from the very beginning. The virus behaved inconspicuously, making the gradual failure of individual centrifuges look like operator or maintenance errors. By the time operators suspected a hostile cyberattack, months had passed and the enrichment program had been set back by years.
For countries and international agencies, it is not enough to simply propose ethical guidelines for AI and hope that everyone will follow them. If we can learn anything from the cyberattacks and botnets, it is that countries and cultures with different ideas of morality and ethics will use this technology to their ends just as we do. America’s Stuxnet is Russia’s Facebook botnet. It will be no different with AI. China, Russia, the U.S. and even France have recognized AI as a key technology, also and especially in all military areas.
But AI knowledge at the institutions still leaves much to be desired. The relevant agencies are still struggling with digitization themselves, as the coronavirus pandemic made plain. The tank that appears at the national border is taken more seriously than the hackers or the AI spreading through the servers of ministries, companies and citizens. A separate agency or authority urgently needs to be created, one on the same level as the Ministry of Defense. The greater threat now comes through the Ethernet, not over the border fence.
It will only be taken seriously, and I need not be a great prophet here, when such an attack by an AI kills thousands of people or steals hundreds of billions of euros from the state. Only then will even the last skeptic realize that immaterial things can cause very material damage.
An AI Pearl Harbor is inevitable not because we cannot prepare for and defend against it, but because, as always, we will not take it seriously until it happens.