Opinion | A nuclear war started by AI sounds like science fiction. It isn’t

Humans have perhaps five to 10 years before algorithms and plutonium could reduce us to skeletons and skulls

Major powers are already planning to enhance missiles with AI to locate and instantly destroy moving targets, shifting the kill decision from humans to machines. Photo: Shutterstock
We are ignoring a spectre on the horizon: a global nuclear war triggered by artificial intelligence (AI). UN Secretary-General Antonio Guterres has warned of it. But so far, nuclear-weapons states have avoided talks on this cataclysmic threat.

They argue that there is an informal consensus among the five biggest nuclear powers on the “human in the loop” principle. None of the five say they deploy AI in their nuclear-launch command systems. This is true but misleading.

They use AI for threat detection and target selection. AI-powered systems analyse vast amounts of data from sensors, satellites and radars in real time, assess incoming missile attacks and recommend options for response. Human operators then cross-check the threat against different sources and decide whether to intercept the enemy missiles or launch retaliatory attacks. Currently, the response time available to human operators is 10 to 15 minutes. By 2030, it will shrink to between five and seven minutes. Even though human decision-makers will make the final call, they will be swayed by the AI’s predictive analytics and prescriptions.

AI may be the driving force behind missile-launch decisions as early as the 2030s. Photo: Reuters

The problem is that AI is prone to errors. Threat-detection algorithms can indicate a missile attack where none exists. This could be due to a computer malfunction, a cyber intrusion or environmental factors that obscure the signals. Unless human operators can confirm the false alarm from other sources within two to three minutes, they may activate retaliatory strikes. The use of AI in many civilian functions, such as crime prediction, facial recognition and cancer prognosis, is known to carry an error margin of 10 per cent. In nuclear early-warning systems, it could be around 5 per cent. As the precision of image-recognition algorithms improves over the next decade, this margin of error may decline to 1-2 per cent. But even a 1 per cent error margin could initiate a global nuclear war.

The risk will increase in the next two to three years as new agentic malware emerges, capable of worming its way past threat-detection systems. This malware will adapt to avoid detection, autonomously identify targets and automatically compromise them.

There were several close calls during the Cold War. In 1983, a Soviet satellite mistakenly detected five missiles launched by the United States. Stanislav Petrov, an officer at the Serpukhov-15 command centre, concluded that it was a false alarm and did not alert his superiors, who could have launched a counter-attack. In 1995, the Olenegorsk radar station detected a missile attack off Norway’s coast. Russia’s strategic forces were placed on high alert and then-president Boris Yeltsin was handed the nuclear briefcase. He suspected it was a mistake and did not press the button. The missile turned out to be a scientific rocket. If AI had been used to determine the response in either situation, the outcome could have been disastrous.
The launch of what North Korean state media said was a new hypersonic missile system in January. Photo: KCNA via KNS/AFP
Currently, hypersonic missiles use conventional automation rather than AI. They can travel at speeds of Mach 5 to Mach 25, evade radar detection and manoeuvre in flight. Major powers are planning to enhance hypersonic missiles with AI to locate and instantly destroy moving targets, shifting the kill decision from humans to machines.