Artificial intelligence could bring about the end of the world by as soon as 2040 by undermining nuclear deterrence, a new RAND Corporation paper has suggested.
"While AI-controlled doomsday machines are considered unlikely, the hazards of artificial intelligence for nuclear security lie instead in its potential to encourage humans to take potentially apocalyptic risks," the RAND study says.
It adds that in the coming decades, artificial intelligence has the potential to "erode the condition of mutual assured destruction and undermine strategic stability" thanks to improved sensor technologies that could introduce the possibility that retaliatory forces, such as submarine and mobile missiles, could be targeted and destroyed.
"The connection between nuclear war and artificial intelligence is not new, in fact the two have an intertwined history," said the paper's co-author and associate policy researcher at the RAND Corporation, Edward Geist. "Much of the early development of AI was done in support of military efforts or with military objectives in mind."
The RAND Corporation believes AI could also enhance strategic stability by improving the accuracy of intelligence collection and analysis.
"While AI might increase the vulnerability of second-strike forces, improved analytics for monitoring and interpreting adversary actions could reduce miscalculation or misinterpretation that could lead to unintended escalation," it adds.
The researchers also suggest that, given coming improvements, future AI systems could develop capabilities that make them less error-prone than their human counterparts, and therefore more stabilising in the long term.
"Some experts fear that an increased reliance on artificial intelligence can lead to new types of catastrophic mistakes," added RAND associate engineer, Andrew Lohn.
"There may be pressure to use AI before it is technologically mature, or it may be susceptible to adversarial subversion. Therefore, maintaining strategic stability in coming decades may prove extremely difficult and all nuclear powers must participate in the cultivation of institutions to help limit nuclear risk."
The RAND researchers based their perspective on information collected during a series of workshops with experts in nuclear issues, government branches, AI research, AI policy and national security.
It is part of a broader effort to envision the security challenges of the world in 2040, considering the political, technological, social and demographic trends that will shape those challenges in just over 20 years' time.