Theoretical physicist Stephen Hawking, billionaire entrepreneur Elon Musk, and over 1,000 artificial intelligence (AI) researchers recently signed an open letter urging the United Nations and world governments to ban AI-based weapons before things go awry.
In their letter, the experts wrote that autonomous weapons, which can “select and engage targets” without help from a human operator, could become the “Kalashnikovs of tomorrow” if countries don’t take the necessary steps to prevent a “global AI arms race.”
“The key question for humanity today is whether to start a global AI arms race or to prevent it from starting,” the researchers wrote in the letter.
They also believe that if one superpower begins to invest in robotic warfare, other countries will likely follow, and an international arms race may soon become “inevitable.”
Both Hawking and Musk have previously warned humanity against fully developed artificial intelligence. Prof. Hawking even said that full-fledged AI technology has the potential to overtake humanity within a century.
Yet now the duo is concerned that AI may be used to enable mass violence on an unprecedented scale. Prof. Stuart Russell, an AI researcher at the University of California, Berkeley, said that a global arms race could lead to faster, smarter, and more lethal weapons of mass destruction that terrorists and dictatorial governments could deploy against civilians. This could result in countless casualties at very little financial cost to the attackers, Prof. Russell added.
The AI experts argued in the letter that an AI arms race would be extremely hard to control because such weapons are cost-effective and, unlike nuclear weapons, require no rare materials. Additionally, some human rights activists who signed the letter said that it is unethical to let a machine make life-and-death decisions on the battlefield.
Bonnie Docherty of Human Rights Watch recently noted that a gunman can be held responsible for his actions because the weapon or drone is just a tool for killing, but a robot cannot be held responsible for killing a human.
Three years ago, Human Rights Watch experts argued that autonomous weapons cannot reliably distinguish soldiers from civilians in the heat of battle, or accurately assess whether an attack that harms civilians would provide a proportionate military advantage.
Last year, a report issued by the human rights watchdog Reprieve found that U.S. drone strikes killed 1,147 civilians while attempting to eliminate 41 terrorists. It is therefore hard to see how a killer robot would be more moral when making such decisions. Docherty argued that moral decisions are best made by people, not machines.