Over 50 top artificial intelligence researchers on Wednesday announced a boycott of KAIST, South Korea’s top university, after it opened what they called an AI weapons lab with one of South Korea’s largest companies.
The researchers, based in 30 countries, said they would refrain from visiting KAIST, hosting visitors from the university, or cooperating with its research programmes until it pledged to refrain from developing AI weapons without “meaningful human control”.
KAIST opened the new centre in February with Hanwha Systems, one of two South Korean makers of cluster munitions, saying the pair would focus on using AI for command and control systems, navigation for large unmanned undersea vehicles, smart aircraft training, and the tracking and recognition of objects.
No comment was immediately available from the university or the company outside of regular business hours about the boycott or the work of their Research Centre for the Convergence of National Defence and Artificial Intelligence.
However, in a statement quoted by Britain’s Times Higher Education magazine website on Wednesday, KAIST president Sung-Chul Shin said he was “saddened” by the threatened boycott and denied the institution had any intention to work on lethal autonomous weapons systems.
“The centre aims to develop algorithms on efficient logistical systems, unmanned navigation (and an) aviation training system,” he was quoted as saying. “KAIST will be responsible for educating the researchers and providing consultation.”
Hanwha says on its website: “The goal is to accelerate research into the convergence of national defence and AI technology through this industry-university collaborative partnership.”
It was not immediately clear how the boycott would affect KAIST, or how many academic exchanges would be derailed.
The boycott was organised by Toby Walsh, a professor at the University of New South Wales in Sydney.
“If developed, autonomous weapons will … permit war to be fought faster and at a scale greater than ever before. They will have the potential to be weapons of terror,” the researchers said, citing effective bans on previous arms technologies.
“We urge KAIST to follow this path, and work instead on uses of AI to improve and not harm human lives,” they said.
AI is the field of computer science that aims to create machines able to perceive their environment and make decisions.
The letter, also signed by top experts on deep learning and robotics, was released ahead of next Monday’s meeting in Geneva by 123 UN member countries on the challenges posed by lethal autonomous weapons, which critics describe as “killer robots”.
Walsh told Reuters there were many potential good uses of robotics and artificial intelligence in the military, including removing humans from dangerous tasks such as clearing minefields.
“But we should not hand over the decision of who lives or dies to a machine. This crosses a clear moral line,” he said. “We should not let robots decide who lives and who dies.”