The line between civilian and military research has become more permeable in recent years. The increasing sophistication of weapons systems has prompted speculation about a "third revolution" in warfare, in which information technologies allow war to be fought at unprecedented speed and intensity. While the darkest imaginings of Hollywood films like "Terminator" remain a distant prospect, researchers must increasingly weigh their involvement in programs that raise serious moral and ethical questions.
Last week, for example, news that the Korea Advanced Institute of Science and Technology (KAIST) would open an artificial intelligence research center in cooperation with Hanwha Systems, a defense company, prompted 50 AI scientists to call for a boycott of work with KAIST over fears that the research would "accelerate the arms race to develop [autonomous] weapons." Such weapons, the signatories warned in an open letter, will "permit war to be fought faster and at a scale greater than ever before. ... They have the potential to be weapons of terror. ... This Pandora's box will be hard to close if it is opened."
The president of KAIST, Shin Sung-chul, responded by noting that the university was "significantly aware" of ethical concerns surrounding AI and had no intention to develop "lethal autonomous weapons systems or killer robots" or "conduct any research activities counter to human dignity including autonomous weapons lacking meaningful human control." "As an academic institution, we value human rights and ethical standards to a very high degree," he added. At the same time, however, KAIST will continue to cooperate with Hanwha's defense business unit.