Learning Weapons with Self-Generating Code
Heather Roff (Senior Research Fellow in the Department of Politics and International Relations at the University of Oxford) gives an informative discussion of the ethical issues raised by “learning weapons with self-generating code” (LWSGC) in her blog post, “Escape of the Gaak: New technologies and the ethics of war.” Roff seems to see two layers to the question concerning the ethics of LWSGC. One concerns the automation of warfare; the other concerns the possible autonomy of the automaton.
As Roff points out in her blog, the current automation of warfare has already ceded relative autonomy to many weapon systems, even when human beings remain in the decision-making loop. Good examples include the missile systems Roff mentions in her article, or naval helicopters like the Cyclone that can function only through the automated visioning made possible by TACCO/SENSO display networks. Pilots read the symbols on the displays, but what they read there forms a tiny slice of the total video input. The majority of the transferred data has to be processed automatically so as not to overwhelm the pilots overseeing the helicopter’s operations.
It is unclear to me whether the advent of LWSGC presents a genuinely new generation of ethical issues (as Roff seems to hold) or whether the ethics of LWSGC simply extend the kinds of issues already raised by the automation of war-making violence in general.