Meaningless Human Control: Learning from Israel’s use of military AI

Dr Jeremy Moses1

1University of Canterbury, Christchurch, New Zealand

Biography:

Jeremy Moses is Associate Professor in the Department of Political Science and International Relations at the University of Canterbury, Christchurch, New Zealand. His research interests are in the ethics of war and intervention, with a particular focus on realism, pacifism, humanitarianism, and military technology. His publications include the book Sovereignty and Responsibility and articles in journals such as Review of International Studies, International Politics, Cooperation and Conflict, Critical Studies on Security, Journal of Intervention and Statebuilding, and Digital War.

Abstract:

A number of articles on the use of AI-powered military technologies have been published throughout Israel's genocidal assault on Gaza. These have included investigative pieces on the targeting systems known as 'the Gospel' and 'Lavender', as well as reports on the use of armed quadcopter drones and robotic quadrupeds that may have some autonomous or semi-autonomous features. This paper gives an overview of the claims that have been made about these technologies before considering the implications for campaigners seeking to regulate or ban them. In particular, I argue that this case shows the concept of 'meaningful human control', which has been central to the Campaign to Stop Killer Robots and its affiliated organisations, to be meaningless as a standard for the control of military AI technologies. Campaigners will need to reckon with this case in an honest and forthright way if they are to develop more viable and powerful concepts to underpin the argument for the legal regulation of military AI.