‘Borgs in the Org’ and the Decision to Wage War: The Impact of AI on Institutional Learning and the Exercise of Restraint

Prof. Toni Erskine1

1Australian National University, Australia

Biography:

Toni Erskine is Professor of International Politics in the Coral Bell School of Asia Pacific Affairs at the Australian National University (ANU) and Associate Fellow of the Leverhulme Centre for the Future of Intelligence at Cambridge University. She is Chief Investigator of the ‘Anticipating the Future of War: AI, Automated Systems, and Resort-to-Force Decision Making’ Research Project, funded by the Australian Government through a grant from Defence. She has also served as Academic Lead for the United Nations Economic and Social Commission for Asia and the Pacific (UN ESCAP)/Association of Pacific Rim Universities (APRU) ‘AI for the Social Good’ Research Project (2021-2024), and in this capacity worked closely with government departments in Thailand and Bangladesh. She recently served as Director of the Coral Bell School at the ANU (2018-23) and as Editor of International Theory: A Journal of International Politics, Law & Philosophy (2019-23). Her research interests include the impact of new technologies (particularly AI) on organised violence; the moral agency and responsibility of formal organisations in world politics; the ethics of war; the responsibility to protect vulnerable populations from mass atrocity crimes; cosmopolitan theories and their critics; and the role of joint purposive action and informal coalitions in response to global crises and existential threats. She is the recipient of the International Studies Association’s 2024-25 Distinguished Scholar Award in International Ethics.

Abstract:

Artificial intelligence (AI) will increasingly infiltrate what is arguably the most consequential decision that we can collectively make: the decision to wage war. While ample attention has been paid to the emergence and evolution of AI-enabled systems used in the conduct of war – including lethal autonomous weapons systems under the confronting banner of ‘killer robots’ – the prospect of AI driving this necessarily prior stage of war-making, the crucial determination of whether and when to engage in organised violence, has received less attention. Following recent studies that have begun to redress this relative neglect by examining particular risks and opportunities that would accompany the infiltration of AI into resort-to-force decision making, this paper explores and evaluates another, hitherto overlooked, potential consequence of this anticipated development: namely, how the use of such AI-enabled systems is likely to alter the very structures, cultures, and capacities of those collective bodies charged with exercising forbearance in the resort to war – and what impact this transformation could have on their propensity to do so.