Professor Toni Erskine
The Australian National University, Canberra, Australia
Biography:
Toni Erskine is Professor of International Politics in the Coral Bell School of Asia Pacific Affairs at the Australian National University (ANU) and Associate Fellow of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge. She is Chief Investigator of the ‘Anticipating the Future of War: AI, Automated Systems, and Resort-to-Force Decision Making’ Research Project, funded by the Australian Government through a Defence grant. She has also served as Academic Lead for the United Nations Economic and Social Commission for Asia and the Pacific (UN ESCAP)/Association of Pacific Rim Universities (APRU) ‘AI for the Social Good’ Research Project (2021-24), working closely in this capacity with government departments in Thailand and Bangladesh. She has served as Director of the Coral Bell School at the ANU (2018-23) and as Editor of International Theory: A Journal of International Politics, Law & Philosophy (2019-23). Her research interests include the impact of new technologies (particularly AI) on organised violence; the moral agency and responsibility of formal organisations in world politics; the ethics of war; the responsibility to protect vulnerable populations from mass atrocity crimes; cosmopolitan theories and their critics; and the role of joint purposive action and informal coalitions in response to global crises and existential threats. She is the recipient of the International Studies Association’s 2024-25 Distinguished Scholar Award in International Ethics.
Abstract:
Artificial intelligence (AI), the evolving capability of machines to imitate aspects of intelligent human behaviour, has the potential to radically transform and disrupt global politics. AI-enabled entities – from algorithmic systems to automated robots – can already steer our preferences, predict the future, aid decision making, create images, converse, and kill. As such, they can variously be used to influence elections, intervene in the politics of other states, advise our leaders, and both contemplate and conduct war. Yet this paper will argue that a profound source of disruption and harm in global politics accompanying the advent of these intelligent machines is not inherent in the technology itself. Rather, these AI-enabled entities are potential agents of global disorder because of our misperception – and sometimes wilful misrepresentation – of their capacities, relative autonomy, and status when we employ them to perform these functions. Specifically, we tend to imbue these systems with characteristics that they cannot possibly possess and (often conveniently) attribute responsibility to them that is, in fact, borne elsewhere. As a result, we expect less of the individuals and states that employ them – and thereby collectively diminish ourselves. This paper will maintain that this particular risk can be countered by understanding the nature and limits of this new form of synthetic agency – an endeavour that the discipline of International Relations (IR) has hitherto neglected but is particularly well-placed to pursue.