Investigating the Human Role in Warfare: Beyond ‘Control’ and Towards ‘Agency’?

The role of humans in warfare, especially in the use of force, has been one of the key concerns in the academic, policy, and regulatory debates on the integration of artificial intelligence (AI) into the military domain. This includes ongoing discussions on autonomous weapon systems (AWS) as well as those on the Responsible AI framework.

Actors have been using multiple terms when referring to the importance of the human role. For instance, the UK Ministry of Defence writes about “context-appropriate human involvement in weapons which identify, select and attack targets”. Meanwhile, the US Department of Defense Directive 3000.09 on autonomy in weapon systems mentions “appropriate levels of human judgment over the use of force”. More recently, the Paris Declaration on Maintaining Human Control in AI-enabled Weapon Systems, a non-binding document adopted at the AI Action Summit in February 2025, states that the signatories “will implement appropriate safeguards relating to human judgement and control over the use of force”.

The most frequently used term appears to be ‘human control’, often qualified with the adjective ‘meaningful’. The concept of meaningful human control (MHC) has inspired many fruitful interdisciplinary debates and publications, including beyond the military domain. At the same time, the notion of MHC is disputed and imagined differently by various groups of actors, and has attracted criticism for its limitations, particularly when it comes to defining what counts as ‘meaningful’.

This blog post will briefly examine two of these limitations: first, the focus on the use-of-force stage; and second, the assumption of a hierarchical relationship between the human and the AI system. It will then discuss the importance of investigating how the exercise of human agency is affected by human-machine interactions, which is one of the key objectives of the HuMach project.

Limitations of Meaningful Human Control

First, discussions on MHC in autonomous weapons tend to focus on the stage of using force, or the ‘pulling of the trigger’, as this is the critical function of AWS that would not involve direct human intervention after activation. As studies on existing weapon systems such as air defence systems and loitering munitions demonstrate, the mere presence of a human ‘in’ or ‘on’ the loop may not be a sufficient safeguard, as the quality of the human role in some use-of-force contexts is questionable. For instance, human operators may lack situational understanding or a functional understanding of the technologies involved. Some describe this as “nominal” or “meaningless” human input because, in practice, it may lack the critical deliberation needed to have an impact on the targeting process.

An overwhelming focus on the operational and tactical stages—where strategic objectives are translated into targeting-related tasks and actions—is limiting, given that there are various challenges related to human-machine interaction happening at other points of the targeting process. Developments related to AI-based decision-support systems (AI DSS) in warfare illustrate many of these challenges.

While the use of AI DSS officially keeps humans involved in targeting decision-making, there are questions as to how the interaction between humans and networks of potentially multiple AI DSS affects the role of humans in warfare. AI DSS are typically employed in tasks such as processing intelligence or analysing data, tasks that inform operational and tactical planning directly or indirectly but take place at other stages of the targeting process. Cognitive biases, trust-related issues, and broader societal contexts including targeting doctrines and rules of engagement, among other aspects, need to be considered throughout the targeting cycle.

Moreover, such issues are relevant across the lifecycle of AI systems, calling for a lifecycle perspective that examines the dynamics of human-machine interaction beyond the stages of operational-level command and control and tactical employment. As Lena Trabucco writes, many people are involved in the research, design, development, testing, evaluation, operation, and review of AI and autonomous systems in the military, and a lifecycle perspective therefore “will better capture the various roles involved in that process”.

Second, MHC often appears to presuppose a hierarchical, unidirectional relationship in which humans have control over technologies and this control must be maintained through various measures. This appears to be based on what Berenice Boutin and Taylor Woodcock call a sort of “binary dichotomy between the autonomy of a system and the control of the human operator, failing to recognize that human-machine relationships and military processes are complex, distributed, intermediated and multidimensional” (p. 187).

Given that human-machine interactions involve complex dynamics, it is important to consider this relationship as a socio-technical system in which agency is distributed between humans and machines, rather than a hierarchy in which one side dominates the other. As Merel Ekelhof points out, a more appropriate lens would be “one that recognizes the distributed nature of control in military decision-making” (emphasis added).

Towards Human Agency

The term ‘agency’ allows us to adopt a more comprehensive perspective on human-machine interaction in the military domain, for example by bringing into view various levels of decision-making: both the micro/individual level (human users, operators, commanders) and the macro/organisational level, where important changes in relation to agency might also be taking place.

As Ingvild Bode writes, “The concept of human agency encompasses a broader and more thorough understanding of what decision making and the ability to act entails, rather than solely focusing on the degree of — seemingly unilateral — control exerted by individual humans”. Human agency also aligns with current trends in AI development that focus on so-called ‘teaming’ between humans and AI systems, where the two work together rather than one controlling the other.

Focusing on the concept of human agency allows us to examine comprehensively how uses of AI systems alter certain human activities and, in turn, affect the exercise of human judgement and deliberation in warfare. As Peter Layton highlights, human-machine interaction in warfare “involves a shared cognition or at least a process of distributed, collective thinking” between human and machine agents, which also means “collective thinking about military problems by two entities whose cognition is fundamentally different”.

The potential offloading of some cognitive tasks to machines, or the sharing of such tasks with them, involves multi-faceted (legal, ethical, strategic, security, and other) implications that deserve further debate. It is therefore crucial to investigate these dynamics of distributed agency and the potential governance demands associated with them—which is one of the core objectives of the HuMach project.

Featured image credit: Elise Racine & The Bigger Picture / Better Images of AI / Web of Influence I / CC-BY 4.0
