Who makes the decision on the battlefield – a human or an algorithm?

02.02.2026

Herwin Meerveld (with the microphone) argued that it is not AI that drives the pace of warfare higher; rather, an already fast and unpredictable environment pushes those operating within it to seek technological support. Photo: EstMil.tech

One of the most substantial discussions at the defence conference EstMil.tech, co-organised by Tallinn University of Technology, was a panel in which defence experts from different countries and backgrounds examined the military role and potential of artificial intelligence. The question was simple yet sharp: does AI change decision-making on the battlefield – and if so, to what extent?

The panel featured Peter Bovet Emanuel, a military lecturer at the Swedish Defence University; Tanel Tammet, professor at the Department of Software Science at Tallinn University of Technology and an AI expert; Lieutenant Colonel Herwin Meerveld from the Netherlands Ministry of Defence, Coordinator of the Data Science Centre of Excellence; and Major Janar Pekarev of the Estonian Defence Forces, Deputy Head for Innovation at the Command of Future Capabilities and Innovation.

When pace begins to drive the decision

The exchange began with a straightforward question: does artificial intelligence primarily change the pace of military decisions or their substance? That quickly led to a follow-up: what does an accelerating decision cycle mean in practice – and what happens when pace itself starts to dictate decision-making?

According to Peter Bovet Emanuel, the impact of AI is already evident. Decisions are made faster, particularly at the tactical and operational levels. Yet speed is not a value in itself. If tempo is not consciously managed, decision-making can devolve into constant reaction.

“Decision-making tempo is accelerating, and at some point within that spiral of decisions we must deliberately slow it down – otherwise there will be no room for anything other than ever-faster decisions,” Emanuel noted. This implies that some decision-making authority will inevitably shift to machines. The key question is how to shift that authority without pushing humans to the margins of the process.

Herwin Meerveld argued that it is not AI that drives the pace of warfare higher; rather, an already fast and unpredictable environment pushes those operating within it to seek technological support. In the dynamics of current conflicts, humans may no longer be able to keep up with every stage of the decision process. For Meerveld, the adoption of AI is therefore unavoidable – though the human, the algorithm and their interaction must be understood as a single integrated whole.

Whom should we trust – the human or the machine?

To what extent, then, should a commander trust artificial intelligence? In Tanel Tammet’s view, AI should be treated much like people – it cannot be trusted blindly.

The problem arises when a human remains formally part of the decision chain but no longer has the time or ability to intervene. “In such a situation, there are two options – either you retain continuous control yourself, or you largely let go of that control. Often, the AI itself signals that it cannot cope and asks you to take over,” Tammet explained, comparing the situation to an automatic parking assistant. In other words, a system must not only provide recommendations; it must also be capable of honestly indicating when it has reached its limits.
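
Tammet's parking-assistant analogy can be made concrete with a small sketch. The snippet below is purely illustrative – a hypothetical decision-support loop in which the system acts only above a self-assessed confidence threshold and otherwise hands control back to the operator. The threshold value, field names and actions are assumptions for illustration, not anything described on the panel.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    confidence: float  # the system's self-assessed confidence, 0.0..1.0

HANDOFF_THRESHOLD = 0.85  # illustrative value only, not from the panel

def decide(rec: Recommendation) -> str:
    """Act autonomously above the threshold; otherwise escalate to the human."""
    if rec.confidence >= HANDOFF_THRESHOLD:
        return f"system executes: {rec.action}"
    # Below the threshold the system must say so honestly and yield control.
    return f"handoff to operator (confidence {rec.confidence:.2f}): review '{rec.action}'"

print(decide(Recommendation("adjust patrol route", 0.93)))
print(decide(Recommendation("engage target", 0.41)))
```

The essential design choice is the second branch: the system does not merely fall silent at its limits, it explicitly reports them and returns authority to the human.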

According to Herwin Meerveld, AI is primarily intended to respond to the accelerated tempo of warfare, where humans cannot always keep pace with every stage of the decision process. This makes it essential to develop AI systems that demonstrate both effectiveness and responsibility.

The key issue is not to distrust AI outright, but to understand its capabilities and limitations – and to treat it as part of an integrated whole in which humans and technology must function together.

“Decision-making tempo is accelerating, and at some point within that spiral of decisions we must deliberately slow it down – otherwise there will be no room for anything other than ever-faster decisions.”

To what extent, then, should a commander trust artificial intelligence? In Tanel Tammet’s view, AI should be treated much like people – it cannot be trusted blindly. The problem arises when a human remains formally part of the decision chain but no longer has the time or opportunity to intervene. Photo: ChatGPT

Can artificial intelligence decide over life and death?

The discussion intensified when Major Janar Pekarev raised the issue of delegating the use of force to machines. In his view, killing in itself is primitive and technically simple – what is difficult is not destroying a target, but deciding when and against whom force should be used.

He asked his fellow panellists directly: what does it actually mean to delegate the use of force to machines? Are we prepared to grant systems the authority to make targeting and fire decisions? And what is the worst-case scenario that comes to mind when taking such a step?

Herwin Meerveld cautioned against reducing the debate to the question of who pulls the trigger. Military decision-making is not, at its core, merely about killing people. Warfare, he noted, encompasses far more – logistics, defence planning, manoeuvre and escalation management. If the focus is placed solely on lethal force, a large part of the real decision space remains overlooked.

Bovet Emanuel added that experience from the past two decades – particularly in counterterrorism operations – has narrowed the understanding of targeting too much and made it overly person-centric. In public debate, targets are primarily associated with individuals, although in practice there are five categories of targets, only one of which directly concerns people. The others include command nodes, infrastructure, weapons systems and logistical objects.

According to Emanuel, artificial intelligence may in some cases even prove more impartial than a human: “AI can in certain situations be more neutral – it does not seek revenge, nor is it swayed by emotion.” At the same time, he stressed that everything depends on context. In an existential war, risk, proportionality and responsibility are inevitably assessed in a different light than in a limited conflict.

“AI can in certain situations be more neutral – it does not seek revenge, nor is it swayed by emotion.”

The discussion intensified when Janar Pekarev (left) raised the issue of delegating the use of force to machines. In his view, killing in itself is primitive and technically simple – what is difficult is not destroying a target, but deciding when and against whom force should be used. Photo: EstMil.tech

Quality metrics

When artificial intelligence is used in decision-making, an inevitable question arises: by what criteria should the quality of such decisions be assessed? According to Herwin Meerveld, the first step is to agree on what constitutes a “good decision” in a military context. Even without AI, that is a complex issue. Only once there is a shared definition can one meaningfully discuss technical criteria – such as pre-deployment testing in realistic environments, validation procedures and verification of system reliability.

Tanel Tammet offered a more cautious assessment, noting that AI is still used relatively sparingly in the military domain. As a result, it remains difficult to systematically compare the quality of AI-supported decisions with other forms of decision-making.

By contrast, Bovet Emanuel cited a joint Swedish–Norwegian exercise in which an AI-based planning tool proved more efficient than its human users. In his view, quality cannot be reduced to a simple claim of superiority: “The question is not only whether the decision is better – but also whether it is faster, more precise, whether it reduces cognitive load or improves reliability.” He argued that each of these metrics must be clearly defined before a system is developed in the first place.
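
Emanuel's point – define each metric before the system is built – can be sketched as an explicit, per-metric comparison rather than a single verdict. The snippet below is a hypothetical illustration: the metric names, the workload measure and all numbers are assumptions, not figures from the exercise he cited.

```python
from dataclasses import dataclass

@dataclass
class DecisionMetrics:
    latency_s: float      # time from input to decision, in seconds
    precision: float      # fraction of decisions judged correct in testing
    operator_load: float  # workload score (e.g. NASA-TLX style); lower is better
    reliability: float    # fraction of runs completed without system failure

def compare(ai: DecisionMetrics, baseline: DecisionMetrics) -> dict[str, bool]:
    """Report improvement per metric instead of one 'better/worse' verdict."""
    return {
        "faster": ai.latency_s < baseline.latency_s,
        "more_precise": ai.precision > baseline.precision,
        "lower_cognitive_load": ai.operator_load < baseline.operator_load,
        "more_reliable": ai.reliability > baseline.reliability,
    }

# Illustrative numbers only -- not results from the Swedish-Norwegian exercise.
print(compare(DecisionMetrics(12.0, 0.91, 34.0, 0.99),
              DecisionMetrics(45.0, 0.88, 58.0, 0.97)))
```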

Alliance in the age of artificial intelligence

Towards the end of the discussion, the focus shifted to NATO: is the alliance capable of integrating artificial intelligence into its decision-making processes when member states operate under different doctrines, datasets and legal frameworks? Pekarev highlighted the issue of interoperability – models trained in different environments may not function seamlessly together. This is not merely a technical detail.

According to Meerveld, the main obstacle is not technology but political will. In the near term, AI should be implemented where data exchange mechanisms are already in place. In the longer term, value may lie in solutions that allow countries to develop shared systems without having to exchange sensitive data.
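
Meerveld did not name a specific technique, but one established approach that matches his description is federated learning, in which each participant trains a shared model on its own data and exchanges only model parameters, never the data itself. Below is a minimal federated-averaging sketch under that assumption; a real deployment would add secure aggregation, authentication and much more.

```python
import numpy as np

def local_update(weights: np.ndarray, X: np.ndarray, y: np.ndarray,
                 lr: float = 0.1) -> np.ndarray:
    """One gradient step of linear regression on a participant's private data."""
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

rng = np.random.default_rng(0)
global_weights = np.zeros(3)
# Four participants, each holding data that never leaves its own systems.
nations = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(4)]

for _ in range(10):  # federation rounds
    # Each participant improves a copy of the shared model locally...
    local = [local_update(global_weights.copy(), X, y) for X, y in nations]
    # ...and only the weight vectors are pooled; raw data is never exchanged.
    global_weights = np.mean(local, axis=0)

print(global_weights)
```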

When asked whether AI could also accelerate NATO’s political decision-making – for instance, the invocation of Article 5 – the answer was unequivocal. “Article 5 is a political decision and should not be conflated with military AI,” Meerveld stressed.

Overall, the discussion underscored that artificial intelligence is no miracle weapon capable of replacing humans in military decision-making. It is a tool that sharply exposes existing tensions: between speed and responsibility, efficiency and legitimacy, technology and politics.

The question is not whether AI is coming – it is already here – but how to ensure that humans remain at the centre of decision-making in the future.

“Article 5 is a political decision and should not be conflated with military AI.”