Last month, during a powerful speech to the United Nations Security Council, Australia’s Minister for Foreign Affairs, Penny Wong, voiced her concerns about the rapid advancement of artificial intelligence (AI). She highlighted that while AI has tremendous potential in areas like healthcare and education, its implications for warfare and nuclear arms could be perilous. “Nuclear warfare has so far been constrained by human judgement,” she stated, raising alarms about AI’s lack of moral accountability and the risks it poses to humanity’s future. This assertion raises critical questions: will AI genuinely alter the nature of warfare? And is it truly devoid of accountability?
Understanding AI’s Role in Modern Warfare
The concept of artificial intelligence isn’t new; the term dates back to the 1950s. Today, however, it encompasses a wide range of technologies, from large language models to computer vision and neural networks. While all these systems are broadly categorized under “AI,” each operates on distinct principles and methodologies.
AI is typically employed to analyze patterns in vast datasets and generate outputs in response to given inputs, such as text prompts. Among its many applications in warfare, AI technologies are used in wargaming simulations for training soldiers. More concerning, however, are decision-support systems like the Israel Defense Forces’ “Lavender” system, which purportedly helps identify potential combatants. Such technologies pose profound ethical dilemmas, especially since they sit at the critical juncture of life-and-death decisions.
The Accountability Dilemma
Discussions surrounding accountability in AI are prevalent, especially in military applications. The notion of an “accountability gap” has emerged, posing the question of who is responsible when AI systems malfunction or cause unintended harm. Interestingly, this conversation is rarely extended to other technologies that pose significant risks.
For instance, legacy weapons systems such as unguided missiles or landmines operate without direct human oversight during their most lethal phases, yet we seldom question the “culpability” of the weapon itself. Similarly, in Australia’s recent Robodebt controversy, blame for government mismanagement was placed squarely on human error rather than on the automated system involved. Why, then, do we single out AI as an object of blame? It is essential to remember that complex systems, including AI, are ultimately developed and managed by humans.
Human Oversight in AI Deployments
Every AI system, particularly those used in military contexts, is embedded within a broader hierarchy of human decision-making. Even though AI might appear to operate independently, it functions under human oversight. The apparent autonomy of AI systems is often misinterpreted as independence from human control. Understanding how these systems are designed, developed, and implemented is crucial to grasping why responsibility for their use remains inherently human.
Accountability in AI systems — whether civil or military — is a matter of human judgment and decision-making. Just as no weapon can be held liable for its actions, AI, as an entity, remains beyond the reach of accountability. Instead, it is the people responsible for creating, deploying, and utilizing these systems who deserve the scrutiny.
The Human Element in AI Warfare
All complex systems, including AI, follow a lifecycle that runs from initial conception to eventual retirement. Throughout this lifecycle, humans make countless critical choices, from planning and design to development, implementation, and operational management. These decisions cover not just technical specifications but also ethical considerations and regulatory compliance.
This structure establishes a clear chain of responsibility with numerous opportunities for intervention. When an AI system is deployed, its capabilities, along with its flaws and limitations, are the product of cumulative human decisions. AI weapon systems, particularly those involved in targeting, do not autonomously decide matters of life and death; it is the individuals who choose to employ these systems who bear ultimate responsibility.
Thus, when discussions around the regulation of AI weapon systems arise, we must recognize that we are essentially seeking to regulate human involvement throughout the lifecycle of these technologies. The notion that AI could fundamentally transform warfare often obscures the human agency and responsibility inherent in military decision-making.
The challenges presented by AI in warfare are significant, posing novel ethical and operational dilemmas. Yet, at their core, these challenges invariably circle back to the human element — the creators, operators, and policymakers who direct the deployment of these advanced technologies.