
Should Big Tech Support National Defense?

  • Writer: Isaac Taylor
  • Mar 14
  • 6 min read

In his farewell address in 1961, US President Dwight D. Eisenhower warned about the dangers of the “unwarranted influence” of the military-industrial complex: the collaboration that was then emerging between private companies and the defense and security sectors. If we need a reminder of how central private companies have become to this partnership, we need only consider the recent decision by the US government to designate Anthropic a “supply chain risk” after the tech company refused to give the Department of Defense unrestricted access to its AI systems.


In a statement, Dario Amodei, CEO of Anthropic, set out his company’s red lines: use cases to which Anthropic was not prepared to contribute. One of these was autonomous weapons systems. While the idea of an autonomous weapons system has been characterized in different ways, the term roughly refers to a weapons system that can select and engage targets without direct human involvement.[i]


Depending on the definition we use, fully autonomous weapons systems may not exist yet, but AI has been incorporated into weapons in a number of ways. Israel utilized systems like “The Gospel” and “Lavender” in its operations in Gaza, and the US has incorporated AI tools developed under “Project Maven” to assist with target identification in Iran.


Private actors’ unease about supporting military objectives is not new. Albert Einstein’s research ultimately made the development of the atomic bomb possible, and he himself wrote to President Roosevelt urging the development of nuclear weapons. But after the US used those weapons on the Japanese cities of Hiroshima and Nagasaki, Einstein expressed regret about having written that letter.


Anthropic’s refusal to support autonomous weapons can be viewed as the latest expression of such qualms about the development of problematic weapons. But the episode also raises a deeper worry than whether AI can follow the rules of war. Even if autonomous weapons could behave ethically, their development by private companies risks shifting control over war away from democratic institutions and towards Silicon Valley.


Ethical War-Fighting

Amodei’s worry is primarily about the current state of the technology. He claims that ‘frontier AI systems are simply not reliable enough to power fully autonomous weapons’. These concerns are not only about the effectiveness of the weapons that could be created, but also about their ability to stay within the ethical boundaries that we think apply to fighters in armed conflicts.


In war, it is widely thought that targeting civilians is off-limits: we call those who do this “terrorists”. But could an autonomous weapons system reliably tell the difference between a soldier and a civilian? Maybe not. Another ethical limit is proportionality: the laws of war specify that the destruction caused by an act of war must not outweigh the military advantage gained. But how do we trade off the fuzzy notion of military advantage against the quantifiable deaths that a given operation causes? It is generally assumed that existing AI systems cannot engage in the sort of context-sensitive judgement required to apply this principle.[ii] One might wonder, of course, whether humans can reliably make such judgements either. But proponents of this argument would at least maintain that humans have the capacity to do so; machines, at least in their current iteration, lack the relevant capacities completely.


These worries, of course, do not impose an absolute barrier to the ethical use of autonomous weapons systems. All we may need to do is wait for better AI systems to come along. This seems to be Amodei’s view: he maintains that fully autonomous weapons systems ‘may prove critical for our national defense’. If we are one day able to build AI systems that can abide by the ethical rules of war, this objection dissipates.


Democracies and War

Suppose that Anthropic could build autonomous weapons systems that equaled or exceeded the ethical behavior of the average US soldier. Should they be used? More principled objections to the use of these systems have been raised, based on the apparent “responsibility gap” that they create. When a machine makes a decision about who will be killed in war, it is sometimes thought, someone should be accountable for that decision. But since, first, nobody can predict or control the behavior of autonomous weapons systems and, second, the systems themselves are not the sorts of agents that are subject to responsibility judgements, we are left with a situation where nobody can be held to account.[iii]


Practitioners and ethicists have responded to this worry by arguing that organizational or technological interventions can ensure that human responsibility is maintained. These interventions would allow humans to better predict and control the behavior of autonomous weapons systems, and thus become responsible for it. I will not take a stand here on whether these fixes can succeed.[iv] What I want to draw attention to is a consideration that is often ignored: it may matter who is responsible.


In democracies, political action is often thought to be done “in the name” of the people. Although laws are created by politicians and implemented by those working in the public sector, in well-functioning democratic societies they are not the imposition of a small group of individuals pursuing their private interests, but rather a collective enterprise.


The same might be true of war. When soldiers fight a war, they do so in our name. If their actions are to be legitimate, they must ultimately be traceable back to us: this is the central idea behind the principle of civilian control of the military. We have many institutions for ensuring that we maintain this control: military training, politically determined codes of conduct, and a chain of command. What happens, however, when privately produced AI systems take on the tasks previously assigned to soldiers?


It has long been recognized that technologies like AI are not value-neutral tools, but embody implicit moral judgements. When a company manufactures an AI system, it should be conscious of the values that are going to be realized in the system’s behavior. Values related to ethical war-fighting should, of course, be included. But if we want to maintain democratic control over war, the values of the wider political community should also be considered. When a private company develops autonomous weapons systems outside the standard democratic forums, we run the risk of replacing the will of the people with the private values of the company itself.


If companies like Anthropic eventually build these systems, they will not simply be supplying tools. They will be embedding their own value judgements into the infrastructure of war. Imagine, for instance, an autonomous targeting system built by a Silicon Valley company. Its designers must decide how cautious it should be when distinguishing civilians from combatants. Should the system strike when it is 60% confident? 80%? 95%? Answering this question is impossible without taking a stand on certain moral issues.


One might not be worried by this. So long as AI is not making the decision to launch a war (and, for the moment at least, that decision remains firmly in human hands), the decisions taken within a war regarding targeting and the like can appropriately be delegated to AI systems, or so it might be claimed. After all, these decisions are currently delegated to military personnel, who cannot be subject to strict oversight given the realities of modern war. Nonetheless, political control is sometimes exercised over these sorts of decisions. Consider, for example, the policy guidance issued under the Obama presidency, which permitted lethal force in counterterrorist operations only when there was “near certainty” that a terrorist target was present. What the increasing use of AI threatens, I suggest, is precisely this kind of political control. Undermining it would amount to a significant transfer of the authority to make life-and-death decisions.


The debate about autonomous weapons is often framed as a technical question: can AI be made to follow the rules of war? But the deeper question is political. If the systems that decide how wars are fought are designed inside private companies, then control over war may quietly drift away from democratic institutions and into corporate hands. The challenge is not just to build more ethical machines but to ensure that the authority to wage war remains where it belongs: with political institutions and, ultimately, the public in whose name it is fought.


Isaac Taylor is an associate professor of practical philosophy at Stockholm University. His research focuses on ethical issues surrounding the use of military force and the adoption of AI techniques. He is the author of The Ethics of Counterterrorism (Routledge, 2018).


Notes:

[i] In a much-cited definition, for example, the US Department of Defense defines an autonomous weapon system as a ‘weapon system that, once activated, can select and engage targets without further intervention by an operator’. US Department of Defense, ‘Autonomy in Weapon Systems’, Directive 3000.09, 2012, pp.13–14.

[ii] These worries are most clearly articulated in Noel Sharkey, ‘The Evitability of Robot Warfare,’ International Review of the Red Cross 94(886), 2012, pp.787–799.

[iii] The classic statement of this worry is found in Robert Sparrow, ‘Killer Robots,’ Journal of Applied Philosophy 24(1), 2007, pp.62–77.

[iv] For an assessment of the prospects, see Isaac Taylor, ‘Is Explainable AI Responsible AI?’ AI & Society 40(3), 2025, pp.1695–1704; Isaac Taylor, ‘Collective Responsibility and Artificial Intelligence,’ Philosophy & Technology 37: 27, 2024, pp.1–18.


Disclaimer: Any views or opinions expressed on The Public Ethics Blog are solely those of the post author(s) and not The Stockholm Centre for the Ethics of War and Peace, Stockholm University, the Wallenberg Foundation, or the staff of those organisations.
