Killer Robots and War Crimes: Who Goes to Jail When AI Makes the Kill Decision?

By Alexis Anagnostakis

Executive Summary

Autonomous weapons systems (AWS) are here. Military drones that select and kill targets without human intervention are being deployed in conflicts worldwide—but international criminal law has no clear answer for who gets prosecuted when these AI systems commit war crimes. This article examines the growing accountability crisis as “killer robots” create a legal black hole where nobody can be held responsible for unlawful killings. We analyze why the International Criminal Court’s current framework fails, explore real-world cases from Ukraine to Libya, and propose concrete legal solutions before this responsibility gap becomes irreversible.

Keywords: autonomous weapons, killer robots, AI war crimes, ICC prosecution, command responsibility, algorithmic warfare, military AI, international criminal law


I. The Accountability Crisis: When AI Kills, Who’s Guilty?

Picture this: It’s 2024, somewhere in a conflict zone. A military drone is circling overhead. Inside its digital brain, algorithms are processing thousands of data points per second—heat signatures, movement patterns, facial recognition data.

Then it makes a decision. Target acquired. Threat level: high. Collateral damage: acceptable.

The drone fires.

But here’s the problem: the AI got it wrong. Those weren’t enemy combatants. They were civilians fleeing the fighting. Children. Families. All dead.

Under any reasonable interpretation of international law, this is a war crime. But here’s the question that should keep us all up at night: Who goes to prison?

The commander who deployed the drone? He followed all the rules. The system passed every test.

The programmer who wrote the code? She wrote it years ago for legitimate defense purposes. She had no idea it would make this specific mistake.

The defense contractor who built it? They sold a product that met military specifications and got government approval.

The AI itself? You can’t exactly put an algorithm in handcuffs.

Welcome to the accountability crisis of the 21st century.

This Isn’t Science Fiction Anymore

If you think this scenario sounds far-fetched, think again. It’s already happening:

Libya, 2020: A UN Panel of Experts report (published in 2021) documents what may be the first autonomous attack on human targets by an AI-driven weapon. Turkey’s Kargu-2 drone—described as a “lethal autonomous weapons system”—reportedly hunted down and engaged retreating fighters without requiring any connection to an operator. It just… decided.

Right now, today: Over 30 countries have deployed some form of autonomous weapons. Israel’s Iron Dome makes split-second decisions about incoming threats. Russia’s Lancet drones are being used in Ukraine with increasing autonomy. China is racing to develop AI-powered military systems that can make decisions faster than any human possibly could.

The technology is advancing at breakneck speed. The law? It’s still stuck in the 20th century, trying to apply rules written for human soldiers to machines that learn, adapt, and kill on their own.

Why This Should Terrify You

Here’s what makes autonomous weapons different from every other military technology that came before:

Traditional warfare: A soldier pulls a trigger. A commander gives an order. A pilot drops a bomb. There’s a clear human decision-maker. Someone we can hold accountable.

Algorithmic warfare: An AI processes data according to programming and patterns it learned from training. It makes decisions in milliseconds based on logic even its creators can’t fully explain (what AI researchers call the “black box problem”).

When things go wrong—and they will—international criminal law has no idea what to do about it.

Judge Antonio Cassese, one of the founding fathers of modern international criminal law, said it best: international criminal law exists “not to punish States but individuals.”

But what happens when there’s no individual you can clearly blame?

What This Article Will Show You

I’m going to walk you through five critical issues:

  1. What autonomous weapons actually are (beyond the Hollywood hype)
  2. Why every legal framework we have completely fails to hold anyone accountable
  3. What international institutions are doing (spoiler: mostly talking)
  4. Real solutions that could actually work (if we act fast enough)
  5. Why this matters for everyone (not just lawyers and generals)

Here’s the brutal truth: Without immediate legal innovation, we’re heading toward a world where war crimes committed by machines go completely unpunished.

Seventy-five years ago, we built an international legal system to ensure “never again” after the Holocaust. We said individuals would be held accountable for atrocities. No more hiding behind “just following orders.”

Now we’re on the verge of creating a new loophole: “the AI did it.”

And if we let that happen, the entire architecture of international criminal accountability—everything we’ve built since Nuremberg—collapses.

Let me show you why.


II. Let’s Talk About What These “Killer Robots” Actually Are

Before we dive into the legal nightmare, we need to understand what we’re actually dealing with. And let me be clear: when I say “autonomous weapons,” I’m not talking about Terminator. I’m talking about technology that exists right now, being used in real conflicts, making real life-and-death decisions.

The Three Levels of “Autonomy”

Think of it like a spectrum from mindless machine to independent decision-maker:

Level 1: Automated Systems (The Autopilot)

These are basically fancy automatic guns. They follow rigid, pre-programmed rules with zero flexibility.

Example: The U.S. Navy’s Phalanx system automatically shoots down incoming missiles. But it’s not thinking—it’s just executing: “IF missile detected THEN shoot.” Like cruise control for your car, but deadlier.
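
To see how thin this kind of “autonomy” really is, here’s a minimal sketch in Python. Everything in it (the sensor fields, the speed threshold) is invented for illustration; it is not how any real air-defense system is coded. The point is simply that every decision traces back to a rule a human wrote:

```python
# Illustrative sketch only: a rule-based "Level 1" engagement decision.
# The fields and threshold below are hypothetical, not from any real system.

from dataclasses import dataclass

@dataclass
class RadarTrack:
    speed_mps: float         # measured speed of the contact (meters per second)
    closing: bool            # is it heading toward the defended asset?
    identified_friend: bool  # did it answer the IFF challenge?

def should_engage(track: RadarTrack) -> bool:
    """Fixed, human-written rule: IF fast, closing, and not a friend THEN shoot."""
    return track.closing and track.speed_mps > 300 and not track.identified_friend

incoming = RadarTrack(speed_mps=680.0, closing=True, identified_friend=False)
print(should_engage(incoming))  # True: the rule fires, exactly as written
```

Every behavior of a system like this can be read straight off the code its designers wrote. Hold onto that thought, because it is exactly what stops being true at Level 3.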

Level 2: Semi-Autonomous (The Co-Pilot)

These systems can identify targets and suggest actions, but humans are supposed to give final approval.

Example: Israel’s Iron Dome. It spots incoming rockets, calculates interception paths, and recommends responses. Technically, a human operator approves each engagement.

Here’s the problem: when you’ve got seconds (or fractions of seconds) to decide, “human oversight” becomes “human rubber-stamping.” The computer says shoot, and you better trust it because you don’t have time to second-guess.
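
Here’s a hedged sketch of what that “final approval” step can look like in software. The window length, the default on timeout, and the function names are all hypothetical design choices, not a description of any fielded system; they are just here to show where the rubber-stamping problem lives:

```python
# Illustrative sketch only: a human-approval gate with a time budget.
# The window length and the timeout behavior are hypothetical design choices.

import time

APPROVAL_WINDOW_S = 2.0  # seconds the operator gets before the system acts anyway

def request_human_approval(recommendation: str, get_operator_input) -> bool:
    """Ask a human to approve the machine's recommendation within a fixed window."""
    print(f"System recommends: {recommendation}")
    deadline = time.monotonic() + APPROVAL_WINDOW_S
    while time.monotonic() < deadline:
        decision = get_operator_input()  # returns "approve", "reject", or None
        if decision == "approve":
            return True
        if decision == "reject":
            return False
        time.sleep(0.05)  # keep polling until the window closes
    # The legally loaded line: what happens when the human runs out of time?
    # If the default is to proceed, the "human in the loop" is an override switch,
    # not a decision-maker.
    return True
```

The interesting line is the last one. Shrink the window to a fraction of a second, or make “proceed” the default, and the human approval is formally present but practically empty.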

Level 3: Fully Autonomous (The Independent Operator)

These systems select targets and engage them without any human involvement. They’re given a mission, let loose, and they figure out the rest.

Examples:

  • Turkey’s Kargu-2 drone
  • Israel’s Harpy “suicide drone”

And here’s where it gets genuinely scary…

The “Black Box” Problem: When Nobody Knows What the AI Is Thinking

Remember how I said algorithms work in ways even their creators can’t explain? This isn’t hyperbole. It’s the fundamental challenge of modern AI.

Here’s how it works:

Traditional programming: A human writes explicit instructions. “If X, then Y.” You can read the code and understand exactly what the program will do.

Machine learning: You feed an AI thousands (or millions) of training examples and tell it to figure out patterns. The AI develops its own internal logic for making decisions—often in ways that are completely opaque to humans.
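
If it helps to see the difference in miniature, here’s a small sketch using Python and scikit-learn. The features, the training data, and the “hostile” label are all made up for illustration; no real targeting system is being described. The contrast is the point:

```python
# Illustrative sketch only: the same question answered two ways.
# Features, data, and labels are invented for illustration.

import numpy as np
from sklearn.neural_network import MLPClassifier

# 1) Traditional programming: the logic is right there in the code.
def is_hostile_rule(carrying_weapon: bool, wearing_uniform: bool) -> bool:
    return carrying_weapon and wearing_uniform

# 2) Machine learning: the "logic" lives in thousands of learned weights.
rng = np.random.default_rng(0)
X = rng.random((500, 8))                   # 500 examples, 8 abstract features
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)  # some pattern hidden in the data

model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X, y)

new_case = rng.random((1, 8))
print(model.predict(new_case))             # a verdict, with no readable rule behind it
print(sum(w.size for w in model.coefs_), "learned parameters")
```

You can audit the first function line by line. The second you can only probe from the outside: the weights are all there, but they don’t tell you why this particular case got this particular label. That, in its simplest form, is the black box problem.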

A concrete example: Let’s say you train an AI to identify “hostile forces” by showing it thousands of images. The AI learns to recognize patterns. But what patterns exactly?

  • Did it learn to identify weapons?
  • Or military uniforms?
  • Or maybe it learned that “hostile” correlates with age, gender, or skin color?
  • Or some combination of factors we’d never think of?

We literally don’t know. Even the people who built the system often can’t explain why it makes specific decisions.

Now imagine that AI is making kill decisions.

What’s Actually Out There Right Now (2025)

Let me give you the tour of autonomous weapons that are already operational:

Defense Systems (The “It’s Coming Right At Us” Category)

  • Iron Dome (Israel)
  • Patriot missiles (U.S.)
  • S-400 (Russia)

These systems detect incoming threats and automatically engage. Sure, humans can theoretically override them. But when a missile is traveling at Mach 5, “human oversight” is more theoretical than practical.

Loitering Munitions (The “Suicide Drones”)

These are drones that hang around an area—sometimes for hours—waiting to identify and attack targets on their own:

  • Kargu-2 (Turkey): The Libya incident. UN investigators reported it may have autonomously hunted and attacked retreating soldiers. Read that again: it hunted them. Without a human giving the order.
  • Harop (Israel): Circles for hours, then attacks radar emissions automatically. You launch it, it figures out the rest.
  • Lancet (Russia): Being used extensively in Ukraine with increasing autonomy for target selection.

AI-Enhanced Surveillance (Big Brother with Targeting Recommendations)

Project Maven (U.S.): AI analyzes drone footage to identify potential targets, then hands that analysis to human operators.

But here’s the question nobody wants to answer: If AI finds the target, analyzes the threat, and recommends the strike, but a human pushes the button… who really made the decision?

What’s Coming Next (And It’s Coming Fast)

Drone Swarms (2025-2028)

Imagine dozens or hundreds of drones coordinating with each other, making collective decisions about how to search an area and engage targets. No single drone is controlled by a human. The swarm operates as a hive mind.

Who’s responsible when the swarm makes a mistake? The person who launched it? All of them? None of them?

Hypersonic Autonomous Weapons (Already Here)

Weapons that travel at five times the speed of sound or faster. At those speeds, there is no time for a human to intervene in the final seconds of flight. The weapon has to make its own choices about terminal guidance and target discrimination.

Predictive Targeting (The Minority Report Problem)

AI systems that analyze “patterns of life”—communications, movements, behaviors—to predict where targets will be and what they’ll do.

Here’s the nightmare scenario: An AI predicts you’re likely to commit hostile acts based on your pattern of behavior. It recommends a strike. A human approves it based on the AI’s “high confidence” assessment.

You die. Turns out the AI was wrong. You weren’t planning anything.

Who committed the war crime? The AI that made the wrong prediction? The human who trusted it?

Why Militaries Are Racing to Build This Stuff

Look, I’m not naive. I understand why every military on Earth is investing billions in autonomous weapons. The advantages are massive:

Speed: AI makes decisions in milliseconds. Human-operated systems are too slow to compete.

Scale: One operator can manage hundreds of autonomous drones. You multiply your force without needing more soldiers.

Safety: No soldiers at risk. For politicians, this is huge—you can wage war without body bags coming home.

Cost: Once you’ve built them, autonomous systems are cheaper to operate than maintaining human forces.

And here’s the kicker: Even if your country thinks autonomous weapons are ethically problematic, you feel pressure to develop them anyway. Because if your adversary has them and you don’t, you might lose the next war.

It’s a classic arms race. Nobody wants to be the one who brought conventional forces to an AI fight.

The Bottom Line

Autonomous weapons aren’t coming. They’re here. Right now. Being used in actual conflicts. Making kill decisions with minimal or zero human involvement.

And every month, they’re getting more sophisticated.

The law isn’t just a little behind. It’s decades behind.

Which brings us to the real question: When these systems inevitably commit war crimes (and they will), who faces justice?

Spoiler alert: Right now, the answer is nobody.


III. Why Every Legal Tool We Have Completely Fails

Okay, so we’ve got autonomous weapons committing what would clearly be war crimes if a human did them. Surely international criminal law has something we can use to hold people accountable, right?

Wrong.

Let me show you why every single legal pathway we have runs into a brick wall.

Option 1: Prosecute Someone Directly for the Crime

What the law says: To convict someone of a crime, you need to prove two things:

  1. They committed the act (actus reus)
  2. They had criminal intent—a “guilty mind” (mens rea)

This is foundational stuff. It’s why we don’t throw people in prison for accidents. Criminal law punishes people who chose to do something wrong.

The problem: An algorithm doesn’t have a mind. It can’t “intend” anything.

When an AWS misidentifies a school bus full of kids as a military convoy and blows it up, where’s the criminal intent?

  • The AI? It’s software. It processed data. It has no consciousness, no intent, no capacity for guilt.
  • The programmer? She wrote code years ago to help protect soldiers. She didn’t intend for this specific tragedy to happen.
  • The commander? He deployed a system that passed all its tests and got legal approval. He had no specific intent to kill civilians.

“But what if we prosecute the robot itself?”

Some scholars have seriously suggested treating AI systems like corporations—as “legal persons” that can be held responsible.

Here’s why that’s absurd:

  1. Criminal punishment is supposed to deter future crimes. You can’t deter an algorithm. Prison doesn’t scare software. Fines don’t change code.
  2. Criminal trials express moral condemnation. We’re affirming society’s values, saying “this person violated our most fundamental norms.” You can’t morally condemn an algorithm.
  3. Victims deserve human accountability. When war crimes happen, victims need to see human beings held responsible. A trial of a robot doesn’t provide closure or justice.

As Hannah Arendt wrote about the Eichmann trial, these proceedings aren’t just about punishment—they’re about publicly affirming our shared humanity. Robot defendants can’t serve that purpose.

Dead end #1.

Option 2: Hold the Commander Responsible

This seems more promising. Commanders are supposed to be responsible for what happens under their command, right?

What the law says: Under “command responsibility,” military leaders are criminally liable for crimes by their subordinates if:

  • They knew or should have known the crimes would happen, AND
  • They failed to take reasonable steps to prevent them

Why this works for human soldiers:

If a commander knows his troops are undisciplined, prone to violence, and heading into a civilian area… and he does nothing to stop them… and they commit atrocities… he’s criminally responsible. He should have known. He should have acted.

Why this falls apart for AI:

Problem #1: What does “should have known” mean for AI?

Let’s say a commander deploys an AWS. He:

  • Ran extensive tests
  • Got legal approval from military lawyers
  • Used it only in authorized ways
  • Had every reason to believe it would work correctly

Then the AI commits a war crime through some decision-making pattern it developed through machine learning—something nobody could have predicted.

Can we really say the commander “should have known” this would happen? How could he? Even the engineers who built the system can’t predict what it will do in every situation because of the black box problem.

Problem #2: What are “reasonable steps” to prevent AI war crimes?

For human soldiers, reasonable steps are clear:

  • Train them properly
  • Supervise them closely
  • Investigate problems
  • Discipline violations

But for AI systems:

  • Training? Machine learning systems train themselves. And they develop unexpected behaviors despite careful initial training.
  • Supervision? AI makes decisions in milliseconds across huge areas. You can’t supervise that.
  • Investigation? Good luck investigating why a neural network made a specific decision. The black box problem means even technical experts often can’t explain it.
  • Discipline? You can’t discipline an algorithm.

“What about strict liability? Just make commanders automatically responsible?”

Some people argue commanders should face automatic criminal liability for any war crime their AWS commits, regardless of fault.

But international criminal law has never worked that way. We’ve always required some level of culpability—some failure or wrongdoing on the commander’s part.

Strict criminal liability without fault? That’s not justice. That’s just finding someone to blame because we need a scapegoat.

Problem #3: Do commanders even “control” autonomous systems?

Command responsibility requires “effective command and control.” But what does control mean when the whole point is that the system is autonomous?

Commanders can:

  • Turn the system on or off
  • Set rules of engagement
  • Designate where it operates

But they can’t control individual targeting decisions. That’s the entire purpose of autonomy.

Is that enough “control” for criminal responsibility? The law isn’t clear.

Dead end #2.

Option 3: Prosecute the Defense Contractor

Maybe we should go after the companies that build these things?

What the law says: You can be prosecuted for “aiding and abetting” war crimes if you:

  1. Provide practical assistance
  2. That has a substantial effect on the crime
  3. While knowing it will facilitate crimes

Why this seems promising:

Companies that build weapons systems knowing they can’t distinguish civilians from combatants should face consequences, right?

Why it doesn’t work:

Defense #1: “We designed it for lawful use”

Defense contractors argue their systems serve legitimate military purposes. Any war crimes are unintended misuse or malfunction.

This defense is especially strong when governments have approved the system as legally compliant. The company relied on official assurances that their product meets legal requirements.

Defense #2: “We couldn’t predict this specific failure”

Manufacturers designed systems to comply with international law. They tested them extensively. They can’t foresee every possible AI behavior, especially when systems learn and modify themselves after deployment.

Courts have consistently required actual knowledge that assistance will facilitate crimes. “Should have known” isn’t enough for aiding and abetting.

Defense #3: “Too many people between us and the crime”

An AWS passes through many hands before any specific war crime occurs:

  • Military procurement officials evaluate and purchase it
  • Commanders decide when and where to deploy it
  • Operators set its parameters for specific missions
  • The AI makes the actual targeting decision

With so many intervening actors and decisions, how can you prove the manufacturer’s assistance had “substantial effect” on this particular war crime?

The causation nightmare:

Plus, machine learning systems modify themselves. They develop targeting behaviors through autonomous learning, potentially years after the manufacturer sold them.

Holding manufacturers liable for behaviors their systems learned independently, long after deployment, seems like a huge stretch.

Dead end #3.

Option 4: Joint Criminal Enterprise (Everyone’s in It Together)

This is a doctrine developed at the Yugoslavia tribunal. The idea: when groups work together toward criminal ends, everyone in the group shares responsibility for crimes committed, even if they didn’t personally commit them.

Could this apply to AWS?

Imagine: Developers, military planners, and commanders all work together on an AI targeting system. They give it such loose parameters that civilian casualties become routine and predictable. Could they all be part of a “joint criminal enterprise”?

Why this doesn’t work:

Problem #1: Nobody’s pursuing a criminal purpose

Joint criminal enterprise requires a shared criminal purpose. But AWS participants generally believe they’re pursuing legitimate military objectives:

  • Engineers think they’re building lawful defense systems
  • Commanders think they’re using approved military equipment
  • Planners think they’re developing legal capabilities

They’re not like concentration camp guards who knew they were participating in criminal operations.

Problem #2: Where does the “enterprise” begin and end?

Is the “enterprise”:

  • Just the specific AWS development team?
  • The entire military AI program?
  • Everyone in the military who uses AI?

Without clear boundaries, liability becomes impossibly broad. Do you prosecute every engineer who touched any code? Every officer who used any AI system?

Problem #3: The specific crimes may not be foreseeable

Even the broadest version of this doctrine requires that crimes be foreseeable consequences of the plan.

But when AI develops unanticipated behaviors through machine learning, how is that foreseeable? The participants may have genuinely believed the system would operate lawfully.

Dead end #4.

Why This All Matters

Here’s what we’re left with:

  • Can’t prosecute the AI – No guilty mind
  • Can’t prosecute commanders – AI’s unpredictability defeats “should have known”
  • Can’t prosecute manufacturers – Knowledge requirements and too many intervening causes
  • Can’t use joint enterprise – No common criminal purpose

The result? A responsibility vacuum.

Nobody’s criminally liable. The war crime happened, victims are dead, but international criminal law points at everyone and no one at the same time.

This creates horrifying incentives:

Military forces might actually prefer autonomous weapons specifically because they insulate decision-makers from criminal liability.

Think about that. A commander might choose to use an AWS instead of human soldiers, not because it’s more effective, but because if things go wrong, he’s less likely to face war crimes charges.

We’re creating a system where algorithmic warfare is legally safer for commanders than traditional warfare—even when it produces the same atrocities.

And here’s the kicker: Human soldiers still get prosecuted for war crimes. But commanders using AI that commits identical acts? They walk free.

Two-tier justice: Criminal liability for humans, impunity for algorithms.

That’s where we are right now. And if we don’t fix it fast, that’s where we’re staying.


IV. What Are the UN and the ICC Doing? (Spoiler: Not Enough)

The UN Talks: A Decade of Discussion, Little Action

Since 2014, the United Nations has been discussing autonomous weapons through the Convention on Certain Conventional Weapons (CCW). After 10 years of meetings, here’s what they’ve accomplished:

What everyone agrees on:

  1. International humanitarian law (IHL) applies to autonomous weapons
  2. Humans must remain responsible for AI decisions
  3. Human-machine interaction needs careful consideration
  4. Some weapons systems might inherently violate IHL

What they can’t agree on:

  • What counts as an autonomous weapon – Narrow vs. broad definitions
  • What to do about them – Ban them completely vs. voluntary best practices
  • How to verify compliance – How would inspectors check classified military AI?

The real problem: Major military powers (U.S., Russia, China, Israel) resist binding restrictions. They argue AWS can actually improve compliance with IHL by removing human emotions from targeting.

Campaign to Stop Killer Robots

Civil society groups are pushing for a complete ban, similar to successful campaigns against landmines and cluster munitions. Their arguments:

  • Martens Clause: Weapons that violate “principles of humanity and dictates of public conscience” should be banned—even if not specifically prohibited
  • Can’t distinguish civilians: Current AI cannot reliably make distinction/proportionality judgments
  • Human dignity: Being targeted by an algorithm violates human dignity

Status: Despite sustained advocacy, major military powers block a preemptive ban.

The ICC’s Position: Cautious and Vague

Current situation:

  • No ICC cases involving autonomous weapons have reached trial
  • The Office of the Prosecutor released a 2024 policy paper acknowledging the challenge but providing no specific guidance

What the ICC has said:

“Emerging technologies, including artificial intelligence and autonomous systems, raise novel questions about criminal responsibility under the Rome Statute. The Office will continue monitoring technological developments…”

Translation: “We know this is a problem, but we’re not sure what to do about it.”

Possible first cases:

  1. The easy case: Commander deploys obviously defective AWS that predictably kills civilians (fits existing command responsibility)
  2. The corporate case: Manufacturer knowingly sells AWS that can’t distinguish military from civilian targets
  3. The systematic use case: Military force uses AWS with inadequate oversight, creating pattern of civilian casualties

The hard case (and most likely scenario): A thoroughly tested, legally approved AWS commits war crimes through unanticipated behavior developed through machine learning. Current law has no answer for this.

Regional Responses: Small Steps

European Union: The AI Act (2024) imposes strict oversight requirements on “high-risk” AI, but it excludes systems developed or used exclusively for military purposes from its scope, leaving military AI to other frameworks

National positions:

  • Norway: Won’t develop or use fully autonomous weapons
  • Austria: Supports international restrictions
  • China: Endorses “meaningful human control” (while investing billions in military AI)

Reality check: These positions create moral pressure but have no enforcement mechanism.

Bottom Line

Ten years of UN discussions have produced general principles but zero binding restrictions. The ICC acknowledges the problem but offers no clear prosecution pathway. Civil society advocates for bans that major powers ignore.

Meanwhile, autonomous weapons are being deployed and used in actual conflicts right now.

Next section: Concrete legal solutions to fix this mess.


V. How to Fix This: Concrete Legal Solutions

The accountability gap won’t close itself. Here are practical legal reforms—some requiring new treaties, others just requiring courts to interpret existing law differently.

Quick Fixes: Reinterpreting Current Law

These solutions work within the Rome Statute’s existing framework:

1. Expand Command Responsibility

Courts could interpret “should have known” more strictly when commanders deploy AI:

  • Deployment = knowledge: Treating the decision to deploy AWS as the relevant moment for assessing commander knowledge (not individual targeting decisions)
  • Lower the bar: Given AI’s known limitations, presume commanders should foresee discrimination failures unless they prove adequate safeguards
  • Ongoing monitoring duty: Require commanders to continuously monitor AWS performance and immediately withdraw systems showing failures

2. Strengthen Corporate Liability

Make it easier to prosecute manufacturers:

  • Constructive knowledge: Companies should know current AI can’t reliably distinguish civilians—that’s sufficient for liability
  • Duty to warn: Failure to clearly warn military customers about AWS limitations = contributing to war crimes
  • Post-sale obligations: If manufacturers learn deployed systems are failing, they must notify customers or face liability

3. Adapt Joint Criminal Enterprise

Apply JCE to AWS development and deployment:

  • Systems-based liability: Participants in creating inherently illegal weapons systems share responsibility for resulting crimes
  • Willful blindness: Developers/commanders who deliberately avoid testing that would reveal problems face liability

Major Reforms: Changing International Law

1. Amend the Rome Statute

Add AWS-specific provisions to Article 8 (war crimes):

New War Crime: Deploying Inadequate Autonomous Systems

It is a war crime to deploy autonomous weapons systems incapable of reliably distinguishing between civilians and combatants or making proportionality assessments.

New War Crime: Failing to Maintain Human Control

It is a war crime to use autonomous weapons systems without meaningful human control over targeting decisions.

Amended Command Responsibility (Article 28):

Make commanders criminally responsible for AWS war crimes when they deploy systems without ensuring:

  • Technical capability to comply with IHL
  • Adequate human oversight
  • Effective kill switches
  • Comprehensive testing under realistic conditions

Corporate Criminal Responsibility (New Article 25bis):

Hold corporations criminally liable for:

  • Manufacturing AWS that cannot reliably comply with IHL
  • Failing to warn about system limitations
  • Continuing to supply AWS after learning of systematic failures

2. New Treaty Protocol

A Protocol VI to the Convention on Certain Conventional Weapons could establish:

Prohibited Systems:

  • AWS without meaningful human control over targeting
  • AWS incapable of distinguishing military from civilian targets
  • AWS without effective human intervention mechanisms
  • Swarm systems lacking human oversight of individual units

Required for Legal Systems:

  • Pre-deployment testing and certification
  • Real-time human monitoring
  • Automatic recording of all targeting decisions
  • Immediate notification to the UN upon any civilian casualty

Verification:

  • International registry of deployed AWS
  • Technical inspection protocols
  • Incident investigation procedures
  • Real sanctions for violations

3. Request ICJ Advisory Opinion

Ask the International Court of Justice to clarify:

  • Do fully autonomous weapons violate IHL as a class?
  • Does deploying AWS without meaningful human control violate the Martens Clause?
  • What legal obligations do states have regarding AWS development?
  • Can AI satisfy IHL’s judgment requirements?

Interim Solutions: What We Can Do Now

While waiting for major reforms, implement these measures:

Technical Standards

  • ISO/IEEE standards for military AI testing
  • Minimum explainability requirements
  • Human-machine interface standards
  • Safety certification (like aviation standards)

Military AI Ethics Boards

  • Independent review before deployment
  • Ongoing monitoring of deployed systems
  • Authority to reject deployment proposals
  • Public reporting (within security limits)

Voluntary Military Commitments

  • Maintain human control over all targeting decisions
  • Implement “human in the loop” requirements
  • Comprehensive testing before deployment
  • Immediate withdrawal upon discrimination failures
  • Transparency about capabilities (general parameters)

Corporate Governance Requirements

  • Internal ethics review processes
  • Human rights impact assessments
  • Product stewardship for deployed systems
  • Report discrimination failures to customers and international bodies

The Challenge: Technology Moves Fast

By the time treaties are negotiated and ratified, AWS capabilities will have evolved. Legal frameworks must:

  • Focus on principles (meaningful human control) not specific technologies
  • Emphasize outcomes (IHL compliance) not means
  • Build in adaptability (regular review conferences, technical advisory bodies)
  • Establish clear responsibility chains from development through deployment

What Needs to Happen Next

Short-term (2025-2027):

  • ICC issues preliminary guidance on AWS prosecution
  • Major military powers adopt voluntary best practices
  • Technical standards organizations publish AWS safety protocols

Medium-term (2027-2030):

  • Rome Statute amendments proposed and debated
  • Regional courts begin prosecuting AWS-related cases
  • UN produces draft treaty protocol

Long-term (2030+):

  • Binding international treaty enters force
  • First major ICC prosecution of AWS war crime
  • International registry of autonomous weapons established

Without these steps, we face systematic impunity for algorithmic war crimes.


VI. Common Objections (And Why They’re Wrong)

“But AI Could Be More Humane Than Human Soldiers!”

The argument: AI doesn’t panic, seek revenge, or get emotional. Properly programmed AWS might make better legal decisions than stressed soldiers.

Why it fails:

  1. We’re comparing apples to oranges – The question isn’t whether AI is better than the worst human soldiers, but whether it can meet IHL’s absolute requirements
  2. Current AI demonstrably can’t do this – No existing system can reliably assess proportionality, recognize surrender, or understand complex cultural contexts
  3. Future promises don’t solve current problems – Advocates say AI will get better, but systems are being deployed now
  4. The accountability problem remains – Even if AI made perfect decisions, who’s responsible when it doesn’t? The responsibility vacuum still exists

“Legal Restrictions Will Put Us at Military Disadvantage”

The argument: If we restrict AWS and adversaries don’t, we’ll lose future wars. Opponents using fully autonomous systems will overwhelm forces limited to human decision-making.

Why it fails:

  1. All IHL restrictions face this objection – Adversaries might ignore prohibitions on targeting civilians, torture, or treachery. We maintain these rules anyway
  2. Autonomous weapons create risks too – They may attack friendly forces, escalate conflicts unpredictably, or create diplomatic disasters
  3. Regulations can accommodate security concerns – Rules don’t require banning all autonomy—just meaningful human control, testing, and accountability
  4. Major powers share interests – Russia, China, and the U.S. all face domestic pressure on AI ethics and risk being targeted by adversaries’ AWS. Mutual restrictions may serve everyone’s interests

“The ICC Already Has Too Many Cases”

The argument: The ICC is understaffed, underfunded, and struggling with current cases. Adding complex AWS prosecutions requiring technical expertise will overwhelm the Court.

Why it fails:

  1. Ignoring the problem makes it worse – The accountability gap undermines the Court’s fundamental purpose. If AWS enable war crimes without consequences, the entire system loses legitimacy
  2. Early attention builds capacity – The Court can develop expertise more effectively when addressing issues prospectively rather than reactively
  3. These aren’t entirely new frameworks – Many solutions build on existing doctrine (command responsibility, corporate liability) rather than creating wholly new approaches
  4. Difficulty doesn’t eliminate duty – The Court can’t ignore crimes because they’re hard to prosecute

“Technology Will Solve These Problems”

The argument: Current AI limitations are temporary. As technology improves, AWS will reliably comply with IHL. Better testing, explainable AI, and technical standards will address accountability concerns without legal reforms.

Why it fails:

  1. This is a legal problem, not just technical – Even perfect AWS raise questions about human responsibility for algorithmic decisions
  2. How long should we wait? – Systems are being deployed now. We can’t defer accountability indefinitely hoping for future improvements
  3. Some limitations may be intractable – The “black box problem” stems from fundamental features of machine learning, not temporary technical gaps
  4. The arms race continues – As defensive systems improve, offensive systems adapt. Competition doesn’t necessarily improve IHL compliance
  5. Mistakes still happen – Even improved systems will occasionally err. When they do, someone must be accountable


VII. The Choice We’re Making Right Now

Let me be blunt: We’re at a fork in the road, and we’re running out of time to choose which path to take.

On one path, we adapt our legal systems to ensure human accountability remains central to warfare, no matter how advanced our weapons become.

On the other path, we create a world where war crimes go unpunished because “the algorithm did it.”

Right now, we’re sleepwalking down the second path.

Here’s What Keeps Me Up at Night

Autonomous weapons aren’t some distant future threat. They’re deployed. Right now. In actual conflicts. Making kill decisions with minimal or zero human involvement.

And when these systems inevitably make mistakes—when they misidentify civilians, when they fail to recognize surrender, when they violate the laws of war—nobody faces justice.

The commander says: “I followed all the rules. The system was approved.”

The programmer says: “I wrote lawful code. I couldn’t predict this specific failure.”

The manufacturer says: “We built to specifications. This wasn’t our intent.”

The AI says nothing. Because it’s just code.

And the victims? They get nothing. No accountability. No justice. No acknowledgment that what happened to them was a crime.

This Isn’t Just About Legal Theory

Let me paint you a picture of what we’re heading toward:

It’s 2030. There’s a conflict in some corner of the world you might not even hear about on the news. Both sides are using autonomous weapons—because at this point, everybody is.

An AI-controlled drone swarm sweeps through a village. The algorithms decide that certain patterns of movement, certain heat signatures, certain communications patterns indicate hostile forces.

They’re wrong.

Twenty-three civilians die. Farmers. Teachers. Kids walking home from school.

The commander who deployed the swarm followed every protocol. The system passed every test. Military lawyers signed off on it. The manufacturer met all specifications.

In 2030, with our current legal framework, nobody goes to trial. Nobody gets convicted. The incident gets investigated, a report gets filed, maybe there’s some diplomatic tension.

But criminal accountability? Zero.

Now multiply that incident by dozens, hundreds, thousands of times across multiple conflicts. Because if nobody’s being held accountable, what’s stopping anyone from deploying increasingly autonomous systems?

That’s the future we’re building right now.

What’s Really at Stake

Let’s zoom out for a moment. This isn’t just about autonomous weapons. It’s about whether international criminal law—the entire framework we’ve built since the Nuremberg trials—can survive the 21st century.

Seventy-five years ago, the world came together and said: “Never again.”

We said individuals would be held accountable for atrocities. No more hiding behind “just following orders.” No more “I was just doing my job.” If you commit war crimes, you face justice.

That principle—individual criminal accountability—is the foundation of the entire system. The International Criminal Court, the tribunals, all of it rests on that bedrock idea.

And now we’re creating a loophole big enough to drive an autonomous weapons system through.

If we allow algorithmic warfare to proceed without clear accountability, we’re not just creating impunity for future war crimes. We’re telling the world that our most fundamental legal and moral principles apply only when it’s convenient.

The Solutions Exist (If We Act Fast)

Look, I’ve laid out concrete solutions in this article:

Short-term fixes:

  • Courts interpreting command responsibility more broadly
  • Prosecutors pursuing corporate liability more aggressively
  • Ethics review boards evaluating AWS before deployment
  • Technical standards for AI safety and testing

Long-term reforms:

  • Amending the Rome Statute to address AWS specifically
  • Negotiating new treaty protocols
  • Establishing international registries and verification
  • Creating clear legal frameworks before the technology becomes ubiquitous

None of this is impossible. It just requires political will.

The question is: Do we have the courage to act before it’s too late?

What Needs to Happen (And What You Can Do)

If you’re a policymaker: Push for AWS restrictions in every international forum. Support Rome Statute amendments. Establish national ethics review boards with real teeth. Don’t wait for consensus—lead.

If you’re in the military: Demand clear accountability frameworks before deploying new autonomous systems. Implement strict human oversight requirements. Create robust testing protocols. Remember: “legal approval” isn’t enough if the system can’t reliably comply with IHL.

If you’re a prosecutor: Start building expertise now. Test expansive interpretations of existing law. Be ready for the first AWS cases. Don’t wait until the problem is everywhere.

If you work for a defense contractor: Implement real ethics review, not just legal compliance boxes to check. Provide clear warnings about system limitations. Monitor deployed systems. Report failures. Remember: “We met the specs” won’t be a defense when your company’s product commits war crimes.

If you’re a concerned citizen: Support organizations like the Campaign to Stop Killer Robots. Pressure your government to negotiate binding restrictions. Demand transparency about military AI programs. Make noise. This issue gets ignored because most people don’t understand it—change that.

The Window Is Closing

Here’s the brutal truth: Every month that passes, autonomous weapons become more normalized. More countries deploy them. More military doctrines incorporate them. More industries invest in them.

Once algorithmic warfare becomes the norm, establishing accountability becomes exponentially harder. You can’t put the genie back in the bottle.

We have maybe 5-10 years to get this right. Maybe less.

After that, we’ll be trying to retrofit accountability onto systems that are already everywhere, shaped by a decade or more of development without legal constraints, defended by powerful economic and military interests who don’t want restrictions.

Or we can act now, while there’s still time to shape how this technology develops and how it’s used.

A Final Thought

Hannah Arendt wrote about the “banality of evil”—how ordinary people commit atrocities by just following procedures, just doing their jobs, just trusting the system.

Imagine a future where war crimes are committed by algorithms that learned their targeting behaviors through opaque processes nobody fully understands, deployed by commanders following approved protocols, built by engineers who genuinely believed they were creating lawful systems.

Nobody intends evil. But evil happens anyway. And nobody faces justice because nobody technically did anything wrong.

That’s not the world we want to build.

The technology is here. The legal gaps are real. The consequences will be measured in innocent lives.

The choice is ours. But we need to choose. Now.

Because here’s what I know for certain: The future of warfare might be autonomous.

But the future of war crimes accountability must remain human.

And if we let that principle die without a fight, we don’t deserve the legal system our grandparents built after the Holocaust.

What are we going to do about it?


Take Action Now

  • Support the Campaign to Stop Killer Robots
  • Demand your government support AWS restrictions at the UN
  • Share this article with policymakers, military leaders, and concerned citizens
  • Stay informed about developments at the ICC and in autonomous weapons policy
  • Speak up – The biggest barrier to action is that most people don’t know this crisis exists

The accountability gap won’t close itself. It requires people like you deciding it matters enough to act.


Bibliography

Treaties and Statutes

Rome Statute of the International Criminal Court, July 17, 1998, 2187 U.N.T.S. 90.

Geneva Convention (IV) Relative to the Protection of Civilian Persons in Time of War, Aug. 12, 1949, 6 U.S.T. 3516, 75 U.N.T.S. 287.

Protocol Additional to the Geneva Conventions of 12 August 1949, and Relating to the Protection of Victims of International Armed Conflicts (Protocol I), June 8, 1977, 1125 U.N.T.S. 3.

Convention on Certain Conventional Weapons, Oct. 10, 1980, 1342 U.N.T.S. 137.

Cases

Prosecutor v. Tadić, Case No. IT-94-1-A, Judgment (Int’l Crim. Trib. for the Former Yugoslavia July 15, 1999).

Prosecutor v. Perišić, Case No. IT-04-81-A, Judgment (Int’l Crim. Trib. for the Former Yugoslavia Feb. 28, 2013).

Prosecutor v. Taylor, Case No. SCSL-03-01-A, Judgment (Special Court for Sierra Leone Sept. 26, 2013).

Prosecutor v. Bemba, ICC-01/05-01/08, Judgment (Int’l Crim. Ct. Mar. 21, 2016).

Prosecutor v. Al Mahdi, ICC-01/12-01/15, Judgment and Sentence (Int’l Crim. Ct. Sept. 27, 2016).

Prosecutor v. Ongwen, ICC-02/04-01/15, Trial Judgment (Int’l Crim. Ct. Feb. 4, 2021).

Books

Arendt, Hannah. Eichmann in Jerusalem: A Report on the Banality of Evil (1963).

Docherty, Bonnie. Killing Made Easy: The Case Against Autonomous Weapons (2023).

Heyns, Christof. Autonomous Weapons Systems and Human Rights Law (2016).

Sagan, Scott D. The Limits of Safety: Organizations, Accidents, and Nuclear Weapons (1993).

Sassòli, Marco. International Humanitarian Law: Rules, Controversies, and Solutions to Problems Arising in Warfare (2019).

Scharre, Paul. Army of None: Autonomous Weapons and the Future of War (2018).

Strawser, Bradley Jay (ed.). Killing by Remote Control: The Ethics of an Unmanned Military (2013).

Articles and Reports

Asaro, Peter. “On Banning Autonomous Weapon Systems: Human Rights, Automation, and the Dehumanization of Lethal Decision-Making,” 94 Int’l Rev. Red Cross 687 (2012).

Boulanin, Vincent & Maaike Verbruggen. “Mapping the Development of Autonomy in Weapon Systems,” Stockholm International Peace Research Institute (2017).

Crootof, Rebecca. “The Killer Robots Are Here: Legal and Policy Implications,” 36 Cardozo L. Rev. 1837 (2015).

Crootof, Rebecca. “War Torts: Accountability for Autonomous Weapons,” 164 U. Pa. L. Rev. 1347 (2016).

Garcia, Denise. “Lethal Artificial Intelligence and Change: The Future of International Peace and Security,” 1 Int’l Stud. Rev. 1 (2018).

Guarini, Marcello & Paul Bello. “Robotic Warfare: Some Challenges in Moving from Non-Autonomous to Autonomous Systems,” in Robot Ethics (2011).

Horowitz, Michael C. & Paul Scharre. “Meaningful Human Control in Weapon Systems: A Primer,” Center for a New American Security (2021).

International Committee of the Red Cross. “Autonomy, Artificial Intelligence and Robotics: Technical Aspects of Human Control” (2019).

Lin, Patrick, George Bekey & Keith Abney. “Autonomous Military Robotics: Risk, Ethics, and Design,” California Polytechnic State University (2008).

O’Connell, Mary Ellen. “Banning Autonomous Killing: The Legal and Ethical Requirement that Humans Make Near-Time Lethal Decisions,” in The American Way of Bombing (2014).

Roff, Heather M. “The Strategic Robot Problem: Lethal Autonomous Weapons in War,” 13 J. Mil. Ethics 211 (2014).

Sharkey, Noel. “Saying ‘No!’ to Lethal Autonomous Targeting,” 9 J. Mil. Ethics 369 (2010).

United Nations Institute for Disarmament Research. “The Weaponization of Increasingly Autonomous Technologies: Concerns, Characteristics and Definitional Approaches” (2017).

United Nations Office of the High Commissioner for Human Rights. “Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions,” UN Doc. A/HRC/23/47 (2013).

Wagner, Markus. “The Dehumanization of International Humanitarian Law: Legal, Ethical, and Political Implications of Autonomous Weapon Systems,” 47 Vand. J. Transnat’l L. 1371 (2014).

