Arguments for and against fully autonomous weapons

This list will be used to inform a mock debate to be held at the Science Museum in London on the evening of Wednesday 31 May 2017. Arguments will be presented but not challenged, so this document is intended purely to set out the initial arguments for and against, not to evaluate or challenge them.

Please feel free to send any further arguments for or against to me at a.martin@gold.ac.uk.

Arguments for

  1. Robots can be more precise than humans, with extremely accurate sensors and motors. This precision can reduce civilian casualties and environmental destruction.
  2. Robots will always follow the rules of war, never becoming emotionally unstable in the heat of battle. This will also help avoid illegal acts and unnecessary casualties.
  3. When a robot is destroyed, it costs the country only money; there are no condolence letters being sent home.
  4. Their behaviour is entirely predictable: you can see how they will act beforehand, and this could even be shared with your enemy, so they know exactly when they will or will not be considered a threat.
  5. A robot can be held fully accountable, as it can record its decision-making process and all sensor readings for future investigation.
  6. Robots are the only way we can defend against other robots. Imagine a system that must shoot down incoming supersonic missiles: it needs to respond far faster than any human ever could.
  7. Robots could serve as a universal peace force, taking no sides and simply suppressing violence from either party.
  8. Extra battlefield knowledge from an army of sensing robots could guide the attacks of human forces, or of more robots, making those attacks far more efficient.
  9. Robots are more likely to take prisoners and treat them better, as they won't be afraid of reprisals.
  10. Robots don't even need to kill and destroy, they could disable equipment and render combatants ineffective in innovative ways.
  11. Robots can be trained in hundreds of extra skills, such as understanding all languages and cultural mannerisms, linguistic analysis, and medical care, making them able to take much better-informed decisions and to perform humanitarian acts.

Arguments against

  1. If generals are committing only robots to war, rather than people they have met personally, they will weigh the risk less heavily and are more likely to commit to a war than they otherwise would.
  2. Explaining the rules of war in a language that robots can understand is an impossible task. The rules won't change as the situation changes, so you'll have a robot acting in a way that suits one situation when the circumstances have actually become quite different.
  3. Robots in the real world go wrong: no one has ever made a robot that can adapt to the complexities of the real world, let alone to an environment where people are actively trying to defeat it.
  4. The person, or team, writing the rules for detecting a combatant and deploying the weapon has no idea how the enemy will adapt in response to the methods used to identify them as targets. Imagine they started painting their guns, hiding them under trap doors, or disguising them as legal tools; the robot would then either risk targeting civilians or become useless.
  5. Humans can decide for themselves when the battle is over. If a robot cannot decide to leave the battle, it may remain deadly to people when there is no point in fighting further; if it can decide to leave, the enemy will be motivated to fake the signal, or the state, that makes the robot think the battle is over.
  6. People may start relying on the decisions of the robots as gospel. If a robot decides to kill someone, then rather than questioning why the robot decided that, people will assume the victim must have been worth attacking, or else the robot would not have attacked.
  7. If the only way to survive a war against robots is to be identified as a non-combatant, then the opposing army will necessarily adapt to appear like non-combatants, making the task of distinguishing the two ever harder. You get an "identification arms race", with ever more civilian casualties.
  8. If something goes wrong, it's unclear who will be responsible: the general, the robot technicians, the programming team, or the sensor manufacturer. And if no one feels that the responsibility is theirs, no one will consider it fully.