9

As the reports I have linked below refer to military machines, namely robots, drones, and the like, these are what I mean by autonomous machine. These military machines have, to my knowledge, until now had to be controlled by a military operative, whether in the country the machine is in or remotely via satellite from another country. These reports suggest that no operative is controlling these machines and that the machines themselves are determining whether a person is an enemy or an ally, breaking all three of Isaac Asimov's Three Laws of Robotics.

The magazine New Scientist reported that:

Military drones may have autonomously attacked humans for the first time ever last year, according to a United Nations report. While the full details of the incident, which took place in Libya, haven’t been released and it is unclear if there were any casualties, the event suggests that international efforts to ban lethal autonomous weapons before they are used may already be too late.

Looking into the claim, it apparently comes from a 548-page United Nations Security Council report that details the tail end of the Second Libyan Civil War.

Logistics convoys and retreating HAF were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2 (see annex 30) and other loitering munitions. The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true “fire, forget and find” capability.

The Verge, which provided this quote, states:

What the report doesn’t say — at least not outright — is that human beings were killed by autonomous robots acting without human supervision. It says humans and vehicles were attacked by a mix of drones, quadcopters, and “loitering munitions” (we’ll get to those later), and that the quadcopters had been programmed to work offline. But whether the attacks took place without connectivity is unclear.

Nevertheless, a number of publications, including CNET, have been reporting that

Libyan forces were "hunted down and remotely engaged" by an autonomous drone

So, have autonomous machines distinguished enemies from allies, and killed them without human supervision?

In response to comments made, I would hope for (if possible) a well-documented case that either

  • a) a weapon system targeted a person and that the on-board discrimination capabilities were engaged or
  • b) (if a) is not possible) that it is feasible and very possible that what is reported has actually happened
DJClayworth
Chris Rogers
    I assume you do not intend to include self-driving cars in the question? – GEdgar Jun 14 '21 at 12:23
  • Well, the source (UN Security Council) looks as trustworthy as it gets and the remarked text leaves no doubt. But the quoted text does not say they killed, only that they engaged in combat. Personally, I think it is highly probable that those raids ended in human deaths. – bradbury9 Jun 14 '21 at 12:27
  • Isn't a landmine also an "autonomous machine"? It can't move, but its programming can decide to explode when a human is near. And landmines killed and injured plenty of humans without supervision. – Philipp Jun 14 '21 at 13:02
  • by "autonomous machine" do you means something intended to be use for killing or any accident in a factory works if the line is automated ? – Bougainville Jun 14 '21 at 14:05
  • @Bougainville - As the reports I have linked refer to military machines, namely robots, drones, and the like, these are what I mean by *autonomous machine*. Robots, drones, etc. have, to my knowledge up to now, had to be controlled by a military operative, whether in the country the machine is in, or remotely via satellite in another country. These reports suggest no operative is controlling these machines and the machines themselves are determining whether the person is an enemy or ally. – Chris Rogers Jun 14 '21 at 14:11
  • @ChrisRogers I thought I had answered the question as asked, but it has been downvoted: I see you added "the machines themselves are determining whether the person is an enemy or ally" after I began preparing the answer. Perhaps that requirement should be moved to the top of the question. – Weather Vane Jun 14 '21 at 14:39
  • @ChrisRogers thank you for editing. I have deleted the 'doodlebug' answer, which now does not satisfy the 'friend or foe' autonomy. I wrote it in response to the UN's "Military drones may have autonomously attacked humans for the first time ever." Even if the 1940s technology seems crude by today's standards, they were military drones which worked without human intervention, killing thousands of people. – Weather Vane Jun 14 '21 at 15:13
  • This is degenerating into quibbles about definitions. Is a landmine a "fire, forget and find" autonomous machine, that doesn't require data connectivity? Does finding such ambiguities in the definitions help anyone at all? – Oddthinking Jun 15 '21 at 11:57
  • This is why I defined the term for the purposes of this question. I agree @Oddthinking – Chris Rogers Jun 15 '21 at 12:48
  • Land mines are pretty simplistic, but how about captor mines? They sit there in the ocean until they hear the target they're programmed for and fire a torpedo at it. There are also anti-radiation missiles that can be told to hover over an area and attack any transmitter they find, but I don't know if they have ever been fired in anger. – Loren Pechtel Jun 16 '21 at 03:07
  • @LorenPechtel Do captor mines distinguish enemy from ally or do they fire indiscriminately? If the former, they fit the criteria for this question. – Chris Rogers Jun 16 '21 at 07:16
  • @ChrisRogers They can be simple and fire at anything or they can be fancy and fire only at specific ships. Whether any have been fired in anger I do not know. – Loren Pechtel Jun 16 '21 at 15:03
  • I believe this will be a tricky question to answer as it depends on the technical details of the system's capabilities. See this article: https://lieber.westpoint.edu/kargu-2-autonomous-attack-drone-legal-ethical/. This highlights the fact that probably no one, other than system users, actually knows what kind of discrimination capabilities the system actually has. One thing that seems clear to me from the descriptions of the system is that it *is capable of* targeting people or vehicles w/o having those targets vetted by a human operator. Whether it has done so or not is another question. – Dave Jun 16 '21 at 18:15
  • Are you asking in general or about this Libya/Kargu-2 claim? – Fizz Jun 16 '21 at 18:30
  • @Fizz I am primarily asking based on the Libya claim, but it is open for any other situations that fit the criteria. – Chris Rogers Jun 16 '21 at 19:38
  • The Three Laws of Robotics are not actual laws. – forest Jun 18 '21 at 22:42
  • And, to agree with @forest, a good chunk of Asimov's work is about how the three laws weren't absolute, or caused problems of various sorts, or were subverted in one manner or another. They are a narrative device for driving the story. Including them here is a red herring. – Clockwork-Muse Jun 19 '21 at 04:28
  • Related link: https://en.wikipedia.org/wiki/Lethal_autonomous_weapon I seem to remember it being "illegal" for an autonomous military system to actually select a target and fire at it. There needs to be human intervention somewhere in the process to select the target and/or decide whether to fire. Can't remember much more than that, or what "illegal" means in this context, but there is a section in the wiki page about ethical and legal issues. I don't think anyone has tried to push it that far, so I'm 99% sure the answer to this Q is "no". – ewanc Jun 29 '21 at 13:31
  • Also "remotely engaged" from the quote in the Q suggests that they were technically engaged by the operator of the drone, who was piloting it remotely. – ewanc Jun 29 '21 at 13:34
  • Also related: https://www.hrw.org/report/2020/08/10/stopping-killer-robots/country-positions-banning-fully-autonomous-weapons-and. I'll try to write a proper answer when I have more time – ewanc Jun 29 '21 at 14:53
  • Do the sources claim anywhere that there was a distinction made between enemies and allies, as the first sentence in the Q requires? I see claims about enemies being actually engaged, but that does not mean allies would not have been attacked if in the area. – bukwyrm Jan 19 '22 at 13:55
  • See the second quote @bukwyrm - *"Logistics convoys and retreating HAF were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2"* – Chris Rogers Jan 19 '22 at 15:08
  • I only see "UAV hunted X", not "UAV hunted X but not Y" - just because the UAV did strike at some soldiers, it does not mean it would have spared others. Faction X was retreating, so we do not know if Faction Y was in the specific same area. That would require IFF, and I do not see that capability on the manufacturer's HP: https://www.stm.com.tr/en/kargu-autonomous-tactical-multi-rotor-attack-uav – bukwyrm Jan 19 '22 at 15:27
  • I'm unclear what the level of proof required here is. The manufacturer claims that the Kargu-2 has "facial recognition capabilities" and the Kargu-2 has been used. We don't know the details of how the weapon's autonomous capabilities were used in the engagements, or how well the facial recognition technology works. But at some level, we know that an autonomous weapon system has been used. Is the requirement that we have a well-documented case that a weapon system targeted a person, that the on-board discrimination capabilities were engaged, and the attack resulted in casualties? – Dave Jan 19 '22 at 15:40
  • @Dave I would hope for (if possible) a well-documented case that either a) a weapon system targeted a person and that the on-board discrimination capabilities were engaged or b) (**if a) is not possible**) that it is feasible and very possible that what is reported has actually happened. – Chris Rogers Jan 19 '22 at 15:45
  • So, if we take out all the vague wording like "military machines, namely robots, drones, and the like", and the "no true Scotsman" arguments over exactly how autonomous something needs to be, we come down to either a) "Has the Kargu-2 fired a weapon with on-board discrimination capabilities engaged?" (possibly answerable, but might require non-public military reports) or b) "Does the Kargu-2 _have_ on-board discrimination capabilities?" (definitely answerable, but no longer really the same claim). – IMSoP Jan 19 '22 at 16:15
  • The current version of the STM capabilities brochure includes the statement "Precision strike mission is fully performed by the operator, in line with the Man-in-the-Loop principle" for their systems. https://www.stm.com.tr/uploads/docs/1628858259_tacticalminiuavsystems.pdf? To opine, I suspect that these statements have been added in response to the brouhaha that use of this system has induced. – Dave Jan 19 '22 at 16:38
  • Also please note that the Kargu-2 seems to be itself the munition - they have another one that, rather hilariously (or not), can drop a single mortar shell, but the Kargu-2 seems to be a quad-shaped grenade. Also note that face recognition is not the recognition of a specific face, but rather of any face - this is a hard enough problem given a combat situation, but would of course be helpful when picking off people - they tend to have faces. – bukwyrm Jan 19 '22 at 17:04
  • "These reports suggest no operative is controlling these machines and the machines themselves are determining whether the person is an enemy or ally, breaking all three of Isaac Asimov's Three Laws of Robotics." What are you talking about? Unless the you're referring to the actual warhead, they aren't violating the third law requiring a robot to preserve its own existence, and even if you're talking about warheads, the second law takes precedence, and they are obeying human orders. – Acccumulation Jan 20 '22 at 06:46
  • The only law you could say they're violating is the first, and even that one could be argued (if they *don't* kill a terrorist, are they violating the "by inaction, allow a human to be killed" part of the first law? The three laws become rather tricky when they collide with real-world morality.) – Acccumulation Jan 20 '22 at 06:46

2 Answers

7

Yes. As early as the Falklands War, though this depends heavily on the definitions of 'autonomous', 'machine' and 'supervision'.

'Fire and forget' munitions have existed for quite some time, and have already killed people, e.g. the Exocet system, where the user designates a location and target type (outside of the user's optical field of view), and launches the weapon. It then arrives at the designated location, and 'chooses' its final destination by active radar scanning. Thus, the user needs only a hazy idea of the target ('ship at or around coordinates xy') and the weapon will do the rest. Should more than one ship be at approximately that position, one is 'chosen' by the weapon.

The user of an Exocet thus only has to will anybody on a ship around coordinates xy to die, and the Exocet will then target a specific part of a specific ship and make them die. There is no supervision in the final approach - the user might opt to self-destruct the Exocet, but that would not be based on communications from the missile itself.
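To make this concrete, here is a minimal sketch of the kind of 'fire, forget and find' selection logic described above. It is purely illustrative: the names, types and numbers are invented and it is not based on any real missile's guidance software.

    # Hypothetical illustration of "fire, forget and find" target selection.
    # The operator supplies only a rough location and a target type before
    # launch; afterwards the weapon picks its own target from whatever its
    # seeker detects near that location. All names and values are made up.
    import math
    from dataclasses import dataclass

    @dataclass
    class RadarContact:
        x: float          # position east of a reference point, km
        y: float          # position north of a reference point, km
        target_type: str  # e.g. "ship", as classified by the seeker

    def choose_target(contacts, aim_x, aim_y, wanted_type):
        """Return the contact of the wanted type closest to the aim point,
        or None if nothing matching was detected."""
        candidates = [c for c in contacts if c.target_type == wanted_type]
        if not candidates:
            return None
        return min(candidates,
                   key=lambda c: math.hypot(c.x - aim_x, c.y - aim_y))

    # The operator's entire input: "a ship at or around coordinates (120, 45)".
    detected = [RadarContact(119.0, 44.0, "ship"),
                RadarContact(121.5, 46.0, "ship"),
                RadarContact(120.2, 45.1, "fishing_boat")]
    print(choose_target(detected, 120.0, 45.0, "ship"))

The point of the sketch is that whichever matching ship happens to be nearest the aim point is 'chosen' by the weapon, not by the user.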

Some distinctions:

  • Shooting an iron-sight gun at a perceived threat might be the baseline of killing at a distance - here the user triggers a machine mechanically, using their own senses for information, and their own mind for the decision about the parameters of the hostility. Friendly fire and civilian victims (ff/cv) under these circumstances are widespread, and the target, even if a member of an opposing military force, is mostly unknown to the user. Even firing on 'targets' that are not in line of sight is possible (and done) using the parabolic flight path of the projectile.
  • Shooting a gun by aiming through a digital night vision scope extends this scenario by introducing sensory input produced by a machine - the effects of this might be non-trivial.
  • Shooting a weapon that launches missiles that have some sort of self-arming mechanism related to some quality of the target (i.e. not only self-arming after a set time, but rather self-arming in response to e.g. a distance-to-target measurement) introduces another variable. The weapon may receive this input by error (i.e. not in the vicinity of the intended target), and there is no supervision in where it actually explodes. This is Exocet.
  • Arming a weapon that will fire at a later moment, this moment being determined by a sensory input to the weapon, is another step, common to many area-denial systems.

The act of launching 'drones' that loiter over a prescribed area, for a prescribed time, and attack a type of target without further intervention could be viewed either as a drawn-out version of the principle that Exocet represents, or as making mines location-variable and shorter-lived. So it would, to my mind, represent only a change in quantity, not quality, of the characteristics of existing weapons.

From the perspective of the victim: staying at an unknown (to the enemy) location and not moving have, up to now, not guaranteed survival (carpet bombing, poison gas, nuclear weapons). Moving undetected by humans from one location to another added the threat of mines (land or sea). (You may notice that all the aforementioned weapons face intense criticism.) Having one's location (or some characteristic of the location, like radar emissions, or being a lot of metal on an otherwise watery surface) known added the threat of autonomous missiles. The kind of weapon in your link does not extend these threats qualitatively, but quantitatively: the characteristic sought for could be anything ('human form', 'object travelling at >10 km/h', ...), and can apply at any moment in a long timeframe.
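As a rough sketch of how such a characteristic-based loiter-and-attack rule looks in the abstract (again purely hypothetical: the loop, the predicate names and the canned 'sensor' below are invented for illustration and do not describe any real system):

    # Hypothetical sketch: a loitering weapon that, for a fixed time window,
    # repeatedly scans an area and "engages" the first detection matching a
    # configurable characteristic. Purely illustrative, not a real system.
    import time

    def loiter_and_select(scan, matches, loiter_seconds, scan_interval=1.0):
        """Scan repeatedly until a detection satisfies `matches`,
        or the loiter time runs out (in which case return None)."""
        deadline = time.monotonic() + loiter_seconds
        while time.monotonic() < deadline:
            for detection in scan():
                if matches(detection):
                    return detection      # the weapon's own "choice"
            time.sleep(scan_interval)
        return None

    # The sought-for characteristic can be anything measurable:
    is_fast_object = lambda d: d.get("speed_kmh", 0) > 10
    looks_human    = lambda d: d.get("shape") == "human_form"

    # Toy run against a canned "sensor" reading:
    fake_scan = lambda: [{"shape": "vehicle", "speed_kmh": 35}]
    print(loiter_and_select(fake_scan, is_fast_object, loiter_seconds=3))

Structurally this is the same nearest-match idea as the Exocet sketch above, only stretched out over time and over a configurable notion of 'target'.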

Article about this subject

bukwyrm
  • Comments and edits clarified that a key distinction was whether the "autonomous machines" could distinguish friend from foe, so I don't think these examples count unfortunately. (I've edited the question to make that requirement more prominent, because it was easy to miss.) – IMSoP Jan 19 '22 at 13:23
  • @IMSoP So a flying killing machine that loiters for a few hours, and will precision-kill any kind of armored vehicle by aiming and firing Hellfire is now not an autonomous weapon under that guideline? That is ... an interesting take. – bukwyrm Jan 19 '22 at 13:44
  • I agree, the requirements are bordering on "no true Scotsman", but if you look at the comments under the question, at least one previous answer was deleted for not meeting the definition. – IMSoP Jan 19 '22 at 13:55
  • I also think land-attack cruise missiles fit in the grey area. They've definitely killed people. However their targeting is at the building level rather than against individuals; but they do discriminate that target building from other non-target buildings during their operation. – Dave Jan 19 '22 at 15:38
  • @Dave I disagree--a land attack cruise missile goes exactly where the operator programs it to go. There is no target selection on its part, merely matching up its pre-programmed target with what it sees. – Loren Pechtel Jan 25 '22 at 01:48
  • @LorenPechtel my understanding of DSMAC, the tomahawk’s terminal guidance system, is that it’s comparing the observed scene to onboard images to determine which building is the right building https://en.m.wikipedia.org/wiki/TERCOM – Dave Jan 25 '22 at 12:32
  • @Dave Yes, but that's just executing the orders that were given by a human, it isn't an autonomous decision. – Loren Pechtel Jan 26 '22 at 04:14
  • @LorenPechtel Seal Team 6 were just carrying out orders that were given to them by a human. If there was a robot tasked with “Go to this building and kill Osama Bin Laden” and it did so, while successfully ignoring other people that would count for this question as I understand it. Buildings are easier in that they don’t move around, but LACM and ship to ship missiles are doing something that is on the continuum that leads to “real” autonomous weapons — they make decisions about which objects to target based on the observed characteristics. – Dave Jan 26 '22 at 04:40
  • @Dave An anti-ship missile has a lot more autonomy than a Tomahawk. Still, however, while it might make a mistake it's going to be aimed at a known ship, not simply sent into an area to look for a ship. The only weapons I'm aware of that I would call autonomous are captor mines and loitering anti-radiation missiles--but I don't know if either have been fired in anger. I suspect not on the captors--there hasn't really been a war where they could be used. – Loren Pechtel Jan 26 '22 at 05:26
-4

Yes, kinda.

Mines have been around for centuries and I'd call those autonomous military machines. Mines have killed countless people, on their own.

Contact mines work by someone (or a vehicle) merely touching them, influence mines by someone (or a vehicle) merely coming close and triggering the influence mechanism (whether magnetic, acoustic, heat, pressure, etc.).
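To illustrate how minimal the 'decision' made by an influence mine is, here is a hypothetical sketch; the sensor fields and threshold values are made up and do not describe any real fuze, but the structure (fixed thresholds set before deployment, then an unsupervised trigger) is the point:

    # Hypothetical sketch of an influence-mine trigger: the "decision" is
    # nothing more than comparing sensor readings against fixed thresholds
    # chosen before deployment. Field names and thresholds are invented.
    def should_detonate(reading,
                        magnetic_threshold=50.0,   # microtesla
                        acoustic_threshold=70.0,   # dB
                        pressure_threshold=1.5):   # kPa change
        """Return True if any monitored influence exceeds its threshold."""
        return (reading.get("magnetic_uT", 0.0) > magnetic_threshold
                or reading.get("acoustic_dB", 0.0) > acoustic_threshold
                or reading.get("pressure_delta_kPa", 0.0) > pressure_threshold)

    # A passing ship's magnetic signature alone is enough to trigger it:
    print(should_detonate({"magnetic_uT": 80.0}))   # True

No operator is in the loop once the thresholds are set, which is exactly the sense in which a mine already kills 'on its own'.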

They are so prevalent and so hard to get rid of after a conflict has ended that efforts to ban them from use have been going on for decades with limited success (mainly because many countries see them, rightly so, as an effective means to fight against a numerically and/or technologically superior enemy; they're perfect for long-term area denial). And they do last a long time: we're still digging up mines from WW2 (and maybe WW1) more than 70 years after that war ended, and some of those are still dangerous.

jwenting