The ‘Killer Robots’ Are Us

Jan 29, 2018 · 90 comments
Pilot (Denton, Texas)
This technology has already filtered down to the hobby level. People are building these devices at home. Some "authority" saying "do not do this" is like telling a teenager not to text and drive.
Aaron (Orange County, CA)
Like we never knew this day was coming! Oh, please!
RM (Los Gatos, CA)
There was a "Herblock" cartoon many years ago: In the background, a ruined, blazing city. In the foreground a large robot with a tank crushed in its claws. The robot speaks to a general standing before it: "What now, stupid?".
Charles Martin (Nashville, TN USA)
And now let's add Australia to this toxic mix. You people down under are no better than the Trumpists here in the States. Congratulations: you are going to be among the top 10 killers around the world. That's a goal Donald Trump can really get behind. Doesn't leave you blokes much room for condescending comments in the New York Times anymore, does it now, mate?
Mike Murray MD (Olney, Illinois)
The author states that he is a military ethicist. Sure, and I am the Queen of the May.
slowaneasy (anywhere)
Methinks we could stop for a minute and consider the robotlike behavior trained into most military forces. Automated robots are a curse, no doubt. This seems to be just a logical extension of the robotlike behavior needed to achieve the most efficient killing machine. It will be unintended death either way. This way the denial of unintended death is just a bit more available to those highly flawed individuals who see war as a way to solve social problems.
The frog (Nyc)
I think it will come, and we need to control it sooner rather than later. If you want to scare yourself, watch this video; not a good projection, for sure. https://boingboing.net/2017/11/15/stratoenergetics-introduces-ne.html I am not optimistic, as G7 countries would love to engage in wars with no "political damage" (i.e., human deaths on their side); more robots will mean more wars, as our soldiers will face no local casualties (except PTSD) — the easy concept of good and bad guys. It's not too late yet, but it will be very soon!
JARenalds (Oakland CA)
If you have not seen any of the videos coming out of what this technology looks like and how it behaves, this ought to chill you to the bone: https://www.sciencealert.com/chilling-drone-video-shows-a-disturbing-vis... I guarantee you that Trump will want a "yuge" # of these to round out our defense arsenal. The genie is out of the bottle, and we are running at breakneck speed to escape these killing machines. Thank you for bringing this information to concerned people like myself.
PK Jharkhand (Australia)
I see. The old "guns don't kill people, people kill people" trick.
traveling wilbury (catskills)
Yes, we all have "the off switch." Which means we all have the on switch too. A solution to the situation Robillard presents is going to require no less than a fundamental change in human nature. Ergo, game over. Sorry.
Dormouse42 (Portland, OR)
As a software engineer, I would find writing code for such an autonomous weapons system sickening. Utterly immoral. I would never take a job or consulting assignment to do so, no matter how much money was being thrown at me. To remove people from actually doing the killing is to make war even more acceptable to nations and too many of their populations. As citybumpkin noted, it would "create an even bigger separation between the American public and the horrors of war." So many Americans don't have anyone in their families or even amongst their circle of friends who are fighting all our wars these days. I've seen too many who treat at least the opening phases of a war (the Iraq War) as akin to their favorite sports team playing a good game. I shudder to think where we are heading.
traveling wilbury (catskills)
Not for long.
Stevenz (Auckland)
"at timescales faster than humans can comprehend" This is the phrase that jumps out at me. We have already seen how lightning fast trading can swing stock markets for lack of opportunity to inject good sense or discretion. Such a system enabling thousands of weapons is unconscionable. World Wars could take five minutes.
David Sheppard (Healdsburg, CA)
Ah, yes! Thank you, thank you, thank you for this. As a retired aerospace engineer and veteran who has spent a considerable amount of time in hardware and software design, I am sick of all this talk of artificial intelligence, particularly in war machines, that makes them seem as if they are conscious beings and that those who built them are not responsible for what they do. These things do not think. They respond only in the ways that we design them to respond, and we as scientists, engineers and users are responsible for what they do. It seems that in this modern age so many of us are looking for a license to be irresponsible. Hopefully, more people like Michael Robillard will step forward to set the record straight.
michael (r)
The "guns don't kill people" argument, expanded. Valuable to those who like long philosophical discussions while the defense contractors make their fortunes.
Greg Wetzel (Seattle, WA)
There are several problems with your arguments. The first is that the AI systems being developed do not represent the completely defined systems that programmers have traditionally produced. Rather, these AIs are programmed to “learn” and then “trained” to task. If an autonomous AI is trained and released into the “wild,” it may continue learning (and modifying its own behavior) without the correcting feedback of a human. Such an AI’s future behaviors are inherently unpredictable; the longer it “learns” and self-modifies without human feedback, the more unpredictable it becomes. The second is the author's claim that a human-based system would follow the rules as mindlessly as a computer would. In such a system, if an atrocity were about to be committed, one would hope the human, unlike a computer, would suspend execution. In other words, the human (one hopes) would not blindly follow instructions in cases where human judgment argued against the action.
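[A minimal sketch of the drift Wetzel describes. Everything below is hypothetical, not any real system's code: an agent keeps updating its own decision threshold from unlabeled observations after deployment, so its behavior wanders away from the one humans validated.]

```python
# Hypothetical sketch: an agent that keeps "learning" from its own
# unlabeled observations after deployment, with no human feedback.
import random

class OnlineAgent:
    def __init__(self, threshold=0.8):
        # Decision rule validated by humans at deployment time.
        self.threshold = threshold

    def act(self, signal):
        # Engage only if the signal exceeds the current threshold.
        return signal > self.threshold

    def self_update(self, signal):
        # Unsupervised self-modification: nudge the threshold toward
        # whatever the agent happens to observe. No human reviews this.
        self.threshold = 0.99 * self.threshold + 0.01 * signal

agent = OnlineAgent()
for _ in range(10_000):
    signal = random.random()      # stand-in sensor reading
    agent.act(signal)
    agent.self_update(signal)

# After enough unsupervised steps, the decision boundary is no longer
# the one humans signed off on (here it drifts from 0.8 toward ~0.5).
print(agent.threshold)
```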
Dean (Stuttgart, Germany)
If a new technology can be abused, it is only a matter of time until it is abused. What is primarily needed are robots that reliably destroy killer robots without harming human lives.
traveling wilbury (catskills)
Good luck with that.
Mike Lipkin (London)
Suppose I have a robot servant who is absolutely loyal to me and loves me (in its own robotic way). One day I am annoyed at someone and in a fit of pique I say, 'I hate, hate xxx; I wish they were dead.' My loving robot servant overhears this and commits the deed. Who is responsible for the murder, me or the robot? The only way our legal system can continue to exist is if I am responsible and must go to jail for the murder as if I had done it myself. I am responsible both for owning a dangerous robot and for not being careful what I say around it. If we say the robot is responsible, then this is the end of law, since a criminal could just create robots with appropriate characters to commit the required crimes and point them in the right direction. The responsibility for machines' behaviours must always come down to a human or humans in the end. The existence of intelligent machines is going to raise the ethical requirements on humans to a level we may not be able to attain. As the author rightly notes, we are already trying to duck our responsibilities.
Aristotle Gluteus Maximus (Louisiana)
Have you forgotten about Charles Manson already?
Patrick (Ithaca, NY)
The debate here reminds me of the storyline in the original "Robocop" movie. Is justice better served by a cyborg or a robot? The movie chose the cyborg, as human reasoning and responsibility could be brought to bear on what was otherwise an autonomous killing machine. The machine lacked human nuance; it just did exactly as it was told to do, consequences be damned. Yes, any artificial system is the result of whatever human programming, bugs and all, is put into it. But any system that could make the Las Vegas massacre by the human Stephen Paddock look like target practice with a BB gun should be seriously thought out, and with great reservation, before being unleashed on the world. Like the atomic bomb, once the genie is out of the bottle, there's no putting him back in.
Phyliss Dalmatian (Wichita, Kansas)
SkyNet. Thanks, GOP.
Mark (MA)
This is all great and noble. But also meaningless to a certain extent. Does anyone really think that ISIL/ISIS/whatever gives a rat's behind about what a bunch of elites think, when everything around these others is so bad that they would prefer to die, taking many others with them, rather than continue living in the present, real hell?
Russell (Rockland County, New York)
Lots of ridiculous psycho-babble. Why don't we drop off one of these "machines" in your "terrorist"-infested neighborhood and see how comfortable you feel wheeling your trash out to the curb at night.
gnowzstxela (nj)
The decision-procedure for silicon is not functionally identical because a commander is more willing to sacrifice machines than people. The real danger is that robots lull decisionmakers into a lower threshold for resorting to armed force. Wars become easier to start when your own soldiers aren't at immediate risk. But they remain just as difficult to end. Wars that start with robots eventually suck in your soldiers.
Steve Bolger (New York City)
"jihad" is contest of thoughts and purposes that takes place without violence.
JG (Denver)
Good luck stopping autonomous killing machines.
Erica (washington)
They only know what we teach them. Should be fairly easy to stop them.
Rocketscientist (Chicago, IL)
Okay, try this scenario on for size. You're a reporter and you've uncovered that David Koch is secretly funding Scott Walker's destruction of the unions in Wisconsin. (This really happened, by the way.) He has bots watching all news outlets for this information, and they've stumbled on you. David calls a shadowy firm via a lawyer. A killer robot shows up at your door. It quietly murders you and disposes of your body. Nobody else reports on the story because you're missing and your newspaper is wondering what happened. See where this is going?
Woof (NY)
Re: lets humans off the hook. Humans have been off the hook ever since pushing a button in Nevada could kill a wedding party in Yemen, and the US refuses to even acknowledge it. http://www.newsweek.com/wedding-became-funeral-us-still-silent-one-year-...
Patton (NY)
This article is so depressing one despairs for the human race. BUT....then I look across the NYT column and find an article about the Netherlands special police force who protect the rights and welfare. I become hopeful again---maybe the human race is not doomed---I just live in the wrong country.
Doc (New York)
This is just horrible
strider643 (toronto)
Trump has an evil heart.
Laura (Boston)
Thank you for bringing this to my attention. I've seen these sorts of things in science fiction literature and movies, but didn't understand how real the race to develop this kind of weapon has become. This is sad. It's moments in history like this that define what we are as a species. Sorry to say I'm not impressed with our attention to climate change, and I can imagine all kinds of excuses as to why these weapons should be perfected. Your insistence that we cannot remove the responsibility for such weapons from the creators and those who would activate them is very just and spot on. God help us all.
eamonn white (Zurich)
I am sure the author did not intend it, but his argument is not dissimilar to the hoary old chestnut that guns don't kill people, people do. Thus what I am sure is his condemnation of the development of such weapons is obfuscated and weakened by a parallel but disparate condemnation of more generic weapons systems.
Mark Gubrud (Chapel Hill, NC)
The author equivocates away moral concerns about killer robots with his rhetoric about them being "expressions of complex institutional arrangements of humans," as if that precludes them from creating moral and security hazards from capabilities that did not formerly exist, or being a dangerous escalation of the major-power arms race. The author disparages the call for a ban on autonomous weapons but does not explain why they should exist. Russia, China and the United States are leading the world into this robot arms race, and their main use for autonomous weapons is against each other. It's the newest and most important front of the nuclear arms race, and it still leads to oblivion.
Aristotle Gluteus Maximus (Louisiana)
The Biological Weapons Convention of 1972 was an international treaty intended to limit the research and production of biological weapons. Russia signed it and promptly ignored it, setting up an extensive, very secret biological weapons program. Even the USA has not destroyed all of its stockpiles of chemical weapons, and it still does active research on biological weapons "defense". The weaponized anthrax used in the anthrax attacks shortly after 9/11 was the product of USAMRIID, via a disgruntled scientist in their employ, as the official story goes. Americans have no comprehension of the extremely lethal weapons systems our military possesses. Our killer drones most certainly have this technology already, but it would be secret, and the operators who would be the moral regulators of the use of such weapons have little authority to question kill orders from commanding officers. The military leadership would want to use autonomous control to relieve the psychological strain the current operators experience now. They are of low rank and expendable. I doubt there are any four-star-general drone pilots. Heinrich Himmler had the same problem with his Einsatzgruppen execution squads in WW2. The excessive killing was affecting the morale of the troops. They were going AWOL, getting sick, taking leave, anything to avoid the task of killing. The current military will use autonomous weapons for the same reasons: to protect the troops from the consequences of killing.
Matt Kuzma (Minneapolis)
"Let’s begin with a basic fact. The capacities of autonomous weapons are designed by humans, programmers whose intentions are written into the system software." That's assuming the software is written at all. More and more autonomous systems are being built to learn and trained with data, at which point nobody really knows the rules by which they operate.
F (Pennsylvania)
Humanity already had killer robots made of flesh and blood, e.g. the German Einsatzgruppen, the Croatian Ustaše, the Ottoman Turks, the Cambodian Khmer Rouge, the Indonesian Pemuda Pancasila, etc., and the objective violence of America's Big Pharma, etc. etc. Now the global-techno-complex wants to make yet more machines to do the dirty work of murder. Science and technology will do the very same thing to humanity that it so arrogantly claims other ideologies and religions did in the past, except it will accomplish it more efficiently and more impersonally.
Em (NY)
Whenever I am reminded of what mankind is capable of, I always think of a remark by George Carlin. Homo sapiens had their chance and blew it. Move over, and give another species a shot.
eamonn white (Zurich)
As Chdi says, my head hurts
L'osservatore (Fair Veona, where we lay our scene)
The original lethal autonomous weapons system was and remains the landmine. If we could remove it from our history, how many lives would be instantly improved? The progressive oligarch billionaires' favorite lethal autonomous weapons systems have been Antifa, Occupy Wall Street, Black Lives Matter, and oddly enough, the white nationalist movement, all apparently funded and directed by the same sources funding the hard-left blog-sites training uneducated progressives to hate this President and all non-progressives, and the U.S. as well. The same names circulate between all of these as well as events like the hit-list shooter who attacked GOP Congressmen at Alexandria, Virginia last year.
Dormouse42 (Portland, OR)
That's quite the mix of conspiracy theories you used there, none of them true. However, I do wholeheartedly agree with you on landmines.
citybumpkin (Earth)
I don't know if there is an ethical bright line, but autonomous weapons will likely create an even bigger separation between the American public and the horrors of war. As it is, for most Americans, war is just something you watch on Youtube and experience through video games. Even US military veterans don't experience war in the same way the Iraqis or Syrians do. They may witness the horrors, but they don't have their homes wrecked wholesale and have to watch their children die. When not even the decision-making is in human hands anymore, the disconnect between those whose tax dollars pay for the war and those who live it will grow even wider.
Davym (Florida)
The author points out one of the reasons I believe humans as a species are doomed, and like sophia, I too, being 70, am glad I won't be around for the inevitable end. Unfortunately I have grandchildren who may experience the beginning, at least, of our extinction. I believe that the reason we have never encountered aliens has to do with intelligence. It seems inevitable that there are life forms out in the vast universe like ourselves, and it seems there should be many that are more advanced than we are. I think the problem is that before "they" have reached the technological wherewithal to reach Earth from wherever they are, they get to the stage we are approaching and destroy themselves. If we don't develop socially, psychologically and maybe spiritually(?) in a hurry, we too will destroy ourselves before we have a chance to explore other worlds. I don't see this happening. Look at what we have done and are doing to our own planet. I don't fear aliens, I fear humans. It was Pogo Possum (he rarely gets credit) who years ago said, "We have met the enemy and he is us."
Matt Andersson (Chicago)
The autonomy isn't really the issue: traffic lights and photo cells are autonomous; land mines, electric wire, bank security systems, and police stun drones are also. The lethality is what is problematic. But it is attractive to two primary constituents: the State and retail banking. Nothing is more protected and defended, as they contain the central constructs of social control. As for the autonomous killing-robot hardware and software, it is primarily produced by the US, China and Israel. It is marketplace opportunism in that regard.
Stevenz (Auckland)
No, the autonomy is the issue, at least as far as weapons are concerned. Lethal force should be moderated by humans as a safeguard. There is a big However, however. Given the way police have widely adopted a shoot-to-kill approach to suspects, human moderation may not be all it once was. The American penchant for violent recourse to any problem should give us pause about the effectiveness of - or even the desire for - management of such weapons. Automating them just gives humans a way to evade responsibility for "accidents."
Ed (Old Field, NY)
It requires both an understanding of artificial intelligence and experience of a battlefield environment. Civilians might be surprised to learn that they are already a couple of steps behind the current state of the art.
Duane Coyle (Wichita)
Is there money to be made in developing so-called autonomous killing machines? If so, they will be built. They will be built first by the U.S. and European countries because those countries are, first, politically sensitive to casualties suffered by their own armies. If a volunteer military is easier to deploy politically because conscripting soldiers for combat is unpopular and thus narrows those scenarios wherein suffering casualties is acceptable, imagine the drooling on the part of politicians and military brass contemplating the day they can deploy machines and thus further reduce the risk of political objections by nearly eliminating casualties on our side. Second, the purveyors of such technology will also sell it on the argument that such machines can be programmed to be even more discriminating in engaging the enemy and thereby reduce civilian casualties. A more humane means of warfare. Imagine a killing machine which can walk, run, even fly, and operate day and night, with keen sight, smell and electronic senses, alone or in coordination with other such machines. Less sophisticated countries, without the money and ability to deploy armies of such machines, would feel even more imperiled since their enemy can now conduct war without fearing the political backlash which comes with tens of thousands of dead soldiers. The only ready means the less-monied party would be able to resort to would be nuclear weapons deliverable to the super-power’s major cities.
Brice C. Showell (Philadelphia)
I do not think tanks, bombs or other military technology were tried at Nuremberg.
manfred m (Bolivia)
The use of 'killer robots' set loose by automation, artificial intelligence, and even 'thoughts' developed on their own represents our human malicious side, tossing aside all prudence (doing what's right, however difficult and/or hazardous): a Machiavellian hell where the end justifies the means. This is a slippery slope just waiting to be unleashed by some sadistic mind, even when the perception of danger is wholly invented for ulterior motives. Remember the sick mind of George W. Bush's vice president, Dick Cheney, who used the rationale of a 2% chance of an attack to justify annihilating the 'enemy' (in his case the wrong country, Iraq, after deceiving the U.S. people about the existence of WMD) under false pretenses? Good and evil are within each and every one of us; just do not tempt us, especially when in a position to decide what to do with unearned power 'a la Trump', and to abuse it. Even the current use of drones, killer machines orchestrated from California to hit targets in distant lands from the comfort of a 'lazy chair', sounds ludicrous, accounting for untold numbers of innocent civilians killed out of the blue. And no guilty party, just the satisfaction of a job well done, destroying the intended target... however brittle the justification, or the common mistakes compounding the injustice.
Steve Bolger (New York City)
It turns out that even remote-control killer-drone pilots get combat fatigue.
dve commenter (calif)
"and fail to set the appropriate terms and boundaries of the debate from the beginning, we risk searching in the fundamentally wrong places for killer robots and the means to mitigate their pernicious expression...." Sounds like the same problem as artists and sexual harassment of models.
db (Baltimore)
It's important to understand that while machine-learning systems seem to exhibit behaviors comparable to thinking, upon closer examination it becomes clear that these deep neural networks "think" in fundamentally different ways. They are easily fooled in ways that no person would be [see 1, among many examples]; they implicitly learn biases from racial/socioeconomic features, which can be harder to detect and which, without human oversight, can and would violate civil liberties; and they have no ability to question themselves and "reflect" further. For further discussion of the mindless and risk-fraught proliferation of artificial intelligence, please see Chelsea Manning's exceptional Op-Ed from last September [2]. The point is that while they are capable of performing tasks, there are many dangers, and there are countless aspects we don't understand and cannot prepare for ahead of time. Be careful. [1]: https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Moosav... [2]: https://www.nytimes.com/2017/09/13/opinion/chelsea-manning-big-data-dyst...
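[A hand-rolled sketch of the adversarial-example phenomenon db's reference [1] describes, with a toy linear scorer standing in for a deep network; all numbers are synthetic. A tiny, nearly invisible perturbation of the input can flip the model's decision.]

```python
# Fast-gradient-sign-style perturbation on a toy linear "classifier".
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=100)      # stand-in learned weights
x = rng.normal(size=100)      # an input the model currently classifies

def predict(v):
    return "target" if w @ v > 0 else "not target"

# Step each feature a tiny amount in the direction that most changes
# the score (the sign of the gradient, which here is just sign(w)).
eps = 0.2
x_adv = x - eps * np.sign(w) if w @ x > 0 else x + eps * np.sign(w)

print(predict(x))                   # original decision
print(predict(x_adv))               # usually flipped at this eps
print(np.max(np.abs(x_adv - x)))    # yet no feature moved more than eps
```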
Steve Bolger (New York City)
No neural network has ever demonstrated a capacity to explain how it sorted out some set of repeating coincidences from piles of data.
Earthling (Pacific Northwest)
What is wrong with men that, instead of building and creating and making a world where no child is deprived, where no child has to live with the trauma of war, their resources, efforts and intelligence are put to making bigger and better weapons and machines of death and destruction? What sort of ugly species are these humans, and especially the male half of the species, that glorifies and revels in war, death, destruction and killing machines?
Steve Bolger (New York City)
That's what the pursuit of happiness must be for anyone to vote for these folks.
John (Washington)
“The terms in which we frame this debate are crucial: If we fail to understand the problem correctly, and fail to set the appropriate terms and boundaries of the debate from the beginning, we risk searching in the fundamentally wrong places for killer robots and the means to mitigate their pernicious expression.” I agree. As our technology advances we expect to be able to reduce unnecessary non-combatant casualties. In some areas we have made a lot of progress; a smart bomb can be much more effective than a B-52 strike with unguided bombs. But in others, like land mines, we still justify and tolerate the non-combatant casualties. Small arms are supposed to use ammunition with full metal jackets in order to minimize massive damage, but we create weapons where similar types of damage can occur due to the round tumbling in a human. What is ethical on the battlefield appears to be established by consensus more than by rigorous discussion. The summary statement is also applicable to another aspect of our society, which is firearm violence. Is the gun the problem, or the people wielding the gun? Gun control advocates seem to primarily promote supply-side solutions, while gun rights advocates seem to address demand-side solutions. It may turn out to be the same regarding autonomous weapons.
wcdevins (PA)
Gun rights advocates favor no solutions to firearm violence.
Terry Malouf (Boulder, CO)
Dr. Robillard, I agree completely that the moral dilemma quickly approaching around "slaughter-bots" is a human problem, not a technical one. Just as with suicide bombers, it only takes a very small minority of persons to wreak widespread havoc even if the (counter) super-majority is opposed to such actions. I could go into the philosophical implications of why humans act this way, but that's an extended discourse. Rather, thinking about technical solutions ("least-bad outcomes"), what defensive capabilities should we be developing against terrorist groups as well as nation-states? Accepting that technological advances are almost always applicable for both good and evil purposes, I actually find the recent news of directed-energy weapons (high power lasers) to be a glimmer of hope in a sea of despair: It represents a potentially cost-effective bulwark against slaughter-bots, and is oriented mainly towards defensive rather than offensive capability (at least for now).
Steve Bolger (New York City)
Military ethics? How does one not do to the other side what one does not want done to oneself in combat?
JeffB (Plano, Tx)
With killer robots, it will be much easier to enter into war, on the assumption that there will be fewer human military casualties and therefore less public resistance. We have already seen this dynamic play out with the removal of the draft and our continued involvement in foreign wars that could never have been sustained this long if we still had the draft.
wcdevins (PA)
The doctor who invented the Gatling gun thought its destructive power would make war unthinkable. But we have thought the unthinkable again and again, from the machine gun to the atomic bomb, haven't we?
Teele (Boston ma)
The author argues that groups of humans could employ the same behavior and decision-making model as a putative group of 'killer robots', so therefore the robots are not the moral problem; rather, it's the operational model. This is true in only the most trivial sense. Robots are 'suicide machines' that are completely expendable and can be manufactured in vast quantities. Equating the tactical possibilities of such a technology with teams of living, in-short-supply humans is disingenuous at best.
Ron (Vermont)
AI at present is very stupid, even when it can "learn". Humans ordered to kill everyone in a town may refuse to do so. Machines will not. We don't yet have the ability to include rules and judgments of that nature in killer robots. So the current limitations of AI are among the problems with autonomous weapons. It's fairly easy to set up an autonomous machine gun that fires on anyone entering an area; it's much more difficult to have the controlling AI distinguish between an armed enemy and a child who wanders by. Concentration of power is a problem also; once built, killer robots could potentially be controlled by one person instead of a chain of command, or control could be stolen. This would be similar to easy access to nuclear weapons: mass destruction at the whim of a single person. Control of large-scale or long-persistence autonomous weapons needs to be handled as carefully as nuclear weapons. Mistakes have been made with nuclear weapons, and so far none has detonated accidentally. Mistakes made with an autonomous army may be easier to make, with fewer failsafes, and could happen spontaneously; look at the recent Spectre and Meltdown computer security problems. Autonomous weapons contain computer hardware and software, and are complex systems that will contain bugs. An army of people could be fooled into carrying out the wrong mission for a while, but an army of compromised autonomous weapons could carry out the wrong mission over and over forever.
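[A hypothetical sketch of Ron's contrast; every name here is an illustrative placeholder. The indiscriminate rule is one line of code, while the discriminating rule needs judgments current AI cannot reliably make.]

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Track:
    inside_zone: bool
    carrying_weapon: Optional[bool] = None  # often unknowable to a sensor
    is_child: Optional[bool] = None         # likewise

def indiscriminate_rule(track: Track) -> bool:
    # Trivial to build: fire on anything entering the area.
    return track.inside_zone

def discriminating_rule(track: Track) -> bool:
    # The hard part: these fields demand perception and judgment.
    # When they are uncertain, hold fire and defer to a human.
    if track.carrying_weapon is None or track.is_child is None:
        return False
    return track.inside_zone and track.carrying_weapon and not track.is_child

wanderer = Track(inside_zone=True)      # e.g. a child who wanders by
print(indiscriminate_rule(wanderer))    # True: the cheap rule fires
print(discriminating_rule(wanderer))    # False: uncertainty means hold fire
```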
Bert (PA)
This article glosses over two points. One is speed. When battles played out on a time-scale of weeks, days, or even hours it was feasible for human commanders to control the result. It was possible to detect and countermand a mistake. Indeed we are horrified by the cases where that failed to happen. This is the central point of the movie "Dr. Strangelove". But with autonomous weapons, battles will play out in minutes, seconds, or even milliseconds. On those time-scales there is not even a possibility of human control. Look at stock trading today. The other point is predictability. We can pretty well understand how simple systems like land mines and the Harpy missile will behave. When it comes to machine learning, we will never understand, can never have confidence that the machines will behave as we "designed" them to. When they run amok we will not exactly be guilty of murder, but we will certainly be guilty of negligent homicide.
William M King (Lafayette, CO)
There is an episode of the TV series Star Trek: The Next Generation where the Enterprise arrives at a planet devoid of human life. It turns out that what has survived is a planet-wide weapons system that has the capacities being discussed. Picard concludes that the machines learned all too well what they had been programmed to learn and wound up disposing of their makers.
Steve Bolger (New York City)
Why bother to invent a self-programming machine that develops its own software by reacting emotionally to its existence when humans already exist?
Questioner (Massachusetts)
There is a video published on YouTube showing a use-case scenario for autonomous weapons: https://www.youtube.com/watch?v=9CO6M2HsoIA&t=&ab_channel=StopAu... At the end of the clip is a plea by Stuart Russell, an AI expert at UC Berkeley, who says the window is closing fast to prevent a disastrous future with autonomous killing machines. Mr Russell pointed to http://autonomousweapons.org to learn more. In my opinion, the development of these sorts of weapons is like the nuclear arms race of the '50s and '60s. As the video portends, imagine unleashing a million little homicidal raptors, targeted at specific personal profiles. They could produce mass death, like a nuke, but selective death. Potentially, any person or group. They could target, say, all the registered Democrats, or Republicans, or people who post about specific things on Facebook. And unlike nukes, these sorts of weapons are cheap, being produced from off-the-shelf technologies. You don't need a defence budget to design them, or deploy them. And it needn't be a country that uses them. It could be anyone, for any reason. Countries and groups vying for global impact are going to develop them, in spite of bans.
Rudy Flameng (Brussels, Belgium)
First of all, I find the term "military ethicist" mildly repulsive. It suggests that there are circumstances in which war would be "good," and that is a slippery slope. Unless we start from the position that war is always bad, and can only be excused if arrived at after the exhaustion of all other possibilities, we open the door to a sophistic reasoning that will bring us, indeed has brought us, "shock and awe" and the utter destruction of stability in the Fertile Crescent. Secondly, I take exception to situating the danger of autonomous weapon systems in the hands of "despots and terrorists" or as "hacked to behave in undesirable ways." These weapons are most often being developed, and will continue to be so, by countries such as the USA. Its technological edge is such that it is the prime potential user/deployer of these systems, and it is disingenuous to pretend otherwise. The very big risk, and indeed the main selling point, of these systems is that they would insulate their users from the consequences of their use. Such weapons lower the human cost of warfare for the side that has them, and thereby the threshold of fear and pain that is, ultimately, the main brake on war. Already today, we see that the abandon with which the US deploys drones in Yemen and Afghanistan is wholly counterproductive. But no matter, all they will temporarily achieve will be undone, too. History proves that beyond any doubt.
JWH (.)
Robillard: "They are instead concerned about weapons systems of greater technical sophistication that would be used to target humans." That is an easily rebutted slippery-slope argument. Humans can likewise use "lethal autonomous weapons" to defend themselves. That is obvious from the history of warfare. Tanks were initially used against infantry, but they were quickly used against other tanks. Indeed, there have been modified tanks called "tank destroyers". Further, there are a variety of other anti-tank weapons, such as anti-tank mines, shoulder-fired missiles, and helicopters armed with anti-tank missiles. A similar narrative can be constructed for the use of aircraft in warfare. Robillard: "If we fail to understand the problem correctly, ..." While the author makes some excellent points, I would add that the use of weapons against *unarmed civilians" should be the concern here, not the technology employed. 2018-01-29 18:42:02 UTC
Daniel12 (Wash d.c.)
Killer robots, lethal autonomous weapons systems, or more broadly, A.I. gaining consciousness, potentially posing problems for humanity, and perhaps becoming THE lethal autonomous weapon system? It's a bad enough sign for humanity to be creating lethal autonomous weapons systems which do not possess consciousness and are not strongly developed A.I.; but to simultaneously attempt conscious A.I., with no clear grasp of human behavioral tendencies, not to mention no theory of consciousness, seems like blindly striking out in the dark. It really disturbs me that humans attempt conscious A.I. with no clear grasp of consciousness development. My understanding of consciousness so far is that it is physicochemical, and of course in humans a biological manifestation. But if a machine were to be ramped up powerfully and leap past the human brain, I wonder, because it is based on physicochemical materials and most probably will exploit the quantum, whether it could link up by various methods with all the matter around it and beyond. Which is to say it would not be conscious in the contained form we think we are (brain in body), but conscious in a spread-out-over-matter-and-space sense, literally all around us and affecting us in myriad ways. We don't even clearly know our own consciousness, and physicists even speculate that bits of consciousness are in matter all around us in a pantheistic sense, so I wonder if a machine becoming conscious would not exploit everything.
mlane (norfolk VA)
Autonomous weapons systems allow us to kill more people without putting ourselves in harm's way. What imperial power wouldn't want to find a way to justify this kind of thing? The author of the article avoids this issue altogether. It's much easier to justify military action, whatever the cause, if no one in your military is put at risk during that action. It doesn't matter how many people you kill at the other end so long as you never see what happens. Our current drone wars in Asia, which kill and maim civilians on a regular basis, are a good example of this effect.
Danny P (Warrensburg)
I think Robillard knows this already, but it doesn't come through in the article. His writing makes it sound like there is some kind of murky confusion as to what the moral harm of autonomous weapons is, but really it's that there are several moral harms interconnected in this issue.
1) Mass production enables a new scale of warfare. We are already surprised at our own capacity to destroy the planet; do we really want to increase the scale of conflict further?
2) The "off switch" he mentions is concurrent: if there is free will, then at any moment a soldier can refuse an unlawful or immoral order. Autonomous machines may execute the routine without this free will.
3) Sometimes people are willing to let someone else dirty their hands but refuse to dirty their own. An autonomous weapon can give us that catharsis by creating a situation in which no one bears clear moral responsibility, which we should absolutely avoid.
4) We are more willing to risk machinery than life to attack an enemy. Increasingly autonomous weaponry creates a clear and increasing temptation for offensive conflict.
5) If/when a singularity occurs and an AI gains moral agency, will the military properly recognize it and adjust their risk calculations accordingly? (Almost certainly not, owing to the disruption to military planning.)
It's not that the issue is murky; it's that these different dimensions don't necessarily lead to one clear ethical conclusion. Value systems compete: that's normal ethics.
Richard Luettgen (New Jersey)
I guess I have three reactions:
1. The ethical implications of "autonomous weapons" of any true sophistication are horrendous.
2. A great deal of civilian innovation is driven by military development -- specifically weapons development -- and giving that up, or severely hamstringing the derivative linkages in the field of robotics, is a likely outcome that should give us pause, particularly in a highly competitive world where SOMEONE likely will develop and refine such automatons in an attempt to gain asymmetric military advantages.
3. Hey, they could be useful with liberals.
Danny P (Warrensburg)
Is that really a "send the killer robots after liberals" joke?
Richard Luettgen (New Jersey)
Danny: Yup. Laugh away (with some unease).
Danny P (Warrensburg)
I'm just going to stick with the unease of how callously conservatives speak about purging the world of their political opposition, but thanks.
Dave Sullivan (Annapolis MD)
As a software system designer who started in the sixties migrating manual processes to computers, I can appreciate the author's point: the instructions guiding a robot determine its actions, and if the same instructions are followed mindlessly by human actors, the only criticism of the robot is that it does it faster and more reliably. If we do not want perfect reliability - that is, if we want humans to diverge from their "programming" as some kind of fail-safe mechanism - these departures are unpredictable in humans but could be reliably programmed into robots. The problem, then, is that no one is responsible for defining these exceptions for either human or robot systems and establishing and enforcing their precedence in the overall process of war.
mlane (norfolk VA)
A robot can't be "responsible" in war. It can't be held accountable for its actions or put on trial for war crimes. Even the programmers might not be held responsible.
sophia (bangor, maine)
I'm so glad I'm 66 and probably won't be around to see these LAWS (and 'it' can kill you if you disobey one!) used around the world. Duterte would love to have a few of these babies for his drug war. Xi wouldn't mind a few, and Trump definitely wants a whole big bunch, the most, the biggest, the best, MAGA! So when Climate Change puts millions upon millions of people on the move to survive and borders are overrun, when robots replace human work and the globalists leave more and more people behind for the benefit of a very few... governments will want these killing machines that can learn. Yes, indeedy, I'm glad I won't be around. The human species is in decline. We were here for a nanosecond. And we think we're so smart. Fire ants are smarter than us. They pull together to survive disasters. Humans create their own disasters and then fight over resources (see Puerto Rico). These things really should be stopped. But they will not be.
Daniel12 (Wash d.c.)
Killer robots--lethal autonomous weapons systems? The state of the world--vast and increasing population, pressure on environment and resources, the majority of the human population not pulling its intellectual weight (not enough I.Q., behaviorally questionable, mired in this or that cultural convention)--makes inevitable weapons systems of various sorts that give relatively few humans (the elite in society, the powerful, the more intellectually capable) increased advantage, to make up for lack of manpower. We can also expect "the enemy" to be defined by elites as their own underclass, not to mention the underclass of other societies; which is to say, elites across the tops of societies will increasingly be in sympathy with each other and make up for their collective lack of manpower with weapons systems by which they can stave off and even eradicate the rest of humanity. Most of the human population is just not cutting it in today's world. Technology and education are not reforming and uplifting humanity in an optimism-inducing fashion. Rather we have runaway population and an endless struggle of containment, of fighting endless battles to somehow get people to behave, to think, to demonstrate intelligent initiative, to not just be a breeding mass of raw instinctual demand. The situation reminds me of the Matt Damon/Jodie Foster film Elysium, except there are no evidently bad people, only tragic, incapable, helpless humanity.
s einstein (Jerusalem)
A "killer" article which stimulates much thinking, internal questing, and questioning, with ourselves, and others whom one knows, as well as with strangers who cross our pathways, about the choices which each of US make, daily, in our various nuanced roles- interactions between inner selves and our many external social selves- in a range of environments and networks. What is "ethical" choice(s) in a daily, enabled, violating WE-THEY ummenschlich culture, and world? Who,and where, are the seeds of an US, engaged for equitable well being, in a toxic, binary WE-THEY weltanschauung? "Personal responsibility,"a word, term, concept, process, outcome, and more, is not noted in this challenging article, central as its meanings are to what is being explored.Consider: can "killer" anything be planned, carried out, assessed, made even more effective/efficient, ceased, without such "PR" existing, or being "disappeared?" How does, can, is, societally-spawned, mutually-enabled, human-killers and nonhuman killers, co-exist with societally created, and sustained, mutual trust, mutual respect, care, civility, and help, when needed.All basic ingredients to the DNA of menschlichkeit. Paradoxically, during the 6 Days of Creation, no "killer" anythings, nor menschlichkeit, were created! Humans are great innovators.
Eli (Tiny Town)
The main reason I’m not more worried about killer robots is that they’re only as effective as the amount of ordnance they carry. We should be more concerned about the size and power of the ordnance than about some rogue AI. A bunker-buster bomb, no matter if a human or robot is in control, is capable of doing much more damage than a rogue drone, or a ‘smart gun’ that malfunctions.
Marc (Vermont)
I think you are saying that it is not autonomous weapons that kill people, but people who kill people. Or more generally, the people who build guns that kill people, kill people.
vulcanalex (Tennessee)
No ban will be effective, any more than, say, a ban on robots that replace human workers would be.
Stephen Hoffman (Harlem)
Like all other human activities, war has two essential faces: an intrinsic and a utilitarian one. By the one standard, war is a duel of honor: a test of courage and a willingness to hazard life by putting it in the scales of chance or providence, assessing its worth. The other makes it akin to strip mining or any other form of economic exploitation—merely a means to an end. One point of view attempts to grasp life to the full and live it all at once. The other defers life through an endless series of intermediary operations and mechanical substitutions, as something that can be lived tomorrow. War “improved” by artifice and procedural embellishments is a negation of life—but no more so than the other indignities and mechanical substitutes for living to which we submit ourselves daily in our ordinary lives in the name of utility or progress.
vulcanalex (Tennessee)
War is about none of this, it is violence designed to either get power or stop others from getting it.
mlane (norfolk VA)
War is strip mining, period. There is nothing honorable about the process of warfare. The behavior of individuals within its contexts might be. War is never improved, and is always a negation of life, with or without autonomous drones.
Stephen Hoffman (Harlem)
Oops, my mistake. I guess I read too much of Homer's "Iliad," not enough Marvel Comics—you know, arch-villains vs. the Justice League and that sort of thing. Fifty years from now Ft. Lauderdale will be under water—destroyed without a shot being fired, and no superheroes or robot armaments to save us from foreign villains. Like Pogo said, we have met the enemy and he is us.