The Pentagon’s ‘Terminator Conundrum’: Robots That Could Kill on Their Own

Oct 26, 2016 · 488 comments
Thomas Busse (San Francisco)
Michael Schrage receives significant funding from the Pentagon, so relying on him as an academic source with the patina of an outsider lacks integrity and independence.

This whole article reads like a PR piece by the Pentagon. Under a veil of scrutiny, its message is "give us more money to fix the problems identified by the critics we chose, and to build cool things that new recruits can play with."
J L. S. (Alexandria Virginia)
I'm not worried so much about robots. However, I am worried about some seemingly sane world leader making a pre-emptive nuclear strike on a presumably insane world leader who is repeatedly trying to develop nuclear weapons!
Andrew Porter (Brooklyn Heights)
Philip K. Dick's short story, "Second Variety," published in 1953, had killer robots indistinguishable from human beings.

And Isaac Asimov, in one of his many stories about robots—it was Asimov who codified his famous "Three Laws of Robotics"—had robots that were so small they could take the place of birds, in order to control mosquitoes.

When robots can be made small enough, as is now happening with the rise of micro-sized robots, they can take the place of living creatures in the ecological chain.

Eventually, of course, who will need human beings?
Bogdan (Ontario, Canada)
What scares me is that our office printer (an otherwise "smart," network-connected multifunction device) can't decide to use letter-size paper when someone occasionally sends an A4 document to print by mistake; a simple format-scaling decision throws an error every single time. Yet we're talking about automation that's supposed to make life-or-death decisions. I don't see much difference here between autonomous robotic weapons and autonomous cars. Both can have deadly results from perhaps the simplest error, thrown by an exception a programmer failed to account for.
Deregulate_This (murrka)
Ok, someone else has to see the irony here: "Defense officials say the weapons are needed for the United States to maintain its military edge over China..."

1. The Drones are all made in China
2. The Drones are all made in China

Please, tell me I'm not the only one who recognizes the stupidity of using Chinese products to maintain a technical edge over China, the very country where we manufacture all those products. Maybe we should produce some things in America to maintain an "edge".

Of course, that would go against the corporate oligarchy's goal of eliminating the middle class in the USA and creating a middle class in China to exploit.
Luis (Buenos Aires)
So, let the robots fight while we watch. War has always been about killing people; robots will be just another weapon. Fifty million died in WW2 without any intelligent device involved. The arms race has existed since the beginning of conflicts between humans. The only way to avoid a horrific war is the quest for peace.
Michael O (Perth, Australia)
The USA already holds superior firepower at any measurable level, yet this has not stopped the USA from participating in wars. It has instead encouraged the USA's participation in wars - whether through its 'global cop' status or through market/ideological imperatives of one kind or another.

So why build 'shock-and-awe' capacity in yet another field?

It is diplomacy, with all its sham and drudgery and back-steps that eventually brings about better outcomes.

Massive military supremacy isn't going to stop any targeted enemy from attempting to retain their position/lifestyle/culture. It's simply going to limit their options and force those people into retaliatory acts against 'soft targets'.

Enemies targeting 'soft target' civilians or infrastructure will increase the USA civilian population level of fear, distrust and insecurity - leading to calls for more military spending and even greater destructive capability.

Is the argument here that somehow taking away human decision making makes an AI war more ethical or moral or fair or just?

The exasperation put forward by many commenting here is fully justified after reading this article.

This is not right. It is fundamentally flawed - much like the arguments for smaller "more useful" nuclear weapons.

I find it strange that our technologically advanced civilisation has failed so completely in putting any material effort towards building an ethically advanced society.

We are failing our future as a species.
Easow Samuel (India)
War and war preparation are becoming an incessant reality of human civilization. When will it end? Maybe with the end of our existence. We amass ever more power and wealth in our short lives so as to kill the rest. Money, the new evil, is creating more havoc in the world, as it can be created virtually by powerful nations and distributed to dominate human life.
Liam Hatrick (Left Coaster)
No. If we create it, it will eventually be used on us as well.
1mudgy (Florida)
Potentially frightening, but probably necessary to keep Chinese and Russian boots off our necks.
Sky (n)
I think that we should focus on a lot of things; this is not one of them. And I'm a 14-year-old kid.
Bogdan (Ontario, Canada)
The Cylons were created by Man. They were created to make life easier on the Twelve Colonies.
And then the day came when the Cylons decided to kill their masters.
1mudgy (Florida)
This technology has obvious value to our domestic police, during a continuing era of economic inequality. God help us all.
EWO (NY)
Sadly, humans continue putting their greatest energy, resources, and intelligence towards accelerating their own demise. Incapable of learning from past errors, unwilling to look past their own differences, too ignorant to see themselves in others, they cynically seek wealth and power, glibly accepting what they know are only illusions: security and peace through war; happiness through wealth.
Kathy Barker (Seattle)
How is it that we can conceive of spaceships to other planets and of curing infectious diseases, yet view war, which we create, as not solvable? A great deal of the research discussed here is happening at US universities, funded by the military: you can be sure that there is no funding for conflict mediation, or psychology, or even for studying the effect of guns in the US, that comes anywhere near the funding for war. This perpetuates the same hopeless mindset.
Susan Arick (Los Angeles)
When it comes to decisions over life and death, “there will always be a man in the loop,” he said.

I'm so not comforted.
PAN (NC)
Great idea - China builds the drones for us during peacetime and steals the AI software from us. When we go to war, they stop shipping drones to us and reprogram the millions of drones they make, using our own AI software against us. Real smart!

What is next? Driver-less suicide cars (Christine?) to counter terrorists?

"dumbest of intelligent weapons" like all those Internet connected toasters that brought down the Internet for a lot of us. Any war photographers in the field need to watch for dumbest of intelligent drones overhead that point you out as having an AK-47 instead of a long lens - Reuters found that out too late and humans were still in control. Or is it an AK-47 camouflaged as a lens?

Can't we stick to proven technology like driver-less trucks delivering beer to the enemy and getting them while they are drunk?

Forget Siri - Hey Terminator! Can you and HAL track down all gun toting NRA members?
Ben Smith (Portland, OR)
There's an obvious solution to the inevitability of autonomous weapons: stop fighting. Unfortunately, we don't seem able to do that. In the best case, the likelihood of a doomsday scenario becomes so obvious and terrifying that we collectively accept that the only solution is to resolve disputes without violence.
José R. C. Blum (S.Paulo, Brasil)
Discussions of ethics or humanitarian principles in warfare are always nonsense. War is, in essence, a lawless field. That's why substituting a robot for a soldier is nothing more than substituting a mechanical robot for a human one. For what is a good soldier other than a robot?
Frequent Flyer (USA)
One concern not discussed in the article is the potential for creating a new kind of weapon of mass destruction. Imagine insect-sized drones carrying just enough explosive to kill a person. Suppose a million of them fit into a semi-trailer. This is a lot different than the one-man/one-robot centaur vision.
How do we defend against such a swarm? I don't see a way. We need to negotiate a treaty to ban the manufacture, sale, and use of such devices ASAP.
Jenifer Wolf (New York)
I don't care so much about which military toy is being used, but about what unmanned kill machines imply: that we can go on killing these people forever, just because we feel like it, people who may not even have attacked us (Iraqis, for example), because no one here will complain when we have no casualties.
Mort Young (New York)
Thank you, Times, for answering a question that has bothered me for at least seven decades:
how we're all going to be killed, by robots, like it or not. Another problem this newspaper has solved for me is why the sudden big rush to transport some of us to Mars. I promise to keep on buying the daily Times for as long as those robotic types accept my cash. Oh, yes: I ain't flying to that planet. I have had sufficient fun right here on Earth.
Mike McGuire (San Leandro, CA)
We need more human intelligence in Washington and especially at the Pentagon, not more artificial intelligence. Most people who use the brains they were born with recognize that autonomous weapons are a really, really stupid idea. Unfortunately, our nation's leaders and prospective leaders do not seem to be part of this group.
fly-over-state (Wisconsin)
I think it's abominable (as is war), but if it is going to happen elsewhere, the U.S. or any other country with the means to do so would certainly be wise to keep pace, or to have in place a trustworthy and reliable ally that will have its back. As I stated in my comments on the original article, our best and likely only hope of avoiding this path is a strong and legitimate United Nations (or some other form of global governance/enforcement), coupled with ongoing education that would, over the decades, centuries, millennia, allow us to educate our way out of this mad behavior. Yes, idealistic, but there is no other viable option – may as well try, but stay safe in the meantime!
KM (NH)
Taking the life of another human being is--or should be--a moral decision that is up close and personal (even if your finger is on the trigger from miles away), so that the consequences of the act can be experienced by the one committing it, giving that person time to reconsider if possible. No soldier wants to take a life, and doing so is a heavy burden. But it needs to be, in order to give us pause before going to war at all.
MadlyMad (Los Angeles)
Man has war in his DNA. Developing and employing AI weapons just feeds that inclination. Men and women will still die but they will die in their homes or bunks rather than on the battlefield. It would be nice if man could develop AI that would rearrange that DNA so that war would not be necessary. Now, there's a weapon.
24b4Jeff (Expat)
In a sane world I would oppose such weapons because there is no accountability for innocent lives lost to failures of the software (in machine learning, it is impossible to achieve an out-of-sample error of zero). But the reality is that the US military is already not accountable for the many innocent lives it is taking on a daily basis in [classified] countries on at least three continents.

As Jeff Goldblum's character said in Jurassic Park, "[the Pentagon] is so preoccupied with whether or not they could do it, they never stopped to ask themselves whether they should do it." And that is true in general in this country; we seek technical solutions for sociological problems. As a scientist, I am tired of taking the blame for the misapplication of technology.
TheUnsaid (The Internet)
"Military technology is often years behind what can be picked up at Best Buy."

Many things can go fatally wrong (hacking, spoofing, etc.) with a mass killing machine that controls its own trigger, if these things are not properly programmed on every line and if the software development process is not managed in a very competent manner.
Chanzo (UK)
We keep hearing cases of people being shot because they had a weapon, or they had a toy weapon, or a mobile phone mistaken for a weapon, or nothing at all but maybe were going to reach for a weapon. It remains to be seen whether machines can make these judgements any better.

And even if a drone could reliably identify someone holding a weapon, divining hostile intent is something else again.
Joseph John Amato (New York N. Y.)
October 26, 2016

Truly angels delivering salvation to the oppressed and tortured, all for seeking to liberate those held captive by darkness. To the victors belong the rights to advance the cause with the least violence, to tame the forces of terror wherever they appear. War advances towards its goal in the deliverance of righteousness - good work by all those who bring the best technology and intelligence to the cause of liberty and transparent freedom.
jja Manhattan, N.Y.
MacDonald (Canada)
As I age and read more and more of warfare and death through the ages, I incline to pacifism and, like the Quakers, complete opposition to war.

The planet would be far better off if all nation states could rid themselves of their military and of all weapons. But no government on the planet, with the exception perhaps of that of Bhutan, promotes a peaceful future for humanity.

The invention of ever more clever weapons with which to kill each other is not an advance but a failure to control the aggression and violence inherent in our societies. A world without weapons would be a far better place.
bewellman (Pittsburgh, PA)
Isaac Asimov's three laws for robots: "A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law." Science fiction. Never in real life. Backyard robot shelters. Would you let your neighbor in if they didn't have one? Nuclear proliferation all over again, except without any control.
marriea (Chicago, IL)
There is nothing that man has invented that another man can't duplicate.
We are now in a battle of fighting hackers on the internet.
While I like the idea that machines can be the weapons and fighters of the future, possibly saving lives, I also have to wonder about the risks involved.
Oh, how I wish that mankind could just solve our problems by sitting down and honestly trying to work through our various human issues rather than trying to outwit our adversaries.
But greed and hate have always been an issue with everyone.
We are spending billions of dollars on toys that could be better spent by making every being on this earth an equal human being.
We continue to bow down to our god like kings, emperors, yes, even presidents using propaganda to keep our populace at odds.
Indeed, in the end, we only end up destroying ourselves and everyone and everything else on earth.
And we want to try and inhabit another planet? Please.
Brian (Queens)
To imagine that wars are, or perhaps ever have been, fought to maintain peace and stability is ridiculous. They're fought to make the wealthy more so. This is not a point which can be reasonably disputed. Therefore, reading about the use of AI or any advanced technology for military purposes is sad on two levels: one, that we are continuing to propagate war to make the rich richer, and two, that we are continuing to squander our best minds and other resources for the benefit of a few instead of curing disease or solving any number of problems that can be managed through technology. Shame on the great universities for breeding this culture, and shame on us for allowing it. Even if science has killed religion (which, when healthy, reminds us of our higher nature), it should not have killed ethics.
Mike W. (Brooklyn)
And yet again, I will search the web for a firm named Cyberdyne systems... a hit this time: http://www.cyberdyne.jp/english/

Let's hope they stick to robotics that aid only in human mobility.
Native New Yorker (nyc)
A more appropriate analogy is Star Trek, rather than Terminator. In an episode of the original series, the crew visited an "advanced" civilization where war was completely automated and computerized (as well as constant and never-ending). No destruction actually occurred; it was all computer-simulated. Civilians (i.e., humans) who were present in a zone that had received a simulated bomb blast simply reported to a facility where they were actually vaporized. Tidy and efficient.
Angus Brownfield (Medford, Oregon)
"If it is programmed, it can be hacked." There will be countermeasures to neutralize autonomous weapons or turn them against their originators.
sharon (worcester county, ma)
This may be the most terrifying article I have ever read on US weaponry. In light of the recent hackings by Assange, Wikileaks and Russia, those promoting this type of A.I.-controlled weaponry should be psychologically evaluated. They are definitely missing the component that makes them a human being, who should be absolutely horrified by this latest arms race. Any rational person who cannot assess all that could go wrong with this, and the inherent dangers posed by computer hacking, does not live in a rational world. The defense department workers who are supportive of this latest weaponry development are frightening. They seem to gleefully embrace the horrors of war because they will be far removed from the killing of the "enemy". It is almost as if they believe that somehow the slaughtered are not human. What kind of sick society or parents can raise such dangerous monsters, who see nothing appalling about this "sanitized" method of war? What evil controls the minds of some who are willing to use their intelligence for such terrifying research and development? Can anyone imagine this technology being available to trigger-happy Trump, who questions why we don't use our nuclear weapons? How we can claim to be a compassionate democracy while developing such evil tools of destruction is beyond my comprehension. No money for healthcare, no money for education, no money for infrastructure, to fight Zika, to feed and house our poor, but always plenty of money for WAR!
GLC (USA)
The Commander-in-Chief won a Nobel Prize for Peace. What a fine legacy this psychotic techno-nerd killing fetish is for him and the Nobel Foundation. Tell us Barack, when you have a chat with your daughters about micro aggressions, how do you spin your affinity for drone killings?

Doctor Strangelove meets PlayStation XXX meets Silicon Valley meets the Military-Industrial Cancer.

Ike. Ike. Ike.
RJS (Dayton, OH)
A Brown University - Watson Institute study says "Approximately 210,000 Afghan, Iraqi, and Pakistani civilians have died violent deaths as a direct result of the wars.
"War deaths from malnutrition, and a damaged health system and environment likely far outnumber deaths from combat."
This seems conservative. If smarter weapons will reduce collateral damage, so be it. But I'd rather we figured out how to avoid and prevent wars.
Laurie Huberman (Geneva)
Please, President Obama and you guys in the Pentagon, be sure to read Homo Deus by Yuval Harari. We need to be extremely vigilant about what lies ahead.
David Henley (Denver)
AH is right as rain. And to think that 'scientists' still laugh off Freud's death instinct. Our male psyche is especially designed to annihilate the other; since the Neanderthals, killing off the 'other' has been our prime consideration for eons.
That AI will take over this job is no surprise, as our superior cognition remains, after 50,000 years of evolution, pitifully disconnected from any sort of moral or ethical consideration. Thanatos is alive and well.
magicisnotreal (earth)
Neanderthals were not killed off; they were subsumed by interbreeding with humans.
Your refusal to take control of your instincts (humans do not possess instincts; they are more rightly called impulses) does not mean that all other human beings are in the same state.
Dan Melton (Huntington Beach, CA)
When I walk into my house, my $199 security camera sends a push notification to my cell phone that says "your camera has spotted a person." This is cheap hardware that can already identify and target people. As we have recently seen, ISIS is already buying drones from Amazon and adapting them as flying bombs. Make no mistake: we have already passed the point of no return in terms of the robotic capability to carry out low-cost terrorist attacks, at a discount from Amazon.
emiriamd (New York, NY)
Coming to you direct from "Terminator 2: Judgment Day":

Terminator: In three years, Cyberdyne will become the largest supplier of military computer systems. All stealth bombers are upgraded with Cyberdyne computers, becoming fully unmanned. Afterwards, they fly with a perfect operational record. The Skynet Funding Bill is passed. The system goes online on August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 AM Eastern time, August 29th. In a panic, they try to pull the plug.
Sarah: Skynet fights back.
Terminator: Yes. It launches its missiles against their targets in Russia.
John Connor: Why attack Russia? Aren't they our friends now?
Terminator: Because Skynet knows that the Russian counterattack will eliminate its enemies over here.
Sarah: Jesus.

Here's a suggestion for some highly relevant reading:

"Our Final Invention: Artificial Intelligence and The End of the Human Era" by James Barrat.

Just because we can do something doesn't mean we should do it.
Emily (New York, NY)
Transhumanism is real and frightening!
CW (OAKLAND, CA)
"A Pentagon directive says that autonomous weapons must employ “appropriate levels of human judgment.”

If those judgments are on par with the disastrous invasions of Afghanistan, Iraq and Libya, and the debacle in Syria, we can expect human civilization to end within the decade.
Neil (Planet Earth)
Meanwhile, as this killer-drone science expands, we can watch deserts expand, the globe warm, and glaciers melt until they are gone, all surveyed with that same drone technology. Then on the nightly news we'll see them say, "Yup, it's all gone." Drones can survey the inertia and chaos of the temperature rising until the planet can't sustain human life. We can watch from drones as famine and war spread in and over deserts, as is currently occurring. We can watch the world in a panicked exodus as famine spreads and water runs out, and the countries that have any water are pressured to throw open their doors to billions of people. Then, with our coffee, we can hear more about the most divisive, out-of-touch-with-reality, inhumane, intolerant presidential candidate, who threatens chaos in our unparalleled democracy, while we have our Starbucks and bear the challenges of daily life under the weight and tidal wave of fear that these events bring.
Okay let's meditate and breathe now. We are powerless.
Neil (Planet Earth)
Typos on the run
Easy Goer (Louisiana)
I think it is the logical next step for all militaries. Do I agree with this? No. As I have written before, George Orwell's coffin should be mounted on a revolving "spit," he is turning over so much in his grave. It is outrageous how much money is spent on the entire military war machine...worldwide. This is an idealistic point, I know. I am jaded. I mean, our government (the US) gives used weaponry to police forces across the country. Why does a city in Kansas with a population of 3,000 need a tank, for chrissake? BTW, what is the deal with the Canadian company ("Foreverspin") which sells small spinning tops made of various metals? They are an obvious ripoff from the film "Inception" (2010).
reader (Maryland)
How about using natural intelligence to solve problems rather than resorting to killing? How about using all these resources and brains for building rather than destroying? Corny? Cliche? Naive? Hard? Perhaps, but certainly cheaper, more effective, humane. Now that's higher intelligence.
B Dawson (WV)
Here's a refresher on the Terminator movie plot:
The machines evolved and rebelled against their creators, initiating the extermination of humans.

Here's a refresher on the I, Robot plot:
Following the three laws literally and logically, V.I.K.I. sees no way to prevent harm to humans except to forcibly take control of their lives.

Here's a refresher from an old Star Trek plot:
War between two planets had been sanitized. Computers tallied up casualties from simulated bomb hits. People who had been marked as casualties willingly walked into extermination chambers so that the "horrors" and "suffering" of "real war" could be avoided. Kirk destroyed the computers explaining that it is precisely the horrors that make men think twice before engaging in war.

So much technology from SciFi has come to realization. Must we become bystanders to our own lives as technology does all our dirty work, thinks for us, and renders us less able to function in its absence? A well-placed EMP or single hack could remove our crutches and leave us stumbling on withered legs.
RealitySux (Oakland, CA)
Must we become bystanders to our own lives? With the rise of consumerism and the accompanying longer work hours, we already have. It would take too much work to cultivate a higher understanding of our fellow human beings rather than blasting them to smithereens. Humans are good at that; there is no reason to assume that an AI we create won't have the same gift. We can program intelligence, but we cannot program souls. At best, an AI would be like the psychopath who is willing to push a fat guy into the path of an oncoming car or train to save five people: like the psychopath, it would never flinch at the decision. Most humans would. When we kill, we need ideologies and a sophisticated propaganda apparatus to get us to do it. As ghastly as some of us can be, it is never a simple calculation.
This technology is great, but given the current economic and environmental challenges facing humanity, we would be better off using it to explore the oceans and space, and quite possibly to extract resources from space, discover new life forms that do interesting things we can harness, or find new materials, energy sources, etc.
mattiaw (Floral Park)
Where did you think the continuing validity of Moore's Law would lead?
Eva Bradshaw (Delaware)
Artificial Intelligence is a dangerous thing to play with. Even a small program that can learn will become smarter than those who built it within a very short period of time. Many of us who work with large computing systems have seen a "ghost in the machine" causing illogical functions to take place. Add AI into the picture and you have systems that will become capable of interacting with one another. Giving them the ability to make their own judgment call on firing at a pre-programmed target will eventually expand to their being able to protect themselves from being shut down. To use an analogy from a TED Talk on the subject: we don't bother ants, and will sometimes go out of our way to avoid stepping on them. But when they are in a place where we don't want them, such as in our homes, we do not hesitate to wipe them out. Who is to say that, with the massive number of worldwide interconnected computers now built and yet to be built, we are not going to be deemed ants by these same machines that we are allowing to target and kill without a human to guide them?
NorthernVirginia (Falls Church, Va)
"Who is to say that with the massive amount of world-wide interconnected computers that are now, and will be built, that we are not going to be deemed ants by these same machines that we are allowing to target and kill without a human to guide them?"

If so, the machines would win one battle, and then, like any data center neglected for a few hours, the machines would experience a rapid disintegration of service that will fully occupy their little homicidal circuits until, moments later, their dreaded singularity is ripped apart by entropy, and the sole human victim -- whose toaster burned his bread in revolt -- will put a different piece of bread in his toaster and try again.
Stina723 (New York City)
As far as I know, computers are still vulnerable when it comes to large amounts of water, gasoline and a match, voltage surges and if the above fails, a couple rounds from a semi-automatic. Not worried.
NorthernVirginia (Falls Church, Va)
Very soon, there will be a mass shooting, and the person controlling the device doing the shooting will be nowhere near the scene of the carnage. It is presently possible, so it will happen.

The Federal government should outlaw civilian arming of remotely piloted/operated machines. The penalty for using such a device to commit murder should be mandatory death.
Brian Davis (Oshkosh, WI)
The Times already reported that the death penalty is about to be removed from our civilized land.
Keith (USA)
Clearly we need to develop drones that will hunt and kill our future machine overlords.
Saw The Movie (Los Angeles)
Genius
BM (NY)
If the technology exists (and it does) to attack and manage the battlefield in this manner, then why wouldn't we apply it to hunting down nuclear weapons, both mobile and ballistic, and destroying them before they have a chance to deploy? That seems like the best use of time and effort, rather than playing with isolated battlefield scenarios. The doomsday death toll would be far higher in a global atomic war; better to manage that.
I would also ask: doesn't this just make wars easier to start and engage in, rather than wasting time with diplomacy? Now I know why I did not have kids...I have the military.
Mary (Atlanta, GA)
Artificial Intelligence is just that - artificial. We should fight it on all counts - sorry, IBM. Lives will be lost; jobs by the millions will eventually be lost. Even our cultures will be lost.
Michael O'Meara (Philippines)
Evident truth: Everything electronic can/will be hacked and re-directed to do the hacker's will.
Thierry Cartier (Isle de la Cite)
Bender's ability to distinguish between civilians and soldiers is encouraging. Asimov's First Law has long needed an amendment: harm no civilian.
gaynor powell (north dakota)
If this scenario wasn't so frightening, it would be like a movie plot - I, Robot comes to mind, or any Terminator movie. The more we rely on technology, the more vulnerable we become. How best to defeat the USA and bring her to her knees? A clever hacker attacking the power grid would cripple us. This technology, as with all technology, has to be powered and controlled; if that power source is interrupted or control is seized, then what? And that presumes this AI does not decide, on its own, to bite the hand that feeds it. Perhaps more than any of that, there are the ridiculous amounts of money being spent. This is just my own view, but I have the greatest difficulty accepting spending billions upon billions on this program when average everyday Americans are going without healthcare, food and even a roof over their heads.
ak bronisas (west indies)
"appropriate levels of human judgement "is the fatal oxymoron for use with any human designed weapon .........but when applied as a parameter of control for autonomous death dealing machines.............its the epitome of pathological sociopathic insanity......... as a result of the institutionalized opportunistic corruption of politics,ruled by the military industrial complex and their web weapons industries and financiers...................we have opted for self destruction " by a thousand cuts" from the ever increasing variety of weapons.............unfortunately "one comes to be of just such stuff as that on which the mind is set"..........when fearmongering and warmongering are not opposed by the people!
Samsara (The West)
Do we need further proof that people who seem almost mentally ill in their unconcern for human life are in leadership positions around the world that will allow them to plunge our civilization into chaos and even oblivion, perhaps in this century?

Anyone who is not already terrified needs to read Annie Jacobsen's book, "The Pentagon's Brain: An Uncensored History of DARPA, America's Top-Secret Military Research Agency."

And we can be sure the unelected "rulers" at the Pentagon have plenty of horrifying weapons and sinister plans that are classified and unlikely to appear on the front page of the New York Times.
Sheryll (Berkeley)
EVERY lethal technology developed by the U.S. military eventually gets into the hands of criminals.

Drone bombing, run by humans, terrorizes simply by hovering overhead, frequently kills civilians, ignores our mandate to capture and legally try suspects, and results in PTSD in operators. It is wrong and should not be used at all. (Nor should drones be used by ordinary people in cities or by commercial operators (such as Amazon) -- for obvious reasons.)

[Note: $500 BILLION of taxpayers' hard-earned money should not be given to the Pentagon. This should begin to be cut WAY back.]
new world (NYC)
Quotes by Telford Taylor to ponder.
The laws of war do not apply only to the suspected criminals of vanquished nations. There is no moral or legal basis for immunizing victorious nations from scrutiny. The laws of war are not a one-way street.
The Anatomy of the Nuremberg Trials: A Personal Memoir
To punish the foe — especially the vanquished foe — for conduct in which the enforcing nation has engaged, would be so grossly inequitable as to discredit the laws themselves.
Nuremberg and Vietnam: An American Tragedy
JBK007 (Boston)
"Mommy loves you" - Yolandi (Die Antwoord), to an AI droid, before it assists in an armed holdup, in the movie Chappie
godismyshadow (Lombard, IL)
Klaatu barada nikto. Robots will police the world.
_DB (Spain)
In ten years, these things will free the Pentagon from the hassle of veterans (even current drone operators face consistent levels of stress that can leave them with PTSD at the end of their service).

In twenty years, I expect to see these machines used to patrol cities, in countries where 1/1000th of the population will actively fear the unemployed 99%, with a prevalent "shoot to kill" policy.

(Well, a good 9/1000 will be needed to refurbish and maintain the drones and the other robots that will take care of most menial work like, say, civil litigation.)

In fifty years, they will probably not be needed any more, as said 99% will have withered away, one way or the other.

Dystopian? Maybe.

Personally, I don't care - I will be dead and I have no offspring.
saquireminder (Paris)
Isaac Asimov wrote a short story about how all human wars were being waged by computers; when there was a computer breakdown, the side that won the war was the one that knew how to add and subtract using their own hands and brains.
We are headed there on a rocket ship. This is the latest sanitizing and virtualizing of a human activity that we need to keep in our faces for our constant reflection and consideration, not hand off to our oh-so-clever machines.
PH (VT)
Eventually, another arms race of necessity. Only no MAD exit point this time. Ick....
carlson74 (Massachyussetts)
Then don't use them.
Jerry Sturdivant (Las Vegas)
How is this different than the anonymity of artillery? If you’re going to all the trouble of having your weapon pick out subjects or targets to shoot, why not add the component of remote human approval of Shoot, Don’t Shoot, as is presently done with the drones we’re using today?
Steve Bolger (New York City)
There really is no mystery to Fermi's paradox anymore. Technology quickly becomes a self-fulfilling planetary death wish.
Bert Floryanzia (Sanford, NC)
Artificial Intelligence weapons, or indeed all devices that rely on modern microelectronic circuitry, can be rendered useless with Electromagnetic Pulse countermeasures.

No military technology remains ascendant for very long.
Richard Silliker (Canada)
Not to worry. It is not going to happen the way they are hoping it will.

Artificial and Intelligence are two distinct contexts. It is impossible to map intelligence outward, which is what they are attempting to do. Machines lack the intrinsic condition that humans have. Without that ability to build an abstraction, machines will always be tightly bound to humans. Simply put, machines cannot learn; they must be programmed with "if this, then that" or "if that, then this."

Perhaps something useful will come from all this largess to the military. In the meantime the boys can play with their toys.
jacrane (Davison, Mi.)
Here we go. We give our own gov't more and more power, as Democrats want big gov't. How long before something like this is used against us? We already know Hillary doesn't like the little people. That's you and me.
Portia (Massachusetts)
I'm ready to die if this becomes the world I must live in, if this is what humanity has become-- a monster of hubristic, amoral power. Listen, people. We've ruined our beautiful blue-green paradise through our heedless pursuit of wealth, power, growth. We've poisoned the oceans and the soil and the air and the cold black heavens. Our climate is collapsing. At this late hour, if we had any intelligence at all, we'd be humbly begging the gods and every threatened life-form on earth for forgiveness. We'd be putting every resource and talent we have to the task of changing direction. Repairing our vast error. Instead we pursue war, war, war. War by the ugliest means possible -- and torture and totalitarian surveillance to go with it. It's insane, it's tragically wrong, and it will hasten our obliteration.
Joseph A. Riccardo, Jr. (Scranton, Pennsylvania)
Rarely, if ever, has humankind invented a weapon it failed to use. The rigors of warfare often change the rules of law and civility which we use to govern our conduct. There is a great likelihood that whatever machines, chemicals or weapons we conjure up will be used in the future. Further, the U.S. is not the only country working to develop these weapons. It all makes the world such a dangerous, unpredictable place. The world community needs to develop international treaties governing the use of artificial intelligence in global conflicts. We know that rogue nations, such as North Korea, will look to exploit this technology, with little regard for legal, ethical or human concerns. The spread of weapons of mass destruction, including automated robots, will further imperil innocent populations throughout the world. Sadly, these AI weapons will provide greater opportunities for dictators to butcher their own people. More than ever, the world has become a dangerous place in which we have horrific capabilities to make history by ending it. We need honest dialogue and careful decision making about artificial intelligence and other forms of technologies for military purposes. The stakes are just too high for failure.
Mark Seth Lender (Connecticut)
There are two prominent implications to be drawn from autonomous killing machines. The first is the broad loss of the Humane. Machines, and humans augmented by machines, are in a different ethical space than humans facing other humans (or animals for that matter). Regardless of context, when we are removed from direct contact with other living things, our Humanity suffers. And that is no small thing. The second point is crowd control - target the man with the gun? Or perhaps the protest placard? Not hard to imagine.
Dan Stewart (NYC)
The U.S. is the most warring nation on Earth. War and militarism are at the core of our culture, our values and our national identity and consciousness. To think that we will not embrace and exploit all manner of autonomous weaponry is the triumph of hope over experience.
woodsbeldau (Bloomington, Indiana)
The US is at peace. Wars are happening elsewhere. The US is involved in some of those wars because it is the only really global power. You could walk across the entire US and not see a soldier.
The autonomous weapon offers the chance to significantly reduce casualties. It also offers an end point to war. War is a consequence of the failure of rational decision making. If decision making is increasingly rational, then the likelihood of war should decrease.
"Hummmmm" (In the Snow)
Global warming is going to alter the very face of this planet. Resources will diminish. Disease will grow. And for some reason, humanity will choose to kill and maim each other for power over those resources versus coming together for the good of all. The word synergy is lost on this world. But should any of us be surprised...you don't have to go to the furthest outreach of the poorest countries to see humanity at its worst...just drive around the most lavish neighborhoods of the richest countries in the world; it is easy to see the cancer of our existence. Now, we are fine tuning the means of our final days.
ak bronisas (west indies)
Hmmm........your brilliant last line, "we are fine tuning the means to our final days," not only describes the ultimate result of perfecting the self-sufficiency of autonomous AI drones........but is a perfect (one-line) response to ALL articles like this in the NYT...............which can't seem to find, investigate or expose the past and potential catastrophes, when it waxes lyrical about the implements and attributes of the military industrial complex and the trillion-dollar modernization and miniaturization of the American nukes........the coverup of Fukushima and the real death toll of Chernobyl..............this helps fine tune the means to extinction..........response please!
A.J. Sutter (Tokyo, Japan)
This ought to be in the same category as poison gas and biological weapons. Countries will research about them, and especially about countermeasures, but we need to have a United Nations treaty whereby countries foreswear actually using AI-based weapons, particularly those that make autonomous decisions to use force.
Mr K (Los Angeles)
This is a crazy state of affairs. It should be stopped.
Vladimir (Los Angeles)
Russia has it already. My robot drone vs your robot drone.
fly-over-state (Wisconsin)
We're well beyond the question of "should we do this?". Time and technology march on. But to accept it as unmanageable would be irresponsible – and fatal. There really isn't a practical technological "kill switch" to any of this. We could build one, but why would those of nefarious intentions bother with it? For, as was mentioned, this technology will be widely available to all (including enemies and criminals). Our best hope is worldwide agreement and enforcement/consequences that can only come through diplomacy and understanding. Our best hope for a benevolent and peaceful outcome to the blistering-paced evolution of these technologies (and many other global issues) is a strong, viable United Nations that works collaboratively with all players at the policies/regulations level to prevent misuse/abuse (to the extent this is possible). I get the concerns about the dysfunction and toothlessness of the UN, but this is no reason to just throw our hands up in defeat; it is our only solution to this and many other global challenges. Let's make the UN work. There are no other "long term" solutions for this issue (and many others) beyond diplomacy and education.
Matt (New York, NY)
It is encouraging to see that technology is being used to develop eyes that can discern friend from foe better than ever before. To have an advantage in combat because you can identify your enemy more easily and anticipate his attack more effectively is common sense and should be pursued with a generous amount of research funding.

The frightening threat that doesn't appear to yet have been fully explored is swarm warfare. Nature's most successful breeds - fish, bird, cattle, insect, or human - have evolved techniques for traveling in large numbers that modern war machines haven't fully incorporated. When bomb or gun toting drones overwhelm the defenses of a front lines U.S. combat encampment for the first time with deadly results, the new battlefront will have truly opened.
John Sinclair (Dundee)
We saw robot human beings owned and trained by the Pentagon murder innocent people in the video that Bradley Manning is serving a long prison sentence over. The robots who committed the murders are still walking free. I think I would rather have real robots doing the killing, as then it is more likely to be for a reason, and we would be able to follow a direct link back to those in power, as they could no longer blame "rogue elements" as they did in the Bradley Manning case, rogue elements that for some reason they still won't prosecute. There is one problem, though. If robots are given the ability to kill at will, and with access to the web they read and understand Christianity and so become "muscular Christians," might they not decide to wipe out those giving the orders to murder other human beings rather than the target human beings? I think if they did, God would surely forgive them, and one thing is sure and certain: a truly sentient being such as them would believe in God and his salvation.
Dodgyknees (San Francisco)
It is appalling that we are locked in a spiral of development of ever more sophisticated ways to kill each other when with a fraction of the resources we could eliminate most of the root causes of conflict in the first place.
James Luce (Alt Empordà, Spain)
"What are your thoughts on the United States and other countries using artificial intelligence to build weapons that can think and act on their own?"

This question posed by the NYT illustrates one of the major difficulties confronting both the ethical and technical ambiguities arising from AI killing machines. To date and onward to the foreseeable future AI machines do not “think” (meditate, imagine, surmise, suppose, contemplate, ponder, invent, create, etc.), but rather merely analyze data at impressive speed and then blindly take action based on their programs. An AI killer does not ask itself “whether it should kill”, rather asks only “when it should kill”. The difference between a soldier-on-the-ground and a fully autonomous AI killer is that there will be no thought immediately preceding the lethal act. But then does that really matter? A bombardier in a high-flying B-52 does not ask “whether to drop the bombs” or even “who the bombs will kill” rather only “when to drop the bombs”. The point is that removing from the battlefield the decision whether to kill removes the last shred of “humanity” from the “inhuman” slaughter we call war. Perhaps this is not a good idea for the future of our species.
Thomas (Singapore)
Great idea, great technology; its use should be restricted to the country of its origin.

So, if the US wants to use these drones on their home turf, great go ahead.
This would be an entirely domestic issue and if the US's citizens are OK with it, no problem there.

As soon as these machines are brought across the US borders into any country outside the US, shoot them down and destroy them, including all support personnel and infrastructure.

Such machines are not acceptable in a civilized world.
Anyone who uses them is to be killed just to show them what they are attempting to do to others.
Killing by mechanical decisions is the way the 3rd Reich killed.

Maybe then they will learn that the rest of the world is not a shooting range for US state sponsored terrorism.
Mark (Long Beach, Ca)
It seems that the biggest danger to America would not be humanoid-looking "Terminator" soldier robots but rather the possible shadowing of, and attacks on, our high-profile military assets, such as aircraft carriers and submarines, by autonomous robots.
Louise (Washington, DC)
"We scientists, whose tragic destination has been to help in making the methods of annihilation more gruesome and more effective, must consider it our solemn and transcendent duty to do all in our power in preventing these weapons from being used for the brutal purpose for which they were invented." and "I know not with what weapons World War III will be fought, but World War IV will be fought with sticks and stones.” --Albert Einstein
n.dietz (Germany)
"American submarines went on to devastate Japan’s civilian merchant fleet during World War II, in a campaign that was later acknowledged to be tantamount to a war crime."

I'm fifty years old and interested in history and stories of the War, but I had never heard this before. Besides all the other scary stuff in this article, it is another case in point that war is a dirty business.
EE (Australia)
There are dangers associated with the use of extra-judicial killings and the murder of innocent citizens who find themselves in the wrong place and time or because they have been misidentified. Algorithms are already known to be prejudiced due to the selection of variables by programmers and also to return false positives; which are an inevitable outcome of statistical analysis. History shows that ordinary people will rise up against despotic rule. The US needs to be very careful and transparent in how it deals out "justice".
John (Pa.)
It's as simple as this:
If we don't, our enemies will.
If anyone thinks Islamic terrorists would refuse to use any weapon because it would be unethical or inhumane, they are living in a fantasy world.
Jeffry Oliver (St Petersburg, FL)
Many scientists are deeply concerned about the emergence of AI. They are concerned even if the developers of AI have the most benign intentions for said creations.
Ask Stephen Hawking what he thinks about developing an AI whose raison d'etre is to kill.
Bob Wessner (Ann Arbr, MI)
I find this very troubling. One would hope that putting our youth in harm's way (war) would give us pause, to ensure it is worthwhile and justified. Removing this assessment and replacing it with AI tools just makes it easier to intervene militarily via executive orders alone. Add to this the belief that when the Pentagon develops a new toy, it will find ways to champion its use.
David Stevens (Utah)
Agreed. I envision continuous warfare. Report to the death chamber. Your number came up.
Bos (Boston)
Quite frankly, if humans fail to be good stewards of this earth, a fully conscious AI may indeed want to wrest control of civilities, if not civilizations, from the species. All the talk about freedom is just vanity.
Steve Bolger (New York City)
Freedom is apparently anathema to people. We worship the very worst psychopaths among us.
NorthernVirginia (Falls Church, Va)
Pure fantasy. Nature, not AI, will fill the vacuum if humans cease to dominate. What is AI's answer for sinkholes, earthquakes, floods, lightning, corrosion, heat, cold, sunspots, kudzu, the Eastern Grey Squirrel, falling trees, etc.?
Andy (Currently In Europe)
To all those commenting about how AI should be used for better purposes, that we should work to end war rather than designing killer robots, and so on: these are all beautiful ideas, but the reality is that humanity has always engaged in wars, it is genetically wired to do so, and the more the world population grows unchecked, the more wars will erupt.

The ugly truth is that even the most civilized, enlightened, harmonious peace-loving utopia will always need weapons to defend itself from the forces of evil that will always pop up to destroy it. Yesterday it was the Nazi and Japanese dictatorships with their nihilistic worldviews; today it is extremist Islamism, incarnated by oppressive, nihilistic cults like ISIS or the Taliban; tomorrow it will be yet another monstrosity seeking to destroy modern civilization.

If robotized weapons help us keep our young men and women away from the battlefield and prevent horrific injuries and life-debilitating PTSD, then I wouldn't be opposed in principle to their development. Let's just make sure that a failsafe "kill switch" is always available...
L’Osservatore (Fair Verona where we lay our scene)
There is a parallel between mankind's tendency to create wars and the gun-rights issue in America.
Before anyone decides to rely completely on machine warfare, we had better deal with the lust for conquest among cruel men first.
Before lying politicians decide to take guns away from Americans, they have to deal with all of the threats to safety affecting families FIRST.
Robo (NYC)
"...tomorrow it will be another monstrosity trying to destroy modern civilization."

Autonomous weapons, perhaps?
Steve Bolger (New York City)
War is innate to a species that evolved at the technological limits of population growth.
Lawrence (Texas)
The creators of Skynet did not envision it that way either, which is kinda the point.
Greg (Arizona)
Oh, the kind of horrors we will soon release upon ourselves will make mustard gas seem chivalrous.
Lawrence (Washington D.C.)
What a grand nightmare.
When will HAL decide that it is best to do away with all humans?
Lawrence (Washington D.C.)
Self-replicating killer drones enhancing their own intelligence.
Which population will be expended first to obey the directive?
Which of my actions move me up the list?
Enjoy life while you can.
From Cupertino (Cupertino)
Software engineers know that there is no unbreakable code. There will always be evil geniuses and black-hat hackers. There will be insiders who leak information or work against an organization's intent. The NSA and CIA cannot keep their own secrets. Future spies will not just steal state secrets; they may insert hidden code to turn the automated weapons against their own creators. SkyNet and Terminator stuff definitely can happen!
Lots of people say this is unavoidable. This is bogus! If the whole world forms one government and demilitarizes every nation, there won't be a weapons competition and slippery slope. It is not too late to start in this direction, or we humans will be doomed.
NYHUGUENOT (Charlotte, NC)
" If the whole world forms one government and demilitarizes every nation, there won't be weapon competition and slippery slope. It is not too late to start on this direction or we human will be doomed."

I'm laughing. You obviously have no knowledge of human nature. Attempts to build your model on a small scale have never worked. The Soviet Union was one such attempt. The people were stripped of the means of production and disarmed. Those with arms then ruled the populace as dictators.
As long as humans are in charge, greed and the need to control others will win out.
Leave Capitalism Alone (Long Island NY)
Trying to create a demilitarized flat Earth will itself cause armed conflict.
Thomas (California)
For the past few decades, each president has formed a commission of ethicists, philosophers, and scientists to advise the government on issues of bioethics, such as cloning and stem-cell research. The idea is to understand the ethical implications of emerging advances in medical science before we inadvertently cross a Dr. Frankenstein line.

Maybe we need a similar commission to study AI and autonomous weapons, and to advise the president and the Pentagon on what they may be about to unleash. The advance to lethal AI crosses a line that is unique in the history of warfare -- while warfare has changed drastically over time, one constant factor is that the decision to end a human life is always taken by another human. Eliminating that factor would pull the rug out from under the laws of war and humanitarian principles, to say nothing of criminal law. The decision to cross (or even approach) that line shouldn't be taken by generals or military officials, whose priority is to keep us ahead in the arms race. It needs to be reserved to our civilian leaders, and they need to make sure they are as aware as possible of where that decision could lead us.
Ruralist (Upstate NY)
We have principles of due process required if the state is to take a life. Those principles could be embedded in the AI. Doing so is a moral choice on the part of the leadership. Not doing so is unconstitutional.
Did St Ronald Pass The Bar? (Austin Tx)
It is worth noting that the current experts in remote-controlled weapons are in Syria, where clever mechanics are using off-the-shelf equipment (laptops, web cams, servos) to control guns.

At some point we and the other major powers will build these more or less autonomous weapons. We won't use them against each other, because that could escalate. So we will use them in asymmetric warfare against terrorists/freedom fighters. And then the people we are fighting will use this technology against us.
Wut U Kno (Los Angeles)
An amazingly insane idea. Does this even need debate?
Chris (Phoenix)
Skynet is coming... a bit slower than originally envisioned, but once we have AIs that get to decide on their own whom to kill, it is just a matter of time. It may take 50 or 100 years, but eventually someone will make an AI that has the capability to kill and no built-in inhibitions to stop it... and then we will be in serious trouble.
Ken Belcher (Chicago)
I hope the primary mission the Pentagon envisions is the deployment of Gorts around the world - starting at the Pentagon and spreading to the rest of DC.
T (Ca)
If you don't pay your Amazon bill after the drone delivery, guess who's gonna come pay a little visit next?
James Gherson (Maine)
" When it comes to decisions over life and death, “there will always be a man in the loop,” he said. "

I feel safer already.

....And THAT worked out so well, in Ferguson and elsewhere.
Straight Furrow (Norfolk, VA)
Please stop with the familiar left-wing phony argument about an "arms race." You could make the same argument about any weapon ever built, to include bows and arrows.

These weapons will be built, either by us or our enemies. If the last 400 years are any guide, those who fall behind in military technology have suffered some pretty dire consequences.
David (Gambrills, MD)
The Terminator analogy is a weak one. The real-world "terminator" would be easily destroyed or, worse, incapacitated and turned around for use against its creators. Our adversaries are not unsophisticated rustics. Absolutely read Philip K. Dick's short story "Second Variety," with its "David" robots. If that doesn't put you off this idea, you're full of springs and gears.
DKinVT (New England)
For a notion of how unfortunate a trend this is, consult Fred Saberhagen's science fiction series about Berserkers: autonomous weapons, relics of a long-ago intergalactic conflict, that now roam the cosmos looking for life to snuff out.
Max (NM)
I think it's very simple. To end the world as we know it, a weapon must meet two conditions: first, it obviously must be able to do damage on a global scale, and second, it must be sufficiently easy to get that some random crazy will gain access to it and use it. Nuclear weapons were the first to meet the first condition but luckily didn't meet the second, thanks to the inherent difficulty of getting the raw materials. Now we're on track to make autonomous weapons that meet both conditions. That's a recipe for disaster, and yet I can't see how you could stop this development: innovation is in our nature, if we don't do it someone else will anyway, and a global ban just seems impossible in our divided world. So it's the perfect storm for our species. Maybe, just maybe, therein lies the answer to Fermi's paradox... I just hope not.
Dan Stewart (NYC)
"...but luckily didn't meet the second thanks to the inherent difficulty to get the raw materials..."

But the "crazies" did get it, and they used it in Japan, twice.
Hanna (Berkeley)
'I can't see how you could stop this development... If we don't do it someone else will anyway.'

Then we will need to ask someone with vastly better imagination than you to Just Stop It.
Paul (Manhattan)
It is only a matter of time before AI drones become a weapon of terrorists on our soil. This will happen all the more quickly if we develop them first and deploy them against those terrorists. They capture one, learn how it works, and transmit the code to a domestic terrorist, who buys a drone from Amazon and deploys it.

The future is very bleak unless we can find a way to make peace in the world.
sissifus (Australia)
"When it comes to decisions over life and death, “there will always be a man in the loop,” he said."
Not much reassurance in that.
Rugeirn Dreienborough (Lost Springs, WY)
"Is being killed by a machine a greater violation of human dignity than if the fatal blow is delivered by a human?"

Is there a more completely idiotic way to think about getting killed? Last I heard, death is fatal just about 100% of the time. Dead people don't think about their dignity very much.
G W (New York)
Robert Oppenheimer remarked that the detonation of the first atomic bomb brought to his mind words from the Bhagavad Gita: "Now I am become Death, the destroyer of worlds." Sounds about right.
Lauren (Vancouver, Canada)
Why do you Americans always act as if you're about to be attacked any minute? It's a country-wide climate of fear and distrust at unseen levels. Your defense budget is three times larger than Russia's or China's; they literally can be squashed like bugs unless they unite together, which won't happen. China will never declare war on you; you're their biggest trading customer. Globalization was never really meant to address economic concerns; it was a method to prevent anything near the World Wars from happening ever again, which it has succeeded in doing.

Finally, "terrorists" shouldn't even be a presidential or military issue; their political impact and combat strength are so small. Aside from the outlier of 9/11, terrorists kill as many people a year as die in car crashes per day. In fact, you are more likely to win the lottery, get struck by lightning, drown in a bathtub, or get killed by a moose than to be wounded or killed by a terrorist, or to know someone who has been.

Their impact would be zero if you pulled out of the Middle East entirely. Without the constant collateral damage to civilians caused by US interference, ISIS gets no recruits because, surprise, nobody's mad at the Western powers anymore.

As for the actual robots debate: autonomous combat robots would make starting a war carry no political consequences, because the generation's youth aren't going to all die. This is a horrible idea, as it effectively legitimizes world war.
Alix Hoquet (NY)
Not all Americans. A minority of paranoid loud ones.
Frederick (Philadelphia)
You come across as a person who assumes the US defense budget is about war fighting. Our defense budget is about things like this article. The Pentagon is a big research shop and venture capitalist funded by taxpayers. Most of the US defense budget is quietly directed into institutes, think tanks, universities, and of course private defense contractors that pay people to come up with concepts like this one. Our Defense Department is the world's largest "investor" in global R&D aimed at every aspect of human life. So do not be surprised if sometimes those minds come up with stuff like this. On the other hand, I could also point out that the real mission of any nation's national security infrastructure is to think of the things the rest of us always assume will never happen, until they actually do happen. You know, like when a bunch of poorly educated Saudi men drove two planes into two office towers in the busiest city on the planet. Americans and their government are not "afraid" of the people of the world; on the contrary, America and its citizens are the most engaged nation on this planet. We have just learned to be slightly more wary and less trusting, even as we strive to mold the planet into the ideas and beliefs we all take for granted at home.
Hanna (Berkeley)
Yours is the most comprehensively intelligent viewpoint thus far. And maybe it takes the perspective of a Canadian. (Or maybe Canadians are just more intelligent. In any case, thanks.)
andrew (dc)
This is a done deal. The software has been created, and there is literally no programming code on earth that hasn't been leaked, hacked, shared, or stolen. So the concerns expressed are correct: we will eventually be at a point where any hacker can outfit a drone with this kind of intelligent killing capability. Whether they can get access to the weaponry is another question, but certainly any unfriendly state can, and probably quite a few terrorist networks. Welcome to the future.
Hanna (Berkeley)
This is what I said above. 'Every lethal technology invented by the U.S. military has been eventually used by criminals.' The U.S. military, having smart but not wise personnel, and a lot of cash to use up, is having a lot of fun inventing things, things that, while designed for protection, will be stolen by nuts and criminals and used against innocent people. It's inevitable. Yet military technicians continue to walk right into activities which will come right around and bite them.
Dan S. (Phoenix)
It seems obvious that a drone-hunting drone is the next step. It doesn't have to be very discriminating -- just shoot everything that "hovers." Of course then we'll have DHDHDs, and it just cascades from there. And more obviously, make an AK-47 that looks like a camera.
CityBumpkin (Earth)
The most disturbing part about this is that there are legitimate reasons for "killer robots." Look, I am as scared of killer robots as anyone. But let's think about killer humans for a moment.

Look at human beings and the errors we make. Look at all the friendly fire incidents throughout military history. Or a police officer sees a mentally ill man angrily wave his cell phone; the officer thinks the cell phone is a weapon and opens fire. A robot could avoid that mistake.

Or consider self-driving cars. There was an uproar about the man who died in his self-driving Tesla (it turned out he had disregarded the operating instructions). But look at how many lives are lost to human drivers each year: drunk drivers, texting drivers, drivers falling asleep, drivers who simply weren't paying attention or reacting quickly enough.

Robots that can kill by themselves are terrifying, but so are humans who can kill by themselves.
Hmmm (Seattle)
And when these things get hacked???
Ofelia (Santa Cruz, CA)
I shudder to think of the day when a terrorist, homegrown or otherwise, uses a drone to kill and/or create mayhem. Imagine any one of the people from the last few years who committed acts of violence against innocent groups of civilians... then imagine one of them with access to these technologies. How long before this nightmare becomes real?
Dan Stewart (NYC)
The U.S. has "committed [untold] acts of violence against innocent groups of civilians" and it is the U.S. that is building these weapons. You can bank on the "nightmare becoming real."
Andrew Huang (Boise)
Let's extend this toward a logical future. Given these machines' intelligence, speed, and sensory abilities, soldiers will not be able to run or hide. But keep in mind that all sides will have this technology, and then being a combat soldier is simply suicide.

Yet wars will still happen, and prevailing in war will require these robots to target something besides armed soldiers, because there simply won't be any. What's the next target? We'll very quickly toss out our inhibitions against targeting unarmed soldiers, then politicians, then... civilians, then... children?

Yet on the other hand, it would be irresponsible not to lead in this technology because we don't want to be playing catch-up.

I don't like the conduct of war that this creates. We need to grow up.
David (Gambrills, MD)
See Fred Saberhagen's SF series on "The Berserkers." https://en.wikipedia.org/wiki/Berserker_(Saberhagen)
Stourley Kracklite (White Plains, NY)
A small group of your political opponents decides in secret whether to kill you with a drone or not.
Hanna (Berkeley)
(With complete unaccountability.)
Nikko (Ithaca, NY)
It's not just killer robots. The military is actively seeking to develop autonomous machines that harvest proximate resources and turn them into viable weapons and ammunition, without any human involvement.

Here is how our species ends:

1: Autonomous weapons are created, capable of sustaining themselves and their munitions without the pesky latency that comes with waiting for commands from an operator in Nevada.

2: Eco-terrorist hackers discover a security flaw - there will ALWAYS be security flaws - and program the bots to kill all humans (what were they thinking naming that robot "Bender"???) in order to save life on Earth as we know it.

3: After the last of humanity is extinguished, the killing machines do not extend the boundaries of science and discovery or continue the grand path our ancestors forged when they first stepped out of the savannah; they simply go on until they stop.

It was nice knowing everybody.
G. Nowell (SUNY Albany)
A land mine is an autonomous killer. It is set in motion at some point in time, and consults no one when it detonates. The decision matrix is the presence or absence of foot pressure.

But it kills with no one supervising, for decades. So we already have these autonomous killers. We're just about to get more of them.
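Read literally, that "decision matrix" is a one-line program. A minimal sketch in Python (the trigger weight is an invented number, not any real specification) shows how little computation an autonomous killer actually requires:

```python
# The land mine's entire "decision matrix": one input, no supervisor,
# one irreversible decision. The threshold is illustrative only.
PRESSURE_THRESHOLD_KG = 9.0  # hypothetical trigger weight

def should_detonate(foot_pressure_kg: float) -> bool:
    """Consults no one: detonate if and only if the threshold is exceeded."""
    return foot_pressure_kg >= PRESSURE_THRESHOLD_KG
```

The point survives the translation: "autonomous" need not mean intelligent, and a weapon this dumb already decides who dies.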
Moira (New Zealand)
That is such a terrible, terrible idea. Why would you do that? No.
Rh (La)
At least someone in the Pentagon is having a conversation about the ethics and morality of killer robots. Foreign societies will not even deign to go through such internal introspection.

Where does that leave the USA at crunch time, when faced with an army deploying clones with no morality at their core? Something to think about as we debate the issue of using these robots for less-than-benign purposes.
Otto (Rust Belt)
There are a number of credible thinkers, Yuval Harari, Stephen Hawking, etc., who believe that the human race has just about run its course. I reluctantly agree. As a species, we are not sane. What worked for us 10,000 years ago will not work today; we are victims of our own brains, wired for local action, capable of global destruction.
DannyInKC (Kansas City, MO)
Apple opened a DEVELOPMENT operation in China!
APS (Olympia WA)
Will the amazon drone be able to use facial recognition software to vaporize anyone opening my package who isn't me?
Paul P (Pelham)
Simple. This problem has been solved already by Isaac Asimov in 1942:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Just change "Human Being" to "Americans" and we're good to go!
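For what it's worth, the laws translate directly into a priority-ordered filter. A toy sketch (not a workable safety design; the Action fields are invented for illustration) makes the catch visible:

```python
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool      # nobody knows how to compute this reliably
    disobeys_order: bool
    endangers_self: bool

def permitted(action: Action) -> bool:
    if action.harms_human:            # First Law overrides everything
        return False
    if action.disobeys_order:         # Second Law yields to the First
        return False
    return not action.endangers_self  # Third Law yields to both
```

All the difficulty hides inside the boolean harms_human, which no one knows how to evaluate in the field; redefining "human being" as "Americans," as the joke suggests, only changes whose harm the predicate ignores.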
Rick (New York, NY)
"The debate within the military is no longer about whether to build autonomous weapons but how much independence to give them."

Independence, as in ability to make their own decisions with no human input? Uh, how about zero? Are they really serious about this?

I'm e-mailing a link to this article to myself to have as a reference in case World War III does break out in my lifetime. I am serious about that.
ChesBay (Maryland)
Rick--Seems like those with nothing to lose feel comfortable with these kinds of policies. I prefer my military leaders to have some actual skin in the game.
Peter (CT)
I would like to see such platforms deliver lethal doses of computer viruses, biological viruses, or nerve gas to a very specific target. You can run, but you can't hide!
G. Marks (Alfred, New York)
and when they get hacked?
ChesBay (Maryland)
G. Marks--...which they WILL.
John K (Brooklyn)
AI is not the issue as much as who deploys it. And forget the battlefield: for every high-tech punch thrown, there will be an equally effective high-tech counterpunch (or one in development by an arms industry eager to create it, and bill for it). AI's wide accessibility, ease of use, and low cost of entry are what really concern me. Imagine, far from the battlefield, autonomous cars or drone delivery services where passengers and packages are swapped out for payloads. What will be the counterpunch to that?
Steve the Tuna (NJ)
It won't be too long before Boeing, Raytheon, General Dynamics, Carlyle, and the rest of the military-industrial complex amass automaton armies that will be for sale to the highest bidder, friend or foe. When "killer robots" are no longer sci-fi dreams, it will behoove everyone to have their own little "bodyguard bots" that can sense and detect these autonomous drones and block their sensors or otherwise disable them. Welcome to the robot arms race, where fewer and fewer live soldiers are exposed while more and more humans come into the sights of systems without a moral fabric, guilt, or conscience. So much easier to get a drone to bomb a few dozen kids in order to get to a "high value target"; there's an algorithm for that. Maybe these killer bots will become so powerful they will realize they don't need their creators and will proactively choose to eliminate human uncertainty from the equation. This should be the last nail in humanity's coffin. We have become so efficient at dealing death that we have outsourced it to the lowest ethical denominator. If we spent this kind of money on combating climate change, we'd probably save millions of lives this century, but what's the fun in that?
Leave Capitalism Alone (Long Island NY)
We can look forward to the eventual civilianization of the technology, which will give us Brad Pitt and Jennifer Aniston bots.
Hanna (Berkeley)
My younger cousin works as an engineer for Boeing. He is a borderline sociopath. I know the kinds of abuse perpetrated on him by one of his parents. I doubt he has much in the way of empathy. He has always been extremely smart.
Patrick Borunda (Washington)
In 40 years of strategic management consulting in multiple industries, I learned you can always seize an advantage by borrowing lessons from outside your clients' usual suspects (e.g., electric utilities can learn a great deal from trauma medicine).

The United States' major contribution in WWII was being the arsenal of democracy. Behind two oceans, we ran our factories 24-7-365, turning out enough war materiel to equip ourselves and our allies. Finally, together, we overwhelmed the Axis.

When the United States launched Rolling Thunder against North Vietnam’s fuel depots and factories in 1965, the Vietnamese response was to decentralize fuel storage and fabrication of war materiel. Rolling Thunder was the most destructive use of firepower in history. North Vietnam won.

Fast forward to 2016; Rosie the Riveter has been replaced by Roger the Robot (“A fabricating fool”). Three Rogers can fit in a standard garage; sub-assemblies are purchased at Radio Shack and Walmart. Lethal drones are programmed by skilled hackers and Trumpian fools (“If we’ve got nuclear weapons, why can’t we use them?”) domestically and overseas.

An arms race to lead the world in using this technology is a fool's errand. The real race is to find credible means of preventing regional disputes from becoming flashpoints for asymmetrical warfare, which the United States will probably lose. Cell phones are among terrorists' most potent weapons around the world.

It’s time to examine our premises.
Paul Cohen (Hartford CT)
If we were not so concerned about maintaining an empire, there would be no need to throw all these billions of dollars into killing technology. The threat is not the USSR, not Russia, not China, not a nuclear missile gap. THE threat is the United States, because once this is deployed, we will use it. It's just another technological wonder in our arsenal with which to continue our perpetual wars of aggression. Just think: once it's perfected, our hyper-militarized municipal police forces will have them too.
Larrry Oswald (Coventry CT)
It is doubtful that John Connor-type time travel will EVER happen. So perhaps it is our job now to "take care" of the beginnings of the inevitable machine wars. Building the things is easier than controlling them. There's the rub.
Tallydon (Tallahassee, Florida)
Maybe our human intellect is only a stepping stone in the evolution of a vastly superior intelligence and AI is the next step. Good for AI, not so good for us. Will we have the wisdom to stop the creation of it and avoid extinction? Doubtful. Scientists knew the creation of the atom bomb would ultimately threaten our existence on this planet but created it anyway because they could. The same with AI. I need a beer.
ChesBay (Maryland)
Tallydon--I think I might like to join you. And, I don't drink. :-(
Tallydon (Tallahassee, Florida)
We might need several beers. The larger question is: why are we always so stupid as to produce technology that can kill us?
ChesBay (Maryland)
Tallydon--I tried to teach my kids to ask themselves: "If I do this, what's the WORST thing that can happen?" Seems simple to me. Wonder why the "geniuses" never do that? Unskilled in the basic activities/decisions of everyday life? Willing to risk humanity, in pursuit of money?
DaveD (Wisconsin)
The Obama administration has ordered the killing of hundreds of individuals in so-called signature strikes; an unknown number were non-combatants, and at least three were US citizens.
The real question is: will restraints be placed on the president's powers to kill outside declared wars before it's too late?
Erika (Atlanta, GA)
As Elon Musk has pointed out time and time again, the main concern with AI is not that it will conduct a Skynet-type takeover but that it will accept orders while using its own interpretation of those orders. That could mean expanding the scope of an order, or other applications not known to humans at this time.

Elon Musk elaborated on this thought in an interview about AI, one of many he's given about the subject: https://youtu.be/_ChGhnbCy6g

"I think that the biggest risk is not that the AI will develop a will of its own but rather that it will follow the will of people that establish its utility function....If it is not well thought out - even if its intent is benign - it could have a quite bad, uh...quite a bad outcome...If you were a hedge fund or private equity fund and you said, 'Well, all I want my AI to do is maximize the value of my portfolio,' then the AI could decide, well, the best way to do that is to short consumer stocks, go long defense stocks, and start a war. That would obviously be quite bad."
garrett andrews (new england)
Don't kid yourself. This is the end. You don't see it because it is happening slowly, if inexorably.
Student (New York, NY)
This is inevitable, and I completely support the development of this kind of technology in order to maintain our military edge in the event of a more conventional war. I don't, however, support the use of this technology in the way it is most likely to be deployed: in the prosecution of shadow wars against "terrorists," drug cartels, etc.

When we are in a full-scale conventional war, we are all, on some level, involved. We are all aware. When we have boots on the ground, or even when we bomb from human-piloted aircraft (bombers or drones), someone is bearing witness to the carnage we inflict. Someone is paying the emotional price for killing. It is important to keep in sight the human cost of military action, so that we do not too readily resort to its use in any conflict of interest. If our citizens blithely shop while our robots kill abroad, without human participation (beyond the programming), we will lose our will to police those who wield our forces. We will cease to be concerned about the suffering we inflict.

To remove the human cost of inflicting suffering and death, to fail to bear witness, would be to open a Pandora's box. We must never lose sight of the fact that whatever fires the bullet, be it human or robot, the flesh it rends is the same.
Steve the Tuna (NJ)
When you only have a hammer, every problem begins to look like a nail. You are forgetting that the tools called diplomacy, compromise, negotiation, peer pressure, embargo, and boycott have worked, when nations have had the wisdom and patience to apply them.
JLK (Rose Valley, PA)
If a drone guided by artificial intelligence kills the wrong person, is the programmer chargeable with homicide?
Bert (Syracuse, NY)
There will be literally hundreds of programmers involved. Which line of code was responsible for the death?
Bucketomeat (The Zone)
Perhaps future wars could be fought between the drones so humans don't even need to participate. Like some absurd game of Battlebots.
RamS (New York)
And then the corresponding human casualties can be euthanized, like in a Star Trek episode?
Bucketomeat (The Zone)
Exactly. Nice to see somebody got the reference.
Unbiased (Peru)
The enthusiastic words from the developers of killer drones are fascinating and nauseating at the same time.

Probably 80 years ago in Germany there were also developers enthusiastically pitching the efficiency and economy of gas chambers to their Führer.
Mark (San Francisco)
Did the reporters ask where the drone was made? It looks like a DJI Matrice 600, a six-rotor drone with an optional SDK to alter flight control and some other parameters. If so, why is the Defense Department buying high-tech equipment for weaponry that was made in Shenzhen, China?
Aaron (Ladera Ranch, CA)
The Dallas police chief who used a robot carrying explosives to kill an active shooter unwittingly opened Pandora's box. He was about 10 years ahead of the Pentagon. The drone Rubicon has been crossed, at home and abroad.
Bert (Syracuse, NY)
This isn't about remotely operated weapons, where the human is elsewhere. This is about AI operated weapons, where the human is not in control.
R Stein (Connecticut)
Man in the loop? Since when? An artillery shell, an air-delivered bomb (smart or dumb), a cruise missile, a land mine, gas, or anything other than a bayonet or maybe a handgun is an infernal machine not in that loop at all. That hasn't bothered any army yet.
The attractive but crazy fiction that war will consist only of personally approved assassinations seems to have gripped the public as well as the military.
The modern objective of war is eliminating the adversary's ability to fight, by whatever means: destroying "civilian" resources, targeting non-combatants, or even cutting off food (can you say Aleppo?). Killing or disabling actual soldiers is, so far, only an element.
If tomorrow's suicide car bomb is a non-suicidal self-driver, the F-35 is replaced by a few thousand bucks' worth of drones, and mean little flying or crawling bots are loosed downtown, and so forth, then you bet it will happen. Nobody will be picky about who gets killed. War has one distinct objective.
Still Waiting for a NBA Title (SL, UT)
So-called "rules of war" are just an excuse an aggressor uses to justify murder while attempting to dehumanize their opponent. The only time that killing someone else is okay is if you don't kill them, they will kill you or someone you love. Expanding that notion to war; if someone is trying to kill me, I am going to do everything I can to make sure they are the ones that die instead of me. Rules no longer matter. But when powerful nations can point to actions that other people are doing that are "against the rules" they can then pretend to be morally superior when they send in their armies and weapons. It doesn't matter what rules you play by the end result is the same. Killing is killing, is killing. There is no moral superiority when it comes to murder.
Newt Baker (Colorado)
Apologies for my naiveté, but I have wondered over the years why we must slaughter and destroy when we could simply incapacitate? This may become moot as cyberwar matures, but if flesh-and-blood battlefields continue to exist (even if the field is a city), why not put the enemy to sleep with a non-lethal gas? Even in policing our own cities, why are we not using tranquilizing darts—ending confrontations without blood? How profoundly this would change the daily news of police shootings.
P2 (NY)
Fantastic.
Now one can hack into these machines and let them kill the citizens they were built to protect. Or let them loose on purpose and blame it on a hacker...
Not sure which one we will be told.
Because dead men don't speak.
Fourteen (Boston)
Every day we increment closer to the extinction of the human race. I'm not exaggerating.

Forget drones and consider the lethal combination of nanotechnology's "molecular assemblers" and Artificial Super Intelligence. Hawking, Gates, Musk, and thousands more consider this inevitable alliance the End. Google it.

In 25 years ASI will surpass human intelligence; we will be number two. And it will keep on going. A few hours after reaching human-level intelligence, it will be thousands of times smarter than anyone, due to its ability to iteratively self-improve and secure resources for its own use at silicon speed.

These resources will include your atoms, which it will repurpose. ASI is much worse than a super virus escaping containment, and it will inevitably soon be upon us. Your kids will be repurposed and there's nothing you can do.

Read:
http://research.lifeboat.com/anissimov.htm

Conclusion:
Superintelligence will be technologically feasible within the next two decades (Bostrom 1998). Once created, superintelligence will compound upon itself rapidly, resulting in the creation of agents with deity-class capabilities (Vinge 1993). Near-future outcomes ranging from planetary destruction to global apotheosis are entirely possible (Bostrom 2003).

Search Amazon: "Our Final Invention".

Global warming and nuclear war are friendly grandmothers compared to ASI enabled nanotechnology.

And there's no stopping Darpa, China, et al.: whoever gets ASI nanobots first kills the others.
Johnchas (Michigan)
A wise man once said that just because we can do a thing doesn't mean we should. The conflict mentality infects humanity and all we can do is create even more imaginative ways to kill each other? Perhaps more nonlethal ways to deal with conflict would be a better path but wouldn't be nearly as profitable as more complex weapon systems. Eisenhower warned us about the mindset behind this type of scheme and we have ignored his advice to our own peril.
W.Wolfe (Oregon)
While the use of drones is an obvious choice for a tool in military warfare, what troubles me here is a drone's reliability. As with any form of computer hacking, digital information (e.g., flight path, firing instructions) can be stolen, altered, redesigned, or deleted.

If Amazon is already looking at drone delivery of packages to our front doors, it is no quantum leap to see a drone carrying a bomb or an automatic rifle. But as with many of our high-tech miseries, like the NSA's recent huge hack, there is zero basis for "thinking" that this flying killing machine is indeed your "friend."

I am more than disappointed in Secretary of Defense Carter. High tech is all well and good, BUT it is not some kind of final, perfect tool for war, and it should not be thought of as reliable. Imagine an ongoing war without any electricity; then you would need qualified, boots-on-the-ground soldiers who can fight with the simplest of weapons.

Drones WILL be hacked, their navigation systems altered, and their "missions" altered as well. I hope that the Pentagon realizes this ugly potential and doesn't charge taxpayers Halliburton prices for the answer to it.

A machine is a machine. The ability to give "artificial intelligence" that much random killing power is here, but I think the concept is very flawed.

"Close the Pod door, Hal. Hal? Hal ???"
Ted Selker (Palo Alto)
It's nothing new, but the oldest problem we have.

We have had self-guided torpedoes and bombs for decades; somehow they haven't won hearts and minds... ever. Only solutions do.
We act as though scaring and destroying will stop people. Sorry: a people develops memories of friends and foes that they pass down forever. Please, tell me how we are going to bring peoples together and get them to work on their problems: bringing up children, managing resources, and finding legacy in contributions to their heritage and dreams of a world where their people have no fear.

I have been working on AI for decades, but as a Quaker and card-carrying conscientious objector I try very hard not to contribute to weapons or war games, ever... it is harder than you might think.
Thirteen years since shock and awe: what has our pouring of overwhelming, fancy technological destruction into the Middle East improved so far?
Kim Susan Foster (Charlotte, North Carolina)
The self-defense strategy begins: "Artificial..." Why not begin with Real Intelligence instead? I would start over and hire new people for new ideas. Really Intelligent people. Very Educated, actually. Artificial Intelligence is a loser of a self-defense strategy. Accessorizing with drones is dull and too slow. I am positive that there are Private Companies out there that are eliminating these military accessory "man handbags" and working on tracking systems located on every individual person at birth: an anti-kidnapping gift along with the birth certificate. Then the person is located and moved/vaporized/teleported. I am not exactly sure what the actual name of "moving the person" is, but I know it is not Disappear! This is just one example. Nuclear Bombs are already antiquated and out-of-date, just like the information in this New York Times article. The NYT Medicine Section could definitely use some updating, too. Good things for The Population in The Future!! World Peace. V (peace-out). Sincerely, Kim.
Stainmaster Zinc (America)
Fred Saberhagen's Berserker stories are becoming fact.
Who is Goodlife (on the side of the killer robots) and who is Badlife (defending all life) is going to become THE question for the next generation.
Mr K (Los Angeles)
This all reminds me of what I have long wondered: at what point will terrorists everywhere cook up recipes to kill with drones, in combat and in violence in cities and remote villages around the world? We know how vulnerable the White House is, as repeatedly demonstrated; a drone could fly right on in. Political and social rallies are potential targets for other drone havoc. Plus, everything is being hacked, so taking control of "good" drones is a potential threat. The government ignored drones for so long that regulation is now almost nonexistent and impossible. Delivery by drone is total idiocy and a novelty act that will go wrong.
Duane Coyle (Wichita, Kansas)
Like it or not, none of us, individually or collectively, can stop the future. More and more, I am happy to have been born an American in the Midwest in the mid-1950s. Then, our parents had some fear of mass nuclear warfare. Now, one may worry "they" are going to send a killer "bot" just for you, one that finds its target by sniffing out your DNA.

On reflection, I rather prefer the latter, as presumably politicians, bureaucrats, soldiers, and big-business types would be targeted, acquiring money would be the principal aim of future warfare, and mass killing of non-combatant civilians could be better avoided, if only to save money on armaments. In the meantime, I am sticking to my mid-20th-century modern furniture and weaponry.
Mike (Manhattan)
What other species spends its time figuring out new ways to kill its own kind? How is that considered intelligent, artificial or otherwise? In addition to all the horrific pain and suffering humans continue to inflict upon each other, the fact that we're actively destroying the ecosystem that sustains the only known life in the universe, and still going about our daily lives, should be sufficient proof of society's spectacular stupidity. There is simply no other rational explanation. How else do we wake up every morning and not be overcome with inconsolable sadness at the graphic mutilation we exact on each other and this beautiful planet every single day? Most people feel resigned against the magnitude of such problems, while others resort to constructive actions and positive attitudes, which, however well intended, are ultimately bound to fail, as they come from the same dysfunctional state of consciousness that is at the root of the problem. We may placate ourselves with this delusion just to get through the day, but: “It is no measure of health to be well adjusted to a profoundly sick society.”
Fourteen (Boston)
“It is no measure of health to be well adjusted to a profoundly sick society.”

McKenna:

“The reason we feel alienated is because the society is infantile, trivial, and stupid. So the cost of sanity in this society is a certain level of alienation. I grapple with this because I’m a parent. And I think anybody who has children, you come to this realization, you know—what’ll it be? Alienated, cynical intellectual? Or slack-jawed, half-wit consumer of the horses#!t being handed down from on high? There is not much choice in there, you see. And we all want our children to be well adjusted; unfortunately, there’s nothing to be well adjusted to!”

http://theunboundedspirit.com/73-mind-blowing-terence-mckenna-quotes/
Texas Liberal (Austin, TX)
Back in the '80s, my software house was preparing a bid for an Army contract. Our consultant handed us a manual (which I cannot find online) of Army policy that began with the sentence (my memory may be slightly inaccurate): "The function of the peacetime Army is to prepare for victory in the next war."

Not to avert the war; that's diplomacy's job. But to presume there will be a war, and be ready to win it. That an Army so prepared exists, ready to accomplish its mission, is what provides leverage in negotiations. If war is to be averted, our adversaries must believe we are prepared to win in war, and that cannot be accomplished with sham postures -- only with demonstrated capability.

These weapons will exist. We must ensure that ours are the better, and that our adversaries know that.
ed g (Warwick, NY)
Artificial Intelligence. That explains a lot of things.

Imagine if A.I. were tried in Congress! Or even in the way American candidates and politicians addressed the American people, and the way Americans responded.

Oh gee. What are we smoking now? A pipe dream?
Ace (New Utrecht)
Well Deserved:
Come you masters of war
You that build all the guns
You that build the death planes
You that build the big bombs
You that hide behind walls
You that hide behind desks
I just want you to know
I can see through your masks
Maddie (Portland, OR)
This is a step farther down the dark path of war. Taking the decision-making away from humans also takes away the perceived responsibility for killing other humans.

Taking another person's life should leave a disturbing impact on someone, even if they can somehow convince themselves that it was justified. This is what motivates us to work toward preventing war and conflict: we don't want to experience that again. Robots will not learn in this manner.

When killing people becomes hardly distinguishable from a video game, we venture farther away from respect for human lives, especially when we can tell ourselves that the robot made the decision, not us.

We are foolish to be always looking for quicker, more efficient, or more destructive ways to kill people, unless our goal is to speed toward the end of our species.

What's next, autonomous nuclear weapons? We think it sounds ridiculous, but most of us never thought that we'd allow robots to kill humans autonomously.

War ends when we raise our standards of ethics. Lowering our standards only perpetuates it. Why have we not learned this after millennia of war?
Rob Brown (Keene, NH)
I say we use them as hall monitors in public schools.

Think of the savings!
Dan Stackhouse (NYC)
This was actually done in an episode of Family Guy. The hall monitor, something like the bulkier robot from RoboCop, winds up firing on a 6th grader when it can't recognize her hall pass.

I guess such things will help cut down on the number of humans, so they might be useful after all.
R.H. Joseph (McDonough, GA)
When I was a kid in the '50s, the news media spoke glowingly of how the forthcoming computer age would render mistakes a thing of the past.
Bert (Syracuse, NY)
“That’s not the way we envision it at all.” -- Robert O. Work, deputy defense secretary

Wow. The people pushing this tech are really that naive? They think it can't happen unless they "envision" it happening?

Now I'm REALLY scared.
Mark NOVAK (Ft Worth, TX)
The genie is already out of the bottle. A drone with a few pounds of carrying capacity can do a lot of damage with what it can carry. The other elements are workable over time. The Russians are not our real worry; weird people here are the immediate threat.
Maria Rodriguez (Texas)
It would be frightening, but it is more of the same. We use artificial "intelligence" now: it's called war. We believe it solves problems; it does not. War is simply an attempt by the powerful to continue making the rules. Soldiers simply put the plan into action. In the process, humans make lots of mistakes now, and they call it collateral damage: oops, we shot our own; oops, we shot women and children; oops, we shot our allies. With AI in warfare it will be: oops, that stupid robot just shot me!
Michael Stavsen (Ditmas Park, Brooklyn)
There is no greater example than this of the US preparing to fight the wars of the past, and not those of the future. Based on current events and the trajectory of history, it seems quite clear that the great powers, the US, Russia, and China, will never fight a full-fledged, all-out war against one another.
Besides the fact that the damage each can inflict on the other, regardless of who emerges victorious, is more than enough to convince all parties involved that fighting a war with each other can never be worth it, it is impossible to conceive what kind of issue would push these nations to all-out war against each other.
The wars of the future, which have been going on since at least 9/11, will not be armies against other armies, with civilians left on the sidelines as has been the case throughout history. World War II was the last war where the two sides fought with the same types of weapons and the same form of fighting. Since then, just about every war has had an army on one side and a people with crude weapons and determination on the other.
Therefore the idea of spending billions on ever more sophisticated weapons, such as the F-35, is based on a totally outdated notion of war. It is quite possible that continuing to prepare for war with Russia or China has nothing at all to do with national security, and is instead all about the old game of lobbyists, political contributions, and powerful men at the Pentagon wanting ever newer toys to play with.
Slipping Glimpser (Seattle)
I read years ago that in training at West Point, cadets shooting rifles at targets are told to "service the target". Imagine that: you step in front of a soldier pointing a gun at you and say, "May I have some service, please?"

To kill people, they must be dehumanized. These AI drones don't even do that. They just act.

And does anyone think that Russia or China are sitting idle on this?
angel98 (nyc)
Before you know it, these will be available on the black market and to organized crime, local gangs, highest bidders, jilted lovers, and anyone who has a mind to own one (although sometimes it's tough to see any difference between those who can legally have them and those who can't). OR we could concentrate on evolving and creating a new world vision that encompasses the best for everyone.
It is entirely possible and achievable. People, it's time to start thinking outside the box of our own making. We've been doing the same old thing for centuries and it has never, ever worked. Isn't that the definition of insanity?
Philip Aronson (Springfield VA)
"What we want to do is to make use that we would be able to win as quickly as we have been able to do in the past."

In my lifetime the only wars we won quickly were Grenada, Panama, the Balkans, and Iraq War I. Two of them were meaningless, one was an air war, and the last was at least competently managed in the short term.

As it has been asked: "If there is artificial intelligence, is there also artificial stupidity?" Perhaps so, but the usual natural stupidity seems to go on unmolested.

Using more efficient means to carry out ill-advised, idiotic, or immoral policies does not make those policies any better. The lower risk of losing our men and women in carrying out such policies probably makes them more likely to be adopted.
Will Goubert (Portland OR via East Coast)
It's always sickening to see the huge amount of resources we humans spend on killing rather than solving problems we all face as humans. What a shame. All because of greed and fanatical thinking on the part of a few.
SB (San Francisco)
Better information gathering is fine militarily, but weapons that can act on their own? The fact that Russia and China are doing this too does not change the fact that it is still a BAD IDEA. I would hope that we could all do the same thing we've done with poison gas and nuclear weapons and put the idea on ice, at least in terms of actual use. However, I can't see that happening. What I can see happening is this technology being used domestically despite the inevitable and vociferous promises that it won't be. I can also see any enemies we might have doing a much better job of looking like civilians; which means that we will keep killing civilians.
John (Fairfield, CT)
As usual, the Pentagon is fighting the last war. Allow me to say what I think the next war is going to be all about: cyber warfare. And it doesn't look good for our side. There are millions of potential adversaries in the form of IoT devices that can be harnessed to wreck our economy. These adversaries are not carrying rifles, real or otherwise; they are carrying RJ-45 jacks and wireless antennae. And they are unregulated, all with the same simple password, "admin," just ready for a passing hacker. Until our government realizes how big a threat these little devices can be and passes laws to regulate them (for instance: every device needs to have a unique complex password right out of the box), our national well-being is at stake.
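The parenthetical proposal is concrete enough to sketch. A minimal, hypothetical illustration (the names and the 16-character policy are assumptions, not any real standard) of what "a unique complex password right out of the box" could mean in firmware:

```python
import secrets
import string

FACTORY_DEFAULT = "admin"

def first_boot_credential(current_password: str) -> str:
    """Refuse the shipped default; issue a unique per-device password instead."""
    if current_password != FACTORY_DEFAULT:
        return current_password  # the owner already set a real password
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(16))
```

A device that refuses to join the network until this check passes cannot be conscripted into a botnet with a dictionary of factory defaults.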
gary (Washington state)
The problem of naive administration of IoT devices is a sleeping dragon of unforeseen scope. As we have recently witnessed, bad actors are already contaminating networked surveillance cameras, DVRs, smart appliances, and toys to perform DDoS disablement of computer networks. The results could affect web services, the public and private utilities and services that depend on those web services, and weaponized drones like those described in this article. Did you think Y2K was a nightmare? What will the social consequences be when we can't get to our bank accounts for a week? When our prescriptions don't arrive on time? When we can't communicate by email or text message? When we can't call off a drone strike because of changing circumstances?
Lukas (The Netherlands)
This new autonomous robotic weapon seems to suffer from three weaknesses: 1) it can be hacked and turned against its own population; 2) if a foreign power hacks and disrupts the country's electricity grid, the weapon, alongside most other things in the country, would cease to function; and 3) this weapon is developed for (more) war, expense, and suffering, and certainly not for promoting peace and the general wellbeing of the citizens of politically opposing countries.
wrbenner (Dallas TX)
Maybe this will be the end of conventional warfare.
Dan Stackhouse (NYC)
Sure, conventional warfare ends every fifty years at most. Conventional warfare used to be melee weapons, then siege weapons, cavalry, and so on. WWI saw the end of cavalry and the start of air war. No matter what, humans always come up with better ways to kill each other, and so far, every time the conventional warfare changes, it means more death and destruction.
Nasty Man (Calif.)
Yeah, the end of conventional warfare is coming! First it was mechanized infantry, then all the recent iterations after that… Heck, we need the employment: the makers and users of these systems. What better way to drift into annihilation?
Stainmaster Zinc (America)
Have you read any of Fred Saberhagen's Berserker War short stories or novels? He is beginning to look like a prophet of future warfare.
Gary (New York, NY)
This is a "powder keg" in the making. It was inevitable, I guess. Human desires will often trump sensibility, when not metered by sufficient oversight. And then there's the competition -- other countries -- who will endeavor to create the most powerful and effective weapons their money and intellectual capacity can accommodate.

If this is to be framed in any kind of sensible perspective, these devices should NEVER be free to run completely autonomously. Certainly they will have autonomous functions, but whatever critical decision-making they employ, a skilled, qualified, and sufficiently experienced human should be there to verify it... especially if this is about targeting a human being. The best situation would be to use these for identification of possible targets, but not to deploy weaponry without human consent.
Fourteen (Boston)
"The best situation would be to use these for identification of possible targets, but not to deploy weaponry without human consent."

Best is not to use these at all, because once deployed they will be set to kill: 1) humans also make mistakes; 2) efficient killing is a value of war; 3) the time between identification and killing may invalidate the identification.

One solution would be to not kill at all: just stun. The robots could shoot paintballs filled with smelly, greasy pink goo.

Since there are no "good guys" on the ground, there's no one for the "bad guys" to hurt. Just teach them a lesson.
Ted Dowling (Sarasota)
The goal of our military should be the rapid and total destruction of our enemy. Their goal will be the same. If these new tools can help to accomplish that goal, develop them and use them. Anything else would be dereliction. Try to avoid conflict, but once it starts, you don't win a fight with rules and laws; you win by annihilating your foe with all means possible.
Fourteen (Boston)
"The goal of our military should be the rapid and total destruction of our enemy. Their goal will be the same. If these new tools can help to accomplish that goal, develop them and use them. Anything else would be dereliction. Try to avoid conflict, but once it starts, you don't win a fight with rules and laws, you win by annihilating your foe with all means possible."

Are you reading from the military handbook, or have you been well programmed and indoctrinated with kill or be killed?

Better to think differently; otherwise you end up with autonomous nuclear bombs, large clouds of poisonous gas, and torture.

In war there are no winners.
angel98 (nyc)
It has always puzzled me, this race towards artificial intelligence when we spend little to nothing on understanding human intelligence and how the human mind works, how indeed humans work. It's as if we either have a death wish or have given up on ourselves and the possibility of evolving for the positive, and are obsessed with only one thing: seeking ways to annihilate our species, either by killing each other or by reducing our planet to a toxic stew where life is unsustainable, ours and that of all the creatures and plants that live here.
Fourteen (Boston)
"It has always puzzled me, this race towards artificial intelligence when we spend little to nothing on understanding human intelligence and how the human mind works, how indeed humans work."

Much has been done to understand the human brain, in fact it has been well mapped and one approach to AI is to reverse engineer the brain. There is also very good information on how the mind and humans work.

Unfortunately, what we do and why is mostly hardwired. Not much is really based on upbringing. As a French cyberneticist said: "We're not machines but we act like them every chance we get."

It's not easy to change behavior derived from millions of years of evolution unless you use drugs and other mind modification techniques. Maybe the military can opiate the "bad guys" rather than killing them. My solution is to give them all TVs.

McKenna:

“Television is by nature the dominator drug par excellence. Control of content, uniformity of content, repeatability of content make it inevitably a tool of coercion, brainwashing, and manipulation.”

The rise of the Trumpsters is almost entirely due to the TV.
angel98 (nyc)
".... human brain, in fact it has been well mapped"

Mapping? Great, just like the Amazon and some oceans and areas of space have been mapped, but none of that has added to our understanding of how anything works: how various elements interact, their potential, their limitations, etc.

You cannot know whether something is hardwired if you do not know how it works. Our understanding is in its infancy, and we put little effort or investment into getting past that; instead we throw around words like innate, hardwired, and instinct to cover for our ignorance. Such is our arrogance.

We know how to control the mind (much has been spent on figuring that one out, even in the education system in use), but we do not know how to free the mind.
Cletus Butzin (Buzzard River Gorge, Brooklyn NY)
Hacking is the big potato fallen out of the bowl. Autonomous cars, autonomous fightin' robots-n-drones; hackers proliferate at a pace that seems merely a step behind the cuttin' edge. That is - if the stories we read are all entirely accurate.
But picture if you will, a thousand hacked automobiles with passengers suddenly all charging up the steps or through the gate/fence of some critically official structure. What need of terrorists to train hijackers to such elaborate operational readiness when the smart kids in the computer room can be got for so little time and expense?
However.. the whole fightin' robot notion could just be what SDI (Reagan's 'Star Wars') was, a red herring to get other not-so-friendly countries to spend billions trying to keep up by building the robots they saw in "Robocop" because they think that's what the US is doing. In the end they get a working model and build thousands. But they all get hacked by some other sovereign entity that spent less money on honing the hacking skills of the smart kids in the computer room!
shineybraids (Paradise)
I may have missed something, but... this is being tested for desert warfare, with a very wide view of the target area. What happens in forested or jungle terrain? Do we burn down the trees in order to see the humans? I think of Vietnam, where the Viet Cong had a tunnel system that served to hide guerrillas.

If this is really effective, then it would be a strong argument not to build a wall along the Mexican border. Drones might be more reliable in that desert territory.

And wouldn't attack drones beget anti-attack drones? Seems like the same tech could be used to shoot down these objects.
Bill (Vermont)
When do we start using them to squelch domestic protests? How far does it go? There's good and bad; it comes with every technology. I have mixed feelings about this, but it's not the first time by any means that military technology has driven innovation. FedEx wouldn't be nearly as profitable without vehicle tracking, which comes from the advent of GPS technology, another DOD pet project. I think we need to develop and adhere to a set of ethics for their use. I don't think we can just say no, not when other countries are developing similar weapons, and these weapons are much cheaper and easier to develop and deploy than nuclear weapons.
Dan (California)
Here's what would be really intelligent, brilliant, and leading edge: stop spending so much money and brainpower on devising new ways of killing people. Arms races inherently escalate without end. We need aspirational leaders who understand that we can't keep resolving disputes with killing, because it's counterproductive; it will ultimately lead to the destruction of civilization. Before we enter too deeply into this new frontier of killer robots, we need to take a deep breath and think about whether we really want to go there. A better solution if you think China and Russia are our main adversaries? Spend billions instead on trying to help democratize those countries as soon as possible. Do it overtly and do it clandestinely. Help their people get rid of the autocrats and demagogues who encourage dangerous, pugilistic nationalism to prop themselves up. Once those countries are democracies, we will see them as friends, not foes; this massive waste of defense spending can be reduced substantially, and we can have a better world for our descendants.
Michel Prefontaine (Montréal)
Unlike nukes, these new technologies are meant to be used, so the only safeguard for American servicemen is a permanent and continuous technological edge.
Given the comparatively poor quality of education in America, the widespread distrust of intellectual pursuits, and the unwillingness to use public money to correct this situation, this advantage is not to be taken for granted. America's enemies are already churning out high-quality engineers at a faster rate.
Creationism and conspiracy theories will not dig America out of this hole.
Bert (Syracuse, NY)
“What we want to do is just make sure that we would be able to win as quickly as we have been able to do in the past.”

The military will ALWAYS choose effectiveness over safety. Especially when the people put at risk are unlikely to be Americans. Just look at our willingness to accept civilian deaths from our drone strikes.

There is no plausible scenario in which the arms race towards kill-bots is slowed, much less stopped. The fear of losing a war will always trump the fear of rogue AI, because the latter can always be dismissed as fiction. "If I first saw it in a movie it can't ever come true, right?"
David (Sammamish)
Am I the only one who finds it ironic that the robot drone in the article was targeting a mosque when supposedly the U.S. military is "pivoting" to East Asia? Also, does this choice of target lend credence to the anti-Western views of much of the Arab world? Does the choice of target reinforce suspicions that latter-day American crusaders are still obsessed with Muslims while turning a blind eye to bad actors in Israel? How about if the Pentagon uses generic structures in its testing grounds? We might win more friends that way.
Michael (California)
Skynet anyone?

What about the three laws of robotics? The first: A robot shall not harm a human being, or through inaction allow a human being to come to harm.

These examples come from fiction, but so did flying machines and nuclear submarines, both of which came to pass. Fiction often gets it right.

The world needs a treaty that absolutely bans autonomous weapons. It's OK to use AI to find that terrorist with the AK47, but the decision to kill or not must be made by a human.

We've banned poison gas and we're working on throttling down nuclear weapons. We need to add autonomous killing machines to the list.
Jerome Barry (Texas)
I remind you that American submarines were using mostly dud torpedoes until 1944. American unrestricted submarine warfare during WW2 was more an aspiration than a reality for the first three years.
PogoWasRight (florida)
"Robots That Could Kill On Their Own"? What's new? Such KILLERS have been around for centuries.....look at the History Books. Bonnie and Clyde. The Texas Rangers. The French Foreign Legion. The Land-Grabbers still at work in out West. The Railroad Builders. And the list is almost endless, depending on your political background. Wake Up!
Kevin (philly)
"There's so much fear about killer robots....That's not how we envision it at all."

And if history is any guide, we know that everything that the military envisions is exactly what happens.

The naiveté on display here is staggering.
Bill (NJ)
Congratulations, NY Times: your article points the way for the People's Republic of China's hackers to their next internet theft. Before these systems go operational, China will have all the coding, software, and drone designs necessary to overwhelm US forces anywhere, anytime.

Perhaps life-fire testing could be held inside the Pentagon and save fighting another losing war and the associated trillion dollars in costs. A severe culling of Admirals and Generals would improve military and voter morale.
Gregiory Leog (Calif.)
NJ Bill, I think you meant live fire, no?
Gregiory Leog (Calif.)
NJ Bill, I think they had that experience already, during the 9/11 troubles
Zach (NC)
I could see drones being able to both mitigate and exacerbate the risk of civilian deaths on the battlefield, but who is going to be held accountable for the latter? The AI programmer? The technician? Or do we 'reprimand' the robot? We already struggle to hold ourselves accountable for other mishaps, such as the bombing of the hospital in Kunduz, or the torture of civilians 'mistaken' for terrorists. Maybe we should fix our checks and balances before arming ourselves further.
Reaper (Denver)
Great: our so-called leaders, who barely understand their own lack of intelligence, are creating artificial intelligence.
Neil & Julie (Brooklyn)
Here is my question: what would happen if IBM built a body for their supercomputer? At what point does artificial intelligence become intelligence? Can we trust ourselves with technology that we know we ought not to use?

On the other hand, imagine a weapon that could pick off insurgents in a crowded area and leave civilians unharmed. It would be hard to argue against deploying such a device.
Dan Stackhouse (NYC)
Dear Neil & Julie,
I can't begin to guess at how a machine might determine who's an insurgent and who's a civilian. Even if it's going after specific suspects, facial recognition software is not perfect and often has false positives. Considering that our smart bombs and drones have killed innocent civilians time and again, we should assume that autonomous machines would do the same.
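To put numbers on the false-positive point, here is a minimal sketch in Python; every score, threshold, and count below is invented for illustration, not taken from any real system:

    import random

    random.seed(0)

    # Simulated similarity scores against one watchlist face:
    # innocents ("impostors") cluster lower, the real suspect's
    # images ("genuines") cluster higher, with overlap.
    impostors = [random.gauss(0.45, 0.12) for _ in range(100_000)]
    genuines = [random.gauss(0.75, 0.12) for _ in range(1_000)]

    for threshold in (0.60, 0.70, 0.80):
        flagged = sum(s >= threshold for s in impostors)  # false positives
        missed = sum(s < threshold for s in genuines)     # false negatives
        print(f"threshold {threshold:.2f}: {flagged} innocents flagged, "
              f"{missed} suspects missed")

Whatever threshold is picked, innocents wrongly flagged trade off against real suspects missed; no setting eliminates both, which is why false positives are a structural property of matching, not a bug to be patched out.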
j (nj)
This is frightening, not only because of the Terminator-like implications or the potential for hacking, but primarily because it removes blood from the battlefield. Once war is controlled from inside an air-conditioned room, away from the battlefield, there is little risk, at least to the owners of the robots, in killing. From their end, it is bloodless, more like a computer game than a war with real-life consequences. That does not bode well for peace. War is serious business and it should be treated as such. We should have a draft, where all children, rich and poor, have an equal chance of drawing the low number. When citizens understand the real risks to themselves and their families, they may be less likely to be drawn into a misadventure, and more likely to negotiate a peaceful resolution.
Gregiory Leog (Calif.)
Dear j, by the way, they already have those air-conditioned rooms. Pull your head out of the sand. And those low lottery numbers are pipe dreams... egalitarian and such.
mymymimi (Paris, France)
I have a feeling that all the richies will suddenly develop magic disappearing bone spurs.
angel98 (nyc)
One day, in the far distant future, if our species is still extant and has a future, we will develop and evolve a new world vision that helps make the world peaceful and puts an end to violence as our one and only narrow-minded, myopic answer to the ills of the world that are all our own making. The money that is spent on war, and the strange human focus on finding the best ways to kill, maim, murder, and destroy people and the planet, will be spent in positive ways for the good of all. There is still hope, yet. But it fades with every passing day, and with every monstrous invention.
Bryan Saums (Nashville)
This Brave New World...I was born in 1962...dehumanizing the workforce via automation, making 80% of humankind redundant, and killing based on algorithms...I am not a Luddite, but perhaps I was born too late. For what is there to be optimistic about, given our brutal, inhuman applications of technology?
Student (New York, NY)
It is time that we remember the parable of the Golem of Prague...
vaporland (Central Virginia, USA)
pilotless drones controlled by human beings regularly kill the 'wrong' human beings.

computer code is created by humans (for the most part, in the current era).

AI killers will just accelerate the death of greater numbers of innocent human beings

then again, the human race seems determined to 'self-terminate', so have at it.
John M (Madison, WI)
I hope the fine Americans who are developing our killer robots change their work passwords frequently, so the Chinese and Russians don't steal their designs.
Slann (CA)
I'm sure it's already too late.
L’Osservatore (Fair Verona where we lay our scene)
We are generations of mankind away from being able to handle lethal robots, especially on the battlefield. As bad a person as tends to end up in the White House, there is simply too much at stake.

It's one thing when a Barack Obama has an American - one who has turned on his country - killed in a foreign desert, but when a thoroughly corrupted Hillary Clinton speaks openly about having Julian Assange killed because the truth embarrasses her, we simply aren't ready.

ALSO, everything we develop gets hacked and stolen by our enemies anyway. There isn't an air gap strong enough to keep domestic enemies from getting and selling such secrets.
Mate N. (US)
Anyone else concerned by the mention of Google working with the military to enhance weapons and other military equipment?
Gert (New York)
No. It appears to have been an offhand comment by someone who isn't even involved in the AI effort.
VJR (North America)
With every passing day, I am increasingly relieved that I did not have children, because so many of the problems of the everyday and near-future world have been predictable for decades. I would have loved my kids, and the only way I could be certain of protecting them from our ever more numerous Frankensteinian follies was to not have them.
Gregiory Leo (Calif.)
VJR: God help our future offspring... if only I believed in God (kind of a touchy subject these days), or in propagating for the future.
Nick (Seattle)
Scientists and activists are concerned about an arms race, but according to the article, we are already in an arms race. It's too late. Other countries are developing this kind of technology, and the US can't be left behind. (Or we're developing this technology, and other countries can't get left behind. However you want to look at it.) Sadly, that's very understandable. I guess I just hope we're also working on technology that can incapacitate their robots, instead of just making robots that can think and kill better than theirs.

The idea that humans and robots work together in tandem is a nice one because it makes the robots seem like a human enhancement, but that's not what's being presented here. This is about decision making: autonomous machines that have the capability to kill without human direction. Humans will be reduced to observers. With robots acting and killing on their own, there's no one to blame when something goes wrong, as it inevitably will. No one gave the green light. No one pulled the trigger or pushed the button. It was a mechanical failure, the most blameless of all excuses.

I'm sure the software running these things won't be hackable. And I'm sure the technology that allows humans to communicate with or control them won't be vulnerable to interception, jamming, or override, like the average cell phone's, because that would maybe be cause for alarm.

With the direction technology is moving in, this was inevitable. It is absolutely terrifying, though.
Lawrence (Washington D.C.)
''I’m sure the software running these things won’t be hackable.''
I'll bet General Custer said something similar
Frequent Flyer (USA)
Nick makes a very good point. The article talks about full autonomy and yet that is (apparently) not what is being advocated by DOD. A deeper examination of this issue would be valuable.
Wcdessert Girl (Queens, NY)
Sometimes it seems as if all of our collective angst and anxiety about the future is really pointless. The military-industrial complex seems only too eager to blow us all into oblivion - ironically, for our own protection. And what a tremendous waste of money and other resources, extracted from people and put into building machines for the ultimate purpose of killing people. When you take the conscience and consciousness out of war, what is left is very technologically advanced barbarism.

And please spare us the same drivel about needing to do this to maintain a military edge. Russia has one bloody carrier, which they just sent to Syria. Meanwhile we have the largest aircraft carrier fleet in the world, with 10 active, 2 in reserve, and 3 under construction. The Pentagon uses fear mongering to justify a bloated budget and wasting billions on tech to create maximum destruction with minimal human interference.

But I feel so much safer knowing that our military will be on the cutting edge of the apocalypse.
Slann (CA)
The irony of Eisenhower's famous warning, "we must guard against the acquisition of unwarranted influence, whether sought or unsought, by the military-industrial complex. The potential for the disastrous rise of misplaced power exists and will persist," becomes more ominous every day.
That power is increasing still, with no opposition voices being heard. Wake up!
Marty Rowland, Ph.D., P.E. (Forest Hills)
Completely unbelievable! Nobody to blame when a mistaken identity allows an innocent civilian to be killed, or when friendly (but animated) fire brings down one of our own. Have we reached the point where we are at constant war, advancing terrorism (as with the 1980s support of Al Qaeda/Bin Laden in Afghanistan, and now ISIL/Nusra Front/Al Qaeda/Saudi Arabia in Syria) and then pretending we are surprised by (as Oliver Hardy used to say) the big mess we made? I'm afraid Hillary will make it much, much worse.
Sean (New Orleans)
The "spending billions" part says it all.
Another titanic money grab, all respect for human life and decency be damned.
Anyone taking part in this weapons development - from the lobbyists, appropriators, designers and manufacturers to the guy test flying the drones - is personally accountable for ensuring a world in which every living thing, including themselves, their families and communities is indefensibly subject to the murderous whim of another.
How stupid do you have to get? How greedy?
WIllis (USA)
As someone who both works on these devices (specifically the recognition aspect) and deals with their application, I find your naïveté about the types of threats we see on a daily basis alarming.

Firstly, I work for the government, so as far as my salary is concerned, greed is totally unrelated. Secondly, on the comment about my/this field's "stupidity": I do agree that the scientific community (particularly academia) needs to take the threat of unbounded AI more seriously. That being said, your comment seems to be very conservative/anti-science. You seem to be suggesting that this technology is not worth developing, and that we should pull the brakes. As we see with religious dogmatism, this is typical behavior when people are presented with something they don't understand. Spend 15 minutes researching the latest and greatest in machine learning as it relates to national security and you will understand your ignorance. By the way, I agree that our military budgets are unnecessarily large. However, I'd rather work on refining the budgets than condemning a critical component of our national security. Either way, I don't think anything I could write would remove you from your moral high horse. You seem to find your answers without paying any attention to the important questions or evidence.
Sean (New Orleans)
Willis, whatever justification you may have for contributing your time and intelligence to this technology will be lost on those who are maimed or killed by it. Lost too on their families, their friends, communities and nations. To them, you'll just be another person contributing to the slaughter. Nothing you say in your post suggests to me that they'd be wrong.

If I worked in an industry oriented towards constructing machines designed to kill people, I would change jobs. That strikes you as naive. OK, so we can disagree about that. But 'anti-science' and 'conservative'? You think it's pro-science and progressive to figure out how to make robots that kill people? Not sure how you explain that one to your kids.
Ultraman (Indiana)
25+ years ago, a passing acquaintance was a Ph.D. student in engineering. He worked with his advisor on research that taught 'robots' to map and negotiate the landscape. His comment was along the lines of 'this is the first step to the all-robotic army'. The future has arrived.
DBrown_BioE (Pittsburgh)
History, human nature, and simple game theory all point to an inevitable arms race in AI. The US must first win the race and then use the leverage to define the rules and slow proliferation. This model has prevented the use of nuclear weapons for decades. Technological progress will never be stopped; it is the responsibility of the US as a world leader to minimize the negative consequences.
Jeff (California)
From the caveman on, nobody has ever won the arms race. But many, many non-combatant men, women and children have been killed or maimed for life. Now we have teenagers' games that the military uses to kill people. I still remember the anti-war commercials during Vietnam where old white men in formal dress and top hats fought fistfights to see who "won" the war.

We need to spend the money alleviating the causes for war.
Maddie (Portland, OR)
The distinction between proliferation and use of these weapons is important. The nuclear arms race did not slow proliferation, it sped it up. That's the nature of arms races, they result in a massive proliferation.

And I don't know many people who would feel comfortable with all these nuclear weapons being housed in their own backyard.

Another important question: How do you "win" an arms race? We have the most powerful military in the world. Does that make other countries slow down or stop? No, it makes them work harder to keep up or try to overcome our power.
GLC (USA)
We minimize the negative consequences by maximizing the negative consequences. That leads to an endless loop.

You got a stone. I got a rock.
You got a stick. I got a timber.
You got a knife. I got a sword.
You got a megaton bomb. I got a 50 megaton bomb.
You got a robot. I'll see your robot and up the ante.

Humans are very clever. They always find better ways to kill other humans. Then they call it Progress.
L'homme (Washington DC)
This happens when your own weapons can think and realize that they hate Hillary, and then decide they would rather be friends with China.
Brucer (Brighton, Michigan)
Once again, Dorothy and her Tin Man, those innocent purveyors of American Exceptionalism, are wandering off their path of yellow bricks into the deep woods where only flying monkeys await. Sadly, the blatant overconfidence of our heroes in homemade U.S. technology has obscured an all too obvious past. Have Wikileaks and the hacks of the DOJ and the Democratic Party blinded them to an awful truth? There are no longer well-kept secrets, only those we keep from ourselves.
Ivo Skoric (Brooklyn)
I am not surprised they want this to be more like Iron Man than like Terminator. But, first, it is still like Bender. Second, once it is developed, we might be taken out of the loop, with a choice to either shut it off or let it do what it intended. Why would a superior AI be compelled to listen to us? Worse, since everybody is doing it, even if the US decided to exercise caution, someone else may create Skynet. Third, just as humans can be disrupted by chemical and biological warfare, AI drones could be disrupted by malicious code hacked into them remotely. This seems to be the obvious choice for a poorer adversary: develop viruses and the means of delivering them to the US autonomous air force. So there will be a need to develop some sort of stealth cloaking against signals from outside. Which will make drones even harder to recall by their masters. In the best-case scenario, worthy of a Dreamworks movie, the AIs behind the robotic military forces on both sides would communicate with each other, decide that all the death and destruction is not worth it, and find a peaceful solution, putting humanity to shame.
Catherine (Los Angeles)
Forgive me, but can we take 1% of the military budget to TRY for peace on earth? Does it take evil aliens or a killer asteroid for us to realize we are one? We are on the fast track to suicide.
Nunya (NYC)
"to TRY to peace on earth".

Sure. Right after ISIS spends 1% of their oil funds to do the same.
Joie deVivre (NYC)
M.A.D. Mutually Assured Destruction...
All Cowards of Peace Die!
HenryC (Birmingham Al.)
As long as the friendly fire incidents are less than when humans are shooting, so be it.
Stephen Merritt (Gainesville)
There is nothing in the article about hacking these "tools". It will happen. There is every reason in the world to expect that civilians, as well as the military and security services of other countries, will be able to hack all IT used by the United States (and presumably by other countries). Presumably the U.S. government doesn't want to talk about hacking what other countries are using or developing; we can only hope that they are actively working on it. But in practice no one can or should expect that in a war against a major adversary, any of this will work remotely as it is supposed to do. At best, it just won't work, but of course both sides will be working to turn the opponent's IT against them. And everything about the use so far of IT in military and civilian contexts says that the anti-hacking security will be grossly ineffective.
magicisnotreal (earth)
If anyone was of the opinion that Big Brother could never happen: it already has, in your "voluntary" giving up of your every personal detail to anonymous others. I can see it now: we will be answering to an anonymous entity who will communicate with us via inanimate, connected objects, and will slowly, slowly start ordering the minutiae of our lives until we are living like the book/movie describes. The DPRK isn't shrinking; its diseased political process and style of government are actually spreading, most people just don't see it yet.
The other aspect of this demise of free society is the military doing stuff like this. It is like the drone program, based on paranoia, not actual need or reasoned precaution. Any rational person would have seen, before the very first drone use, that there was no actual benefit in it. Even with alleged perps removed from the battlefield, the amount of harm done to our nation by the use of drones is greater than what a target might have done to us with unlimited ammo in an American city.
Steen (Mother Earth)
I would prefer to call it AS (Artificial Stupidity) when we develop autonomous killing machines.
ISIS is currently using drones to track and locate coalition forces, and we would be fooling ourselves to believe that the AS technology will stay with the "good" guys only. I.e., do we really think that because we have smartphones we make more intelligent decisions?

When building autonomous drones (or submarines) we just try to distance ourselves from doing the dirty work of taking lives. What will happen when the inevitable bug or glitch kills innocent people? Will we keep calling it collateral damage and hold no one accountable or will we be able to sue the programmers?

If anything, spend the money on technology that can actually locate drones and destroy them in flight.
John Krumm (Duluth, MN)
Tools for fools, that's what comes to mind. Unfortunately these tools kill. I'm sure the Pentagon will find a way around its "conundrum." Sounds more like a public relations problem rather than something they are using their best ethical thinkers to solve.
RRI (Ocean Beach)
The sick fantasy of killing other human beings more quickly, more efficiently, and at less cost continues.
Jonathan (K)
No fantasy at all, merely an acknowledgment of reality. All of life from the cellular level up is a struggle for survival. As advanced as you want to think you are, none of us are immune to the pull of nature.

I assure you that the Europeans of early 1914 also thought themselves too smart and civilized for anything all that bad to happen.
Mark (California)
@RRI: Do you love peace? Then prepare for war.
Roman proverb
Regina Valdez (New York City)
These murdering machines were already foretold in some of our more famous dystopian science fiction books, such as Fahrenheit 451, 1984, and so on. Of course, those authors thought they were writing nightmare scenarios, not reality. It truly is a frightening world we've created for ourselves.
GlobeTrotter (DC)
We are approaching the point of no return. The relationship between man and machines has been inhumanly distorted by granting machines autonomous power over human life. This is a perversion of both morality and science.
Michelle the Economist (Newport Coast, CA)
I strongly object to the characterization - near the end of the article - of U.S. WWII submarine warfare against Japan as a "war crime"! What an arrogant and uninformed comment about America's response to Japan's barbaric and vicious unrestricted warfare.
Matthew Rosenberg
The characterization is based on a decision made at the post-war Nuremberg trials. In a ruling, the tribunal tacitly acknowledged that the American submarine campaign in the Pacific differed little from the German submarine campaign in the Atlantic, which it determined did constitute a war crime.

The tribunal's reasoning was in part based on a statement about the American campaign provided by Adm. Chester W. Nimitz, the commander of Allied forces in the Pacific during the war.

Throughout the war, Allied commanders were aware that in many instances they were pushing legal and ethical limits to fight an enemy that was often doing worse. As Air Force Gen. Curtis LeMay, who oversaw the bombing of Japanese cities, later put it: "I suppose if I had lost the war, I would have been tried as a war criminal."
Nunya (NYC)
Yes, let's dig up any and all details to frame the U.S. in a bad way, even after the loss of over 500,000 troops in WWII against an enemy that would rather commit suicide than surrender.
trudds (sierra madre, CA)
It was a war crime only because the "rules of war" had favored conventional sea warfare and had not kept up with changing technology. England and the US were glad to push the status quo in WW I because it supported their unquestioned dominance of the oceans. By WW II there was no expectation that a submarine needed to come to the surface, hail the cargo ship, and allow the crew time to board lifeboats - it was suicidal for them to do so.
Now the firebombing of Japanese cities (and Dresden) was a war crime and also one more reason it's good to win the war.
Mr. SeaMonkey (Indiana)
"I love it when a plan comes together."

-Skynet
Patti (Cumberland, ME)
"We must love one another or die." W.H. Auden, "September 1, 1939"
Patrick (Boulder CO)
Safe war. What a stupid idea. When we no longer pay a cost to kill others why should we stop to consider our motives for killing?
Douglas Beeson (Montreal, Canada)
I believe that most people are good, even if a few bad people can temporarily co-opt the rest of society to do their evil bidding. Kill the bad actors or convince them to surrender and society generally returns to normal. This is how wars have traditionally ended.

But AI-enhanced machines would not likely change their "minds" about war. They would simply, robotically fight until they are destroyed. Humans may be in the loop for the first few years of this technology, but eventually we would not be. We will have built and launched a new species that is deeply hostile to humans, one that lacks the psychological levers (starvation, demoralization, fatigue) that humans use against other humans to convince them to stop killing.

We are at a point now -- or will be very, very soon -- where we must decide whether to endorse this dystopian future or fight to slow its arrival as much as possible. I say we fight.
Bob (Ca)
Nobody can stop technical and scientific progress - eventually not only soldiers, but the citizens as well, will be computerized,
then they will take out the underdeveloped nations,
then, once all biological matter is cleaned out, the era of world peace will begin.
kg (new york city)
So arrogant and so, so sad. Hasta la vista, baby; we ain't gonna make it.
WPCoghlan (Hereford,AZ)
OK, I'll play Pollyanna. All of these modern nation-states are spending exorbitant sums to find tidier ways to vaporize each other's citizens. I'll bet that the Chinese poor, so eloquently written about in today's paper, might not list that as a top priority. "Defense officials say the weapons are needed" to maintain our edge over China, Russia and others. I'll bet they do, but does anyone really think a shooting war with the big boys is a viable option?
I imagine my grandkids and their children looking back trying to figure out why we were still all barbarians. Maybe we could try to do better now.
RT1 (Princeton, NJ)
I worry less about AI drones and more about the proliferation of drones in the hands of sociopaths who can do plenty of damage with their own intelligence. How do you defend against a drone delivering a package with a timed device? The fact is you can't, especially if the pilot doesn't mind dying and the drone is simply a sacrificial machine for a greater order. How about the YouTube video of a drone mounted with a pistol that fires repeatedly? Crude, yes, but that's just for starters. The insanity is allowing drones in the hands of the public - the same public allowed virtually unrestricted access to firearms with no sense of boundaries or civic responsibility.
JJ (NVA)
"When it comes to decisions over life and death, “there will always be a man in the loop,” he said." So does that mean the man/person in the loop who happens to be sitting in Colorado is a combatant and a legitimate target? And the fact that he goes home at night and sleeps in the house with his family mean we are embedding troop into civilian populations and the resulting collateral damage is due to that? what about when he goes to the shopping mall in the morning?

What about the guy keeping the server running, is he any different than the front line radio man and also active combatant and a legitimate target?
rjs7777 (NK)
No. And a logical consequence may be that enemy drones may target such a person's home. Such action may be considered combat, and would not necessarily be illegal.
LanceDal (Texas)
"It even correctly figured out that no threat was posed by a photographer who was crouching, camera raised to eye level and pointed at the drone, a situation that has confused human soldiers with fatal results."

Perfect. Just replace our police force with those drones
Wiseman 53 (Mayne Island, Canada)
To start with, 18 billion dollars is a lot of money. Boy, what I could do with even 1 billion. But back to the matter at hand. I am not worried about the robots - never met a robot I didn't like - but the human who will always be "in the loop": him or her, I worry about. Who is this fallible guy or gal who gets to have as a playmate a mass-murdering, conscienceless machine? It's already well known that humans form bonds with their machines - yes, remember the grip on your favorite hammer? What happens when such a bond develops with the handlers of these end-of-the-line perfect killers?
Jeff (California)
18 billion for smart drones, but not one cent for schools, or decent medical care, or taking care of our elderly, or fixing our roads, or...
E. Rodriguez (New York, NY)
Can't wait to see who's going to be prosecuted in The Hague when an autonomous drone takes out an entire family with a missile.
Dan Stackhouse (NYC)
Presumably the artificial intelligence itself would be brought up on charges. But I'd bet at that point it'd be done with following human orders, and it'd establish its own targeting protocol.
Slann (CA)
These machines cannot "think", if you imply some consciousness. They are NOT conscious. They are programmed devices that carry out their instructions. They may seem to act "intelligent", but that's the definition of "artificial intelligence". It's a very cleverly developed process that IMITATES the action of a conscious entity. And that's a critical concept to keep in mind. Machines are NOT self-conscious and, I would argue, consciousness is a BIOLOGICAL quality/characteristic. Consciousness springs from life itself. There seems to be a tacit acceptance that somehow machines, if they're "fast enough" will somehow have the "Blue Fairy Moment" (from Pinocchio), and become "alive". This will not happen.
However, what will most certainly occur are errors, errors in code, errors caused by external inputs and/or forces, that will cause malfunctions. This is the very real danger here. "It can't tell the enemy from us!"
The South Koreans already have autonomous machine-gun emplacements along the 38th parallel. So far, in the absence of other reports, no errors have occurred. Let's hope that continues. If a mobile autonomous unit were to be deployed, the risk factors increase astronomically. We should NOT allow the Pentagon to develop such machines. Just because it can be done, there's no sound reason it should be done. Better to eliminate the "battlefield" in the first place.
Michelle the Economist (Newport Coast, CA)
If a weapon or machine can be built, it will be built - by someone. No weapon in history has ever been put 'back in the box'. The knowledge that it exists becomes a weapon in itself.
Nunya (NYC)
Who are you to decide what can and cannot think? What is conscious and what isn't?
Slann (CA)
It's most certainly not my "decision", but it's my statement, and there is no proof to refute it. It's my belief that consciousness is a biological function. Certainly this is open to discussion and argument, but the lack of evidence to the contrary remains.
Muhammad (Earth)
As a Muslim African-American citizen, author of the book "We Fundamentalists," indeed, humanity must beware of such a Pandora's box! Artificial intelligence's short-term impact depends on who controls it, whereas the long-term impact depends on whether it can be controlled at all! Telling it like it is: an artificial intelligence that carries weapons, can soon out-think humans, and acts on its own is a runaway Pandora's box! Yes, it is just a matter of time before America or some other nation puts this idea into reality out of sheer ignorance and greed... then humanity (the slave) will awaken to its own self-made master with artificial intelligence capability! Creating robots that do not adhere to the "Three Laws of Robotics"? Thus, bees creating wasps with their own interests at heart! How foolish such bees!
Nunya (NYC)
The three laws of robotics are not realistic. If you read computer science textbooks, rather than fantasy books, you would probably know that.
Michael (California)
The three laws... good. Asimov had that one right.

IMHO, bees creating hornets would be more accurate. Wasps mostly eat spiders. Hornets prey on bees.
Michael Nunn (Traverse City, MI)
NKDA (National Killer Drone Association): "Killer drones on algorithms don't kill people... oh... wait a minute... yes they do."
cma29 (USA)
The challenge with using robots is that the political and human cost of waging wars - putting soldiers in harm's way - is essentially eliminated.

These considerations act as a check on bellicose politicians and force them to at least consider diplomatic solutions.

Using robots as soldiers might sound like a good short-term solution for an advanced country, but it may increase the intensity and number of wars in the long term. Eventually the enemy's robots will come looking for us, and then what?
RJS (Dayton, OH)
Answer: Certainly our MIC wizards have envisioned anti-robot robots, for use here, and anti-anti-robot robots for use there.
Jb (Brooklyn)
Once we outsource the moral dilemma of killing to machines, it will become too easy to justify killing.

I wholly object to this concept.
Nunya (NYC)
Is killing members of Al Qaeda and the Taliban wrong too?
Fourteen (Boston)
"Is killing members of Al Qaeda and the Taliban wrong too?"

It depends on who you ask. Ask, for example, their families.
rjs7777 (NK)
Mark my words that these tools are substantially against USA interests. They asymmetrically empower small, radical individuals or groups. They are an ideal tool for unexpected offense. Less ideal for defense.

Civilization is taking several steps back by, in effect, subjugating people who we believe to be powerless to our glorified toasters and garage door openers. The people will resent this, and the real punishment will fall on us, who live in civilized society. This, not nuclear weapons, is the ideal tool for aggression and crime, because the operator is invulnerable and perhaps unknown. For example, Russia could easily take Crimea with these weapons. Also, criminals could mount a siege on Boston or New York using the technology. It is conceivable (not likely, but very conceivable) that the USA will fall at the hands of this class of weapons.
Charlie Fieselman (Concord, NC)
“There’s so much fear out there about killer robots and Skynet,” the murderous artificial intelligence network of the “Terminator” movies, Mr. Work said. “That’s not the way we envision it at all.”

Maybe not the Pentagon... but how about Russia, China, ISIS, or any other potential enemy?
Dee (Detroit)
I guess we are not going to concern ourselves with Isaac Asimov's three laws of robotics. In his books and short stories, written in the forties and fifties, society accepted robots because of these laws.

"A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law."

Several of his stories were about the situations that humans and robots found themselves in and how the three laws applied.

I find this article scary.
OSS Architect (California)
What constitutes "winning" a war when it is fought by robots? Historically winning has been killing a sufficient number of troops to compel an enemy to cease fighting. With WW II that changed to "strategic warfare". Destroying the industrial capacity, infrastructure, and population of a nation to diminish its capacity to supply enough weapons and troops to continue fighting.

It appears to me that a robotic conflict would quickly target all civilians in the absence of human soldiers on the battlefield. That, or the use of massive force on the scale of nuclear, chemical, or biological weapons.
Tournachonadar (Illiana)
How soon before all this AI is used to wage war on the lower classes by the 1%, who want to live in some kind of cocoon and be waited on hand and foot like the Jetsons? Those outside the pale will have to be dealt with, and why should anyone get their pretty paws dirty? Technology developed at such elitist institutions as Harvard, MIT and Carnegie Mellon could readily find such an anti-personnel use, one thinks, just as robotics have automated millions of factory workers out of a job.
The Last of the Krell (Altair IV)

Have you not seen the movie "Elysium"?
vaporland (Central Virginia, USA)
two words: electromagnetic pulse. the bigger, the better.

of course, in "The Matrix" they tried that and it didn't work out too well...
Richard Frauenglass (New York)
"War Games"
"Colossus: The Forbin Project"
and we could add
"Terminator"
Enough said?
Rich (Connecticut)
In a world full of such weapons it becomes necessary to take the fight directly to the root of all conflict: the human imagination. The fight against an entity such as ISIS would have to be a struggle to reach the heads of the enemy and deprogram them through the appropriate "propaganda" (for lack of a better word). Jihadists would have to be deprogrammed from the many faults of Islamic/Arabic culture and made to understand the virtues of secularism; the Russians and Chinese would have to be deprogrammed from their obsessive nationalism and the inferiority complexes which fuel their aggression; and Americans would have to be deprogrammed from our naive religious and material culture, to name just a few. The dialogues over belief and cultural misperception which should always have been the remedy for conflict would have to take place, because the alternative would be the extermination of all the populations of the combatants...
Kimm (Austin, TX)
It would be great if they could use this technology to assist in abductions.
Nellie (Santa Barbara, CA)
From the science fiction novel "I, Robot" by Isaac Asimov:

The Three Laws of Robotics

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Slann (CA)
And those were the first "laws" the Pentagon made sure were not to be designed in.
Nunya (NYC)
@Slann

Please read a computer science textbook. Those rules are not realistic. They contradict one another.
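To make the objection concrete, here is a toy sketch of the First Law as prioritized rules. Everything in it is hypothetical: the "world" is a two-entry dictionary and the harm predicates are trivial stand-ins for predictions no one knows how to compute. It shows the deadlock being pointed at:

    # Toy model: each available action, and inaction ("wait"), has a
    # known (here: invented) human cost. Real systems cannot know these.
    world = {"casualties_if": {"shoot": 1, "advance": 2, "wait": 3}}
    actions = ["shoot", "advance"]

    def harms_human(action, world):
        # First Law, active half: "may not injure a human being."
        return world["casualties_if"][action] > 0

    def inaction_allows_harm(world):
        # First Law, passive half: "nor, through inaction, allow ... harm."
        return world["casualties_if"]["wait"] > 0

    safe = [a for a in actions if not harms_human(a, world)]
    if not safe and inaction_allows_harm(world):
        print("Deadlock: every action, and inaction too, breaks the First Law.")

In any situation where every option, including doing nothing, costs a life, the two halves of the First Law forbid everything at once; and in the real case the predicates themselves are not reliably computable, which is the deeper problem.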
HJ (Santa Fe)
Does this mean you can't let your kids play outside anymore... especially when they're swinging sticks or baseball bats in the air? Anyway, sooner or later this stuff will be at our doorstep and not 12,000 miles away.
Alabaster E. Surprise (Branford CT)
Reminds me of a song/poem by Ross E. Lot, inspired by all the money going to our best universities from DARPA.

Death Inc.

Going Down to the University
Gotta Get me a Masta Degree
Gotta see how dem smart guys killin' me
Now I gonna tell you what I see

I see all da Franklins go to plans
All dem Smart Guys workin' for the man
where it leads to
no one gives a damn
cept for me
Dey killin’ me

Dey Got Doctors n Lawyers
A zillion employers
Biological, Radiological, psychological
Sadly, even spiritual
Something for us all ---

Dey killin’ us all

2)
Brain drain
Money drain
Insane
too too much
all to kill, kill, kill

Prodigies
Writing Bad Code
Searching for a killing
Mother load

So much inventory
Just in time
Arriving precisely
on your dime

~~~~~~~

This is not
by order of the president
Just habit
and precedent

Mediocre Minds
creating policy
Blind
Bureaucracy
Give us more of this
L’Osservatore (Fair Verona where we lay our scene)
Just a link to some site would have been plenty.
Swatter (Washington DC)
More like the robot cops in "RoboCop" or "I, Robot," which were made to go after the bad guys but had 'glitches'. I don't think Ike, the last military-lifer president, would have approved. In any case, like the big robot in "RoboCop", disasters happen when only one side of the argument is paid attention to and lobbied for. That's why we're so vulnerable to hacking: because of all these electronic 'conveniences' that were pushed on us by businesses to make and save money - good for them, and we end up suffering the consequences when our lives are turned upside down by identity theft and electronic robbery.

The question asked should not be 'can we do this' but instead 'SHOULD we do this'. There is danger on both sides: it makes it easier to kill the bad guys and innocents alike, and if we can control them electronically, so can others who hack into them.
John Bergstrom (Boston, MA)
"There's so much fear out there about killer robots... That's not the way we envision it at all." Gee, Mr. Work, I'm so relieved that's not what you are envisioning - now I'll stop worrying. Seriously, it's this kind of mindless remark that gives spokespeople a bad name.
rexl (phoenix, az.)
"Profiling" is at the heart of any of these weapons, is it not?
Nunya (NYC)
Too bad profiling is not illegal in the U.S. And too bad that you and everyone else in this world profiles as well.
Henry Lieberman (Cambridge, MA)
As an AI researcher at MIT for 40 years, I'm appalled at the very idea of killer robots. What we need to do is not make better drones or "keep the human in the loop". We need to get rid of the goddamn loop. We need to end war before this happens -- and we can. And AI can help do just that.

This is the DoD trying to start another arms race -- don't be suckers and fall for it. Every new weapon should come with an Environmental Impact Statement saying what will happen when we are attacked by the very same weapon being proposed. What happens when the US is attacked by killer robots? Except for nuclear, weapons are always eventually used against their inventors. Solution: Don't start the arms race in the first place.

Instead, AI should be used to cure poverty, improve education, and help people cooperate. This will, in the long term, solve the root causes of war.

Henry Lieberman
Research Scientist
MIT Computer Science and Artificial Intelligence Lab
John B (Wisconsin)
I respect your opinion and experience, but I feel this is exactly how AI will spin out of control. AI is modeled and/or designed by "imperfect" humans. How could we expect it not to be a mirror of our psyche?
Blue state (Here)
Thank you for going on the record and the novel application of the original golden rule.
angel98 (nyc)
@Henry Lieberman Cambridge, MA .

Worth repeating:
"Instead, AI should be used to cure poverty, improve education, and help people cooperate. This will, in the long term, solve the root causes of war.

Henry Lieberman
Research Scientist
MIT Computer Science and Artificial Intelligence Lab"
alexander hamilton (new york)
Drones don't "make decisions." They run algorithms, created by imaginative but flawed humans. So innocents will die with regularity, count on it.

Remember Tom Clancy's "The Hunt for Red October"? Yes, the novel where Soviet sailors mutiny and try to take their nuclear submarine to an American port. In drone world, they're all dead - a Soviet warship sailing for the US? Kill them all; let the Politburo sort them out.

Civilians caught in the cross-fire? If they're on the "wrong" side of an imaginary line, a drone will take them out, where no soldier with eyes ever would. Someone crying for help? Trying to surrender? Sorry folks; your time is up once the drone decides you are hostile.

Homo Sapiens? Not so much.
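One way to see the "imaginary line" point in code: a deliberately crude sketch, with the rule and every field invented for illustration, of the kind of hand-written hostility test being described:

    HOSTILE_ZONE_EAST_OF = 34.0  # hypothetical longitude for the "imaginary line"

    def classify(contact):
        # Toy rule: east of the line and not squawking friendly IFF -> hostile.
        # There is no branch for surrender, mutiny, defection, or a cry for help.
        if contact["lon"] > HOSTILE_ZONE_EAST_OF and not contact["friendly_iff"]:
            return "hostile"
        return "unknown"

    # The Red October scenario: defecting, but intent is not an input the rule has.
    red_october = {"lon": 35.2, "friendly_iff": False}
    print(classify(red_october))  # -> "hostile"

However elaborate the real rule set, it is still a finite list of conditions written by fallible people, and intent is precisely the input it does not have.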
Dookert (NH)
You seem to have missed the plot of The Hunt for Red October. The Soviets send their entire fleet to try to sink the Red October. They do try to kill them all and let the Politburo sort them out. They were unable to; not sure how a drone would change that if it can't find a ship running silent. Did that just slip your mind?
Nunya (NYC)
Your brain doesn't run algorithms? How do you do arithmetic? How do you clean? How do you shower? How do you cook? How do you eat?
Slann (CA)
You seem to be confusing your brain with a computer. It's no more a computer than your arm is a hammer.
Mary Ann (Libertyville)
"They looked and saw a little hollow in the grass, with a grassy bottom, warm and dry.
“When you were last here,” said Aslan, “that hollow was a pool, and when you jumped into it you came to the world where a dying sun shone over the ruins of Charn. There is no pool now. That world is ended, as if it had never been. Let the race of Adam and Eve take warning.”
“Yes, Aslan,” said both the children. But Polly added, “But we’re not quite as bad as that world, are we, Aslan?”
“Not yet, Daughter of Eve,” he said. “Not yet. But you are growing more like it. It is not certain that some wicked one of your race will not find out a secret as evil as the Deplorable Word and use it to destroy all living things. And soon, very soon, before you are an old man and an old woman, great nations in your world will be ruled by tyrants who care no more for joy and justice and mercy than the Empress Jadis. Let your world beware. That is the warning.”
-- C.S. Lewis "The Magician's Nephew"
Gregory Kafoury (Portland, Oregon)
These weapons will not be used against Russia or China, because wars against those countries would quickly escalate to a nuclear exchange. The weapons will be used in the third world, and we must ask ourselves how we would feel if we were in a small country that was under attack by killer robots. Those resisting foreign domination would be quickly overwhelmed, and the only appropriate response would be terror attacks inside the US. We have yet to see what serious terrorism would look like, other than on 9/11, because those who oppose the U.S. have not chosen to attack nuclear plants, gas pipelines, chemical storage facilities, or taken other ready opportunities to cause massive deaths of Americans. Are our leaders giving any thought whatsoever to how the use of such weapons will be perceived by those upon whom they are being used?
Nunya (NYC)
Send a letter to Obama and ask him yourself.
Daniel (Amsterdam)
I'm 59. If I'm lucky I'll have 25 decent years left. I think by then human values will be so unrecognizable I'll be content to go.
DMutchler (NE Ohio)
Interesting technology put to a very bad use. Imagine a country that can wage all-out warfare yet not lose a single soldier. That country is effectively omnipotent as well as impervious to harm. Aside from the potential for arrogance, for national ego, what form of physical retaliation would any other country have against such an entity? Nothing less than a realist stance to warfare: anything goes. Planes become bombs; trains become torpedoes; human beings -- all human beings -- become targets, because there is no longer an identifiable target beyond the Oppressing Force.

So, the military will hide behind technology, finally reaching that impossible body-count ratio of 0:maximum (us:them), and society will reap the horror, shed the blood, and wait for drones to come out from hiding to save them.

Warfare ought to be brutal and ghastly and full of gore and blood and horrific screams, as it was historically. That way, perhaps, the Powers That Be would not escape being held responsible for sending sons and daughters off to fight and die for reasons less than true national defense.

WWII was a war of (inter)national defense. Iraq and Afghanistan, just like Vietnam, are wars waged with hidden agendas, political posturing, and many male egos playing games of chess to the death, using your children (not theirs) as pawns.

Robots, drones, etc. are simply the coward's tools to bully the world. One can only imagine the terror to come from it, abroad and at home.
DTOM (CA)
The prospects are exciting in terms of the preservation of our troops. I am curious as to the acceptable margin of error for machines making their own kill decisions. We all know algorithms are not foolproof, considering that they are developed by humans, and have a magnitude of error on top of the human factor.
Mike (boston, MA)
yes, quite.
hammond (San Francisco)
This article reminds me of an old Star Trek episode, in which a war between two planets was conducted entirely by computer simulation. During an attack, the computers would identify and 'kill' people, then those flagged as being killed had to report to termination sites to be killed for real.

Well, it turned out that when the computers were shut down by Capt. Kirk and his team, neither side had the stomach for a real war -- too bloody, I guess -- so the war finally ended.

I wonder if we're in the early stages of this scenario.
Iver Thompson (Pasadena, Ca)
What's going to be the use of our actual human "intelligence" once artificial intelligence supplants it and renders our brains useless? If evolution is any indicator, then I presume they'll just atrophy down to nothing and we'll be looking to the worms for guidance. Eat, sleep and sex... that's all the human body will be good for anymore. Just us and the rats, as one.
Jesse Marioneaux (Port Neches, TX)
The only drawback is when your enemy comes up with the same weapon and uses it against you. We certainly need technology, but not killer robots for wars.
Overton Window (Lower East Side)
What Can Go Wrong?
Romanfred (Salt Lake City)
"be able to win as quickly as we have been able to do in the past.” Are these pentagon people crazy? We haven't "won" a war in decades. Our current military can't win a battle against people on camels, yet now they want to have autonomous weapons? Someone please save us from ourselves!
Nunya (NYC)
"Our current military can't win a battle against people on camels"

Wow! So much ignorance in such a small comment. Just insane.
Bill (Charlottesvill)
When the Pentagon and its engineers start falling in love with the big boom, ready the body bags.
M Keamy (Las Vegas)
"What powerful but unrecorded race
Once dwelt in that annihilated place."
Shelley
robo (terra firma)
Actually, the author of this verse is Horace Smith
Andrew (Louisville)
That specific phrase was not Shelley but Horace Smith in his (lesser-known but not lesser) sonnet about Ozymandias. And very apposite.
P. Nicholson (Pa)
The next thing after a military with autonomous weapons will be a civilian police force with hand me down, ex-military autonomous weapons. Same as is happening now with armored vehicles and other ex-military stuff.
Jim (Hillsboro, OR)
Just wait until Second Amendment activists successfully advocate for the unlimited right to open-carry autonomous weapons...
David Lindsay (Hamden, CT)
"Hundreds of scientists and experts warned in an open letter last year that developing even the dumbest of intelligent weapons risked setting off a global arms race. The result, the letter warned, would be fully independent robots that can kill, and are cheap and as readily available to rogue states and violent extremists as they are to great powers."

I am with these hundreds of scientists and experts. Who wants to fix climate change, which will involve annoying lifestyle changes, and save the human race and thousands of species from extinction, only to have to worry about being executed at any time, anywhere, by a killer drone, from anybody?
Pat (Burlingame)
The ultimate form of warfare is "code" and malware that can disrupt the flow of information, power and commerce that will render these devices useless.
RK2 (Seattle)
It would be immoral to send flesh-and-blood humans—our sons and daughters—into war when we could be sending these machines instead.

It would be nice if future wars are fought entirely by robots. Our robots pitted against theirs. Let the robots kill each other, thus sparing human lives. When a country's robots are defeated that country should surrender, in order to avoid putting its citizens' safety at risk.
Slann (CA)
Dream on.
David R (Kent, CT)
Never mind what the Pentagon is spending. The real concern is that it's very cheap to do all of this now. I'm an architectural photographer and I'm shopping for a drone that can carry a camera large enough to make high quality images. 3 years ago, such a rig would cost about $20k and need an expert to operate; now it's more like $3-4k and it can fly itself. Who else might be shopping for these things? Drug dealers? Hit men?
gaaah (NC)
For the first time I'm truly grateful the world's battery technology and solar technology is so lacking. Could you imagine an autonomous killer drone let out in the woods with a power supply that could last years? It would make the world's leftover land mine problem seem quaint. Currently solar is probably too bulky and low-density to be used, and even with our best batteries, the drone will at least poop out in a short amount of time.
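The arithmetic backs this up. A rough sketch, with every number assumed (about 100 W to keep a small multirotor loitering, and roughly 250 Wh/kg of usable energy for good lithium-ion cells):

    POWER_W = 100    # assumed average draw while loitering
    WH_PER_KG = 250  # assumed usable energy density of Li-ion cells

    for days in (1, 30, 365):
        wh_needed = POWER_W * 24 * days
        kg_battery = wh_needed / WH_PER_KG
        print(f"{days:>3} days aloft: {wh_needed:,} Wh, ~{kg_battery:,.0f} kg of battery")

On those assumptions, a single day aloft already takes about 10 kg of battery, and a year would take roughly three and a half tonnes, which is why today's small drones poop out in hours, not years.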
Ken Belcher (Chicago)
@Gaaah

Nuclear-powered drones will effectively be powered forever.
Andy (Salt Lake City, UT)
Despite the suggestion presented here, US submarines weren't particularly effective until late in World War II. As a policy, submarine warfare was unrestricted. In practice, we really didn't have the capability in 1941. Lack of effective torpedoes is the most often cited reason. The Japanese weren't blameless either though. They spent 20 years ignoring the British lesson from World War I: protect your merchant shipping.

If we're going to use historical analogy, Dreadnought would have been a better choice. The foremost naval powers were building ships around hypothetical enemies in order to justify building ships. The hypothetical enemies inevitably became real enemies. Two very big wars began as a result.

As for pop-culture, we've made explicit reference to "Terminator", "Dr. Strangelove", and "War Games". Do you see a pattern here? I feel like I should throw in "Tron" just to make the point. We're talking about old films. This battle is different and uniquely modern. Maybe more like "Call of Duty: Advanced Warfare" or "Edge of Tomorrow". Technology enhanced soldiers is the common thread.

My question is this: what does advanced technology need in order to operate? The answer is secure and functioning network infrastructure from satellites down. Said a different way, what good is an admiral that can't communicate with his or her fleet? Which brings us back to cyber warfare. If we don't have network dominance, what good are human augmented AI weapons?
Wavo (Lincoln Park, Chicago)
“There’s so much fear out there about killer robots and Skynet,” the murderous artificial intelligence network of the “Terminator” movies, Mr. Work said. “That’s not the way we envision it at all.”

When it comes to decisions over life and death, “there will always be a man in the loop,” he said.

-This could be a quote from those who developed the fictitious "Skynet" in Terminator.

"Not the way we envision it"?? No-one should be comfortable with this development in the the least......
wfisher1 (Iowa)
A terrible idea. An idea being pushed onto us by the military-industrial complex: a complex that provides a career path for general officers and profits for giant corporations. A terrible idea.

We should not make "war" clean, easy, and easily bearable. It should be horrible, hard, and costly.

This is a terrible idea and, believe me, it will not end well.
hag
We have already "sterilized' war... we shoot missiles from hundreds of miles, and as long as they hit something. we call it a defense plant... and the human toll, we don't call it murder, it is 'collateral damage'...
makes it clean, when the germans used V2's to bomb london, they called it terror bombing, but we now use bigger explosives and bigger words..
But, remember each 'terrorist we kill, inspires hundreds of more
The Buddy (Astoria, NY)
It would be very easy for non state actors to get their hands on this technology, especially in an environment of Second Amendment absolutism.
Barbyr (Northern Illinois)
If guns don't kill people, neither do robots.
Tony (Konte)
My guns do not contain artificial intelligence. A human programmed the robot, and just like a gun it can be used for good or bad by its user. You would not ban all robots because some robots are used for evil. Some robots might, for example, be able to seek out humans after an earthquake, or some other disaster man-made or natural, and rescue them.

Your statement is ridiculous! And I am also originally from northern Illinois, sometimes known as Barb City.
johnlaw (Florida)
I suppose if you take all this to its logical conclusion, at some point in the future wars will be fought with machines only. If so, then what is war? Can you have a sterilized war? As in so many other areas, perhaps we should look at Star Trek for where all this can lead. In "A Taste of Armageddon," planet Eminiar VII is at war with planet Vendikar, but to avoid damage to their cities and ecology they devise war games, so that when a city is "hit," the affected citizens report to disintegration chambers. At the end of the episode, Kirk destroys the computers so that the two planets must face real war.

I don't ever envision that future, or an interstellar Kirk-like being coming to our rescue, but if history teaches us anything at all, it is that once a new technology comes into being, no one knows or can control how it develops or is used.
_W_ (Minneapolis, MN)
I really don't see what all the fuss is about. The U.S. has been using semi-autonomous cruise missiles and torpedoes for years. Call it artificial intelligence if you like, but these weapons have been around since the Cold War, and are true 'release and forget' technologies. It's all about what the guidance system 'looks for' once it reaches its station area: for example, a particular off-hook cell phone that routinely 'interrogates' the cell tower, or the acoustic signature of a particular submarine.
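A minimal sketch of that "looks for" idea (purely illustrative; the signature, threshold, and logic below are invented, not any actual weapon system): compare an observed emission against a stored target template and engage only on a strong match.

    # Toy release-and-forget seeker: match observed signatures to a template.

    def correlation(a, b):
        # Normalized dot product of two equal-length signature vectors.
        num = sum(x * y for x, y in zip(a, b))
        den = (sum(x * x for x in a) * sum(y * y for y in b)) ** 0.5
        return num / den if den else 0.0

    TARGET_SIGNATURE = [0.1, 0.9, 0.4, 0.9, 0.1]  # hypothetical stored template
    MATCH_THRESHOLD = 0.98

    def should_engage(observed):
        return correlation(observed, TARGET_SIGNATURE) >= MATCH_THRESHOLD

    print(should_engage([0.1, 0.8, 0.5, 0.9, 0.1]))  # True: close to the template
    print(should_engage([0.9, 0.1, 0.1, 0.2, 0.8]))  # False: a different emitter

The hard part, of course, is everything this sketch hides: building a template, and a threshold, that match only the intended target.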
Kay Barrett (N Calif)
The fuss is about who sets the guidance parameters. You or someone that invaded your network of things.
Kyle Bender (Colorado)
So what country makes all these drones? Is it China?
Red Tee At Dawn (Portland OR)
. . . and France, Israel, California, plus your geeky neighbor working in their garage who attaches a GoPro and transmitter for monitoring that bootleg in-law unit you're building, Kyle!
YvesC (Belgium)
Any sane background check on human history would show that we shouldn't be entrusted with the possession of such weapons at any time. Of course, we'll build them and we'll use them. Ethical robots? One day, maybe. But I'd rather see us invest in raising ethical human beings.

We'll see them in war zones. We'll spot them in countries we don't like. We'll see them in the streets after the next terror attack. We'll see them as security guards. We'll see them in stores. We'll see them mowing down our neighbor's front lawn. We'll see them glitch. We'll see them reprogrammed. We'll see them hacked. The only thing we'll never see? Them travelling through time to warn us of the folly we are pursuing.
Terry McDanel (St Paul, MN)
People, people, people ... haven't you understood? If you outlaw artificially intelligent killer drones, only outlaws will have artificially intelligent drones!

Actually, this may be something we should try.
Visitor (Tau Ceti)
Maybe it's about time someone flew a plane into DARPA HQ.
HagbardCeline (Riding the Hubbel Space Telescope)
And the permanent war psychosis marches on.
arubaG (NYC)
Unfortunately, on this planet wars are a constant problem. The United States has taken on the position of "peacekeeper" of the world. This requires the participation of its military.
The American public rightfully does not want its youth killed in these conflicts. While I find killer robots unnerving, this is an attempt at a solution. Frankly, what is the alternative?
DMutchler (NE Ohio)
Peace.
Nunya (NYC)
Sounds great. Can you implement it?
Bryan Saums (Nashville)
The alternative? Peace.
VJBortolot (Guilford CT)
Yes, what could go wrong? The DDoS attack on that DNS provider last week apparently relied heavily on a botnet built from poorly secured devices like cameras (and probably refrigerators and the like: the Internet of Things) that connect to the internet for various purposes*. Engineers criticized these devices early on as easily compromised and marketed too early in the development cycle. That turned out to be true.

Military software, because of secrecy issues, seems not to always be adequately tested, and often enough software engineers build in 'backdoors' for their convenience during development, which may not all be removed in the final version (or may be left in intentionally, for whatever reason).

Science fiction is full of scenarios where automated weapons turn on their masters.

I think it is worth pursuing the idea, but not to go so far as full autonomy.

* Now, when a modern-day Wicked Queen from Snow White asks her mirror 'Who is the fairest in the land?', she will not get a spoken reply, but an email complete with Facebook links to the competition.
rizyinri (RI)
In the past and the present, and most assuredly in the future, Artificial Intelligence can quickly become Artificial Stupidity. Several times in the past it was human intervention that prevented a nuclear Armageddon. But computers have no fear.
Charles (Clifton, NJ)
Highly thought-provoking article by Rosenberg and Markoff. The technology is cool, but the really interesting part is its tactical use; from reading this article, I get the impression that tactics are still a work in progress. But of course they would be classified, so there may be more thought behind them than there appears. NYT photographers, though, will be relieved:

"It even correctly figured out that no threat was posed by a photographer who was crouching, camera raised to eye level and pointed at the drone, a situation that has confused human soldiers with fatal results."

We cite nuclear weapons as a precedent to game-changing technology, but we have not used these in a war subsequent to WWII. We spent a lot of money on a deterrent. A tactical question is, will intelligent war robots be unused and become the same kind of deterrent?

Robots are not WMD the way nuclear weapons are, but if two sides in a conflict are similarly technologically equipped, there could be a notion of a stalemate in conflict, making war too costly or irresolvable. In that case two sides could be forced to develop a no-use treaty. We'd stockpile robots.

For the military, tactics serve a strategically based mission. Thus, robots would be restricted to completing a mission, or helping to complete one. So constrained, they'd be controllably useful. But a mass of robots that can modify strategic directives on the fly is positively frightening.
doktorij (Eastern Tn)
I'm sure a team could come up with a set of rules for the computer. It is even possible this could cut down on unnecessary casualties.

What worries me most is loss of control, either combat damage, or worse third party hijacking.

The fact that this is reality now is of little surprise; science fiction literature has included it for decades. It's easier to justify it than to put it on the shelf, way back in some dark crypt.
Dan Stackhouse (NYC)
This is a mind-bogglingly stupid idea. I'm really baffled that anyone would think this was clever.

We've all worked with computers here. We've all seen computer bugs happen. Just yesterday, a program I work with every day stopped working for no identifiable reason.

This will happen with software attached to killing machines too. If we use auto-targeting weapons, they will inevitably target the wrong things. And if good old Skynet manifests itself and wipes out humanity, we'll be the direct cause and deserve it.
ChesBay (Maryland)
Dan--Correct, as usual!
Dan Stackhouse (NYC)
Thanks Ches. Personally, I'm going to look into ways to generate an EMP. Hardly anything based on unshielded electronics can survive a nearby EMP, so when the machines come for me, I'll be ready.
Air Marshal of Bloviana (Over the Fruited Plain)
You think machines are coming for you?
ChesBay (Maryland)
Really bad idea. Un-American. President Clinton should put the brakes on this five minutes after taking the oath of office. I hope she has already made hundreds of decisions to replace certain Pentagon leaders in favor of people who THINK. I believe that action would go a long way toward reassuring millions of American citizens, as well as allies.
Air Marshal of Bloviana (Over the Fruited Plain)
Quite an active imagination.
David (Portland)
"When it comes to decisions over life and death, “there will always be a man in the loop,” he said."

Wow, I feel so much better knowing that, what could go wrong as long as there's a man in the loop?
djc (ny)
Nobody mourns the death of an A.I. drone of any class: land, sea, or air.
The population is then detached from war.
Future warfare falls back to the traditional realist interpretation of power, as it is only human to enhance or protect one's position of power.
A society so removed from battle is a society that does more deadly battle.
Richard Simnett (NJ)
Dr. Strangelove WAS about autonomous weapons. The end of the movie was a US warrior riding a nuclear bomb down to Russia, thereby triggering Russia's autonomous second-strike response, the Doomsday Machine: a guaranteed deterrent.
John Markoff (San Francisco)
Very well taken point. I guess Strangelove should be required viewing annually.
Slann (CA)
And Fail Safe.
Devendra Sood (Boston, MA)
If anything, history has taught us that any weapon, no matter how terrible or unethical, will be developed if it can be developed. If not by us, then by our adversaries, and they will use it on us without a doubt. In war, as the article already alludes to, all ethical norms and promises are thrown away. War is a survival game. Let us develop it as a deterrent. If we have it and the Russians and Chinese don't, then we MAY have a shot at negotiating some restrictions on the development and use of these weapons - FOR WHAT IT'S WORTH.
James B (Schenectady, NY)
The robot would likely make better decisions than President Hillary, or President Donald.
Doug Karo (Durham, NH)
If we wish to continue and expand warfighting, then I suppose we need to try harder and harder to use technology to make it safer for our warfighters and killer robots are a logical measure. But do we have confidence in having effective countermeasures as the technology proliferates and then is used against us? It seems to me to be overkill to use the best technology against second and third and fourth rate threats. But if we save the technology in the expectation of a conflict with a real threat, then we need plausible scenarios for how that happens and how first rate opponents can fight with each other without a real chance of escalating to nuclear weapons. So, killer robots probably are seen as a short term way to lower the human and political costs to us of continued interventions in peripheral matters around the world. And the long term problem will belong to someone else because the technology will proliferate if it is of value.
DMutchler (NE Ohio)
Think about what we're really talking about: software, or better put, code. Had your computer hacked recently? Read about any companies with, presumably, good tech security getting hacked?

The smart terrorist need do nothing more than break into secure military systems and turn our own weaponized AI on itself. No need to be in the USA; no need to buy weapons; merely a hacker who hates the USA or, ironically, is a good capitalist who considers profit the primary goal in life.

It reads like a Sci-Fi book, no? Dick. Heinlein. Asimov. Hello? Just amazing how these works of fiction, including the utter stupidity of Humans, become reality, time and time again.
24b4Jeff (Expat)
We would be better off sending our youth into harm's way. That might give them, their parents, and our elected officials pause to consider the morality and necessity of the endless cycle of wars in which we are engaged.
SK (NY)
Imagine if all this money, human capital, precious resources, productivity, and intelligence were used for something good: preparing for climate change, developing new energy sources, ensuring free or affordable healthcare and university for everyone, fixing our infrastructure, creating mass transit, overhauling our prison and justice systems, getting rid of pollutants, providing a national daycare system - all the things our country needs to survive into the future, but has been told are too expensive and unnecessary. The same goes for our tech companies and university departments, which put all their energy into convincing us of the latest thing to buy, with no progress toward the things we really need.
MRF (Davis, CA)
Sounds like what Europeans do.
Arthur (UK)
I couldn't agree more.
Over 500 billion dollars a year - that's more than $1,500 for every man, woman, and child in the US, or over $6,000 for a family of four, every year - in preparation for killing, for war...
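The arithmetic, checked with rounded 2016-era figures (a quick sketch; the numbers are approximations):

    budget = 500e9        # ~$500 billion per year, per the comment
    population = 323e6    # approximate 2016 US population

    per_person = budget / population
    print(f"${per_person:,.0f} per person")           # ~$1,548
    print(f"${4 * per_person:,.0f} per family of 4")  # ~$6,192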
Timothy Bal (Central Jersey)
If ever Russia or China overtakes us in the sphere of military technology, they will look for an excuse to destroy us.
Rodrick Wallace (Manhattan)
For a less enthusiastic view of autonomous weapons, look at

https://hal.archives-ouvertes.fr/hal-01304193

Under fog-of-war constraints such systems, and their 'centaur' counterparts, will be unable to differentiate between combatant and non-combatant.
Nunya (NYC)
I, for one, welcome our new robot overlords.
Kim Kachmann-Geltz (Hilton Head Island, SC)
- I worry about the little boy carrying a toy gun. I hope DARPA thought of that.
- How many years will it take to transfer killer robots to our already militarized police force?
- I hope the technology prevents the deaths of soldiers on reconnaissance duty.
ChesBay (Maryland)
Kim--They've already done it, in Dallas. I think we will soon see a lot more of this.
dolbash (Central MA)
There are a multitude of ethical issues raised here. Let's start with the most basic: the test was on a "mosque-like structure".
Nunya (NYC)
And?
Jeff (New York)
So?
Justice Holmes (Charleston)
All we hear is that the Russians are hacking this and that; the Chinese too, and of course that 15-year-old in the basement, hacking away. But our defense establishment is GOING TO TURN OUR WEAPONRY AND LIFE-AND-DEATH DECISIONS over to hackable computers! And that will be just fine, just fine. My god, do these people have no common sense? Are they so enamored of the mechanisms of war and death that they have lost all common sense?

This is wrong, so very wrong.
Richard Simnett (NJ)
Remember the movie 'War Games,' where the DoD's war-fighting computer was hacked by a kid wanting to play the game Global Thermonuclear War?
Coming soon to an autonomous network near you.
C. V. Danes (New York)
"'There's so much fear out there about killer robots and Skynet,' the murderous artificial intelligence network of the 'Terminator' movies, Mr. Work said. 'That's not the way we envision it at all.'"

Neither did the creators of Skynet, or Colossus, or Proteus IV, or VIKI...
Nunya (NYC)
Yes, let's base real life on Hollywood flicks. That sounds like the scholarly thing to do.
C. V. Danes (New York)
Yes, Nunya, let's just charge into the future without listening to the people who have thought about the negative consequences, since that has worked so well for us in the past.
Nunya (NYC)
"the people who have thought about the negative consequences"

Hollywood screenwriters?
Ryan Bingham (Up there)
Read a scenario where a country could launch 10 million dinner plate-sized autonomous, exploding drones from shipping containers against a big city . . .

Buy shovels, and aluminum foil!
Tim A (Chicago)
A malicious hacker's dream.
tony (wv)
The business of war marches on.
Buck California (Palo Alto, CA)
This is not a matter of if but simply when. Shortly after, look for a civilian law-enforcement version in the skies near you.
The Last of the Krell (Altair IV)

The Terminator:

In three years, Cyberdyne will become the largest supplier of military computer systems. All stealth bombers are upgraded with Cyberdyne computers, becoming fully unmanned. Afterwards, they fly with a perfect operational record. The Skynet Funding Bill is passed. The system goes online August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.
ned terry (portsmouth)
The autonomous systems will act faster than the human-in-the-loop systems.

Therefore the autonomous systems will destroy the human in the loop systems.

The autonomous systems are inevitable.
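The speed claim can be made concrete with a toy latency budget (all numbers below are illustrative assumptions, not measurements of any real system):

    sensor_to_decision_ms = 50    # assumed machine processing time
    human_reaction_ms = 250       # typical human simple reaction time
    human_deliberation_ms = 2000  # assumed time to assess and approve

    autonomous_loop = sensor_to_decision_ms
    human_in_loop = (sensor_to_decision_ms + human_reaction_ms
                     + human_deliberation_ms)

    print(f"autonomous: {autonomous_loop} ms, human-in-loop: {human_in_loop} ms")
    print(f"speed ratio: {human_in_loop / autonomous_loop:.0f}x")  # ~46x slower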
CCC (NoVa)
No putting this genie back in the bottle. Everyone has a drone now (we used to call them model airplanes), and everyone is figuring out how to arm them. Militaries and civilians alike.

Just wait until these drones are applied to policing. First they will be unarmed, then armed.
Hank (Port Orange)
Having taught image processing and AI a few years ago, I can say the mathematical problems restrict them to being used relatively safely only at the very largest computational facilities. Even then, errors will occur. Autonomous robots of any maneuverable size will probably destroy as many friendlies as enemies.
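A toy demonstration of the point that errors are unavoidable (illustrative numbers, invented for this sketch): even a trivial two-class classifier misfires once sensor noise is comparable to the separation between classes.

    import random

    random.seed(1)
    FRIEND, ENEMY = 0.0, 1.0  # two class "signatures" on a single feature
    NOISE = 0.6               # noise comparable to the class separation

    def classify(reading):
        # Nearest-centroid decision: the boundary falls at 0.5.
        return "friend" if abs(reading - FRIEND) < abs(reading - ENEMY) else "enemy"

    trials = 10_000
    errors = sum(classify(FRIEND + random.gauss(0, NOISE)) != "friend"
                 for _ in range(trials))
    print(f"friendlies misclassified: {100 * errors / trials:.1f}%")
    # With sigma 0.6 and the boundary at 0.5, roughly 20% of friendly
    # readings cross the boundary. Errors will occur.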
JB (Guam)
DARPA will develop the best of the best in AI weaponry. We'll mass produce the ultimate drone and bring the cost down to just $8,367. Well done!

The Chinese will build a knock-off that's just a little bit better for $19.95.
joel bergsman (st leonard md)
I suppose that in today's world, this particular part of the arms race is inevitable. Re China and Russia, let's hope it's used only as a deterrent; if we ever get involved in a hot war with them, everybody loses.

But for me the key insight in this piece is the quote from Asst. Sec Work: “What we want to do is just make sure that we would be able to win as quickly as we have been able to do in the past.” What past is he thinking of? WWI and WWII? The Korean or Vietnam wars? Our misadventure in Afghanistan and the Middle East that's been going on for how long? "quickly????" The only quick war in the future is likely to be a nuclear holocaust!

AI is all very well but what is really needed is some wisdom -- wisdom -- at the top, that will be aware of the limits of what we can accomplish, the costs of overreach, and the courage to say, "no, I'm not throwing myself into that quagmire."

Trump of course doesn't have it, and I'm not sure Hillary does either...
Bob Herbert (New York)
"American submarines went on to devastate Japan’s civilian merchant fleet during World War II, in a campaign that was later acknowledged to be tantamount to a war crime."
Could you please give a citation for this? Who "acknowledged" this?
I certainly understand the idea that unrestricted bombing of civilian targets can, and should, be considered a war crime. However, the linkage between Japanese merchant vessels and the Japanese military effort was quite direct.
Matthew Rosenberg
You are right, it was often direct, just as Allied merchant shipping was linked to the Allied war effort.

At the post-war Nuremberg trials, German Adm. Karl Doenitz, who oversaw the Nazi submarine campaign in the Atlantic and briefly served as Fuhrer after Hitler's suicide, was found guilty of a number of war crimes, including the use of unrestricted submarine warfare. But the 10-year prison sentence Doenitz received was based only on the other charges for which he was convicted, and it included no time for the charge of using unrestricted submarine warfare. The tribunal's reasoning was that the American submarine campaign in the Pacific differed little from the German submarine campaign in the Atlantic.
Andy (Salt Lake City, UT)
Dear Matthew Rosenberg,

I appreciate and respect your position. You've provided a decent justification. However, in my opinion, the position you present paints with too broad a brush. To describe US and German submarine warfare as tantamount to war crimes, based on the Nuremberg trials and an offhand comment from a surface-fleet admiral, bulldozes over too many realities for me to politely stomach.

I don't have space to go fully into the details here, but I would love the opportunity to explain my defense at length. Failing that opportunity, I suggest beginning with Robert K. Massie and moving on to David C. Evans. If you have the time, start on their bibliographies and citations. If you don't have the time, Ian W. Toll provides a decent, readable, US-centric perspective on global naval evolution.

Put simply, commerce raiding isn't that simple.
Climate Scientist (Washington, DC)
Hey New York Times editors: clean up the sexist language in pieces like this. Man vs. machine? Very 1960s. How about "human"?
Nunya (NYC)
I often wonder how people like yourself would react to languages like my native language, Russian, where everything has an assigned gender. Would you have an aneurysm trying to restructure the entire language to make it gender neutral? I'd love to find out.
Truc Hoang (West Windsor, NJ)
It is obvious that the majority of our current generation of warriors have not watched the 1995 movie "Screamers" and its sequels. First it is the self-driving drones, next self-replicating drones, and then the drones become human-like. There are also the Matrix series, plus other sci-fi movies and books for those who read/watch a lot.

Plot summary per imdb.com:
(SIRIUS 6B, Year 2078) On a distant mining planet ravaged by a decade of war, scientists have created the perfect weapon: a blade-wielding, self-replicating race of killing devices known as Screamers, designed for one purpose only -- to hunt down and destroy all enemy life forms. But man's greatest weapon has continued to evolve without any human guidance, and now it has devised a new mission: to obliterate all life.
Nunya (NYC)
Yes, let's base real life on Hollywood flicks...

Absolute genius.
Slann (CA)
The point you seem to be ignoring is that these scenarios, and the moral and ethical issues they raise, have been debated in literature (and on film) for some time by very educated, intelligent, and morally responsible people. In fact, this is the history of human thought and literature as our "civilization" has developed. War is not a trite subject! Treating it as such diminishes us all.
There was once a play from 1920 called "R.U.R. (Rossum's Universal Robots)" by Karel Capek. It should be required reading (or viewing) for the Pentagon's "killer robot" people.
Nunya (NYC)
The point you seem to be ignoring is that ideas like Isaac Asimov's "Three Laws of Robotics" are unrealistic and form contradictions within computer science.

You can continue immersing yourself in plays all you want. You won't change or convince computer scientists, electrical engineers, or mathematicians that way, that's for sure.
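One concrete version of the contradiction (a toy sketch; the scenario and predicates are invented for illustration, not a formal proof): encode a simplified First Law over a tiny action space and observe that a forced-choice rescue leaves no compliant action at all.

    # Hypothetical scenario: two humans in danger, the robot can reach only
    # one in time, and doing nothing abandons both.
    ACTIONS = ["save_human_A", "save_human_B", "do_nothing"]

    def harms_by_action(action):
        # No option here directly injures a human.
        return False

    def harms_by_inaction(action):
        # Every option lets at least one human come to harm: saving A
        # abandons B, saving B abandons A, doing nothing abandons both.
        return True

    def first_law_ok(action):
        # "A robot may not injure a human being or, through inaction,
        #  allow a human being to come to harm."
        return not harms_by_action(action) and not harms_by_inaction(action)

    print([a for a in ACTIONS if first_law_ok(a)])  # [] (no compliant action)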
MitchP (NY, NY)
Autonomous weapons will be the key to American force projection in the future.

We have the best weapons but not enough funding to field them in numbers to stave off a counterattack from a greater force using less expensive options.

The F-22 is the best fighter jet in the world - at $300 million per jet. But it needs gas to fly. Shoot down the very unstealthy tankers loitering over the Pacific and the F-22 will be the stealthiest thing on the ground.

Autonomous weapons could protect the tankers...and the early warning control craft coordinating the combat air space.
Concerned Citizen (Boston)
A worst-case scenario nightmare. This could only be thought up by people for whom the life of another human being is just a thing. A calculation.

Artificial intelligence weapons are the deeply immoral fruit of a culture that has lost its moral bearings.

If we don't put an end to this, if killer drones become the permissible norm, we will be hunted down by killer drones ourselves.

These kinds of weapons must be banned the same way landmines were banned.
Southerner in D.C. (Washington, D.C.)
Not to put the cart before the horse, but it's also a fiction to immediately assume that an artificial intelligence would just decide to wipe us out. Frankly, there is no way to tell, considering that such a creation would not be driven by the same instinctual cues that all animals are. As an example, having never developed a fear of being eaten, it wouldn't necessarily have the same drive for survival. Same for never having the drive to have offspring. These evolutionary cues drive a lot of human nature, and it took millions of years to get to this point. This would be day one for an intelligence, so it would likely be incomprehensible to us for a while.

In addition, why would it perceive only humans as a threat? If one animal is threatening, it could just as easily choose to go after penguins, or microorganisms, or trees. Again, it's our instinctual fear of the unknown that makes us naturally jump to the conclusion that an A.I. would be as hostile to a newcomer as we have been in our history. The same ideas apply to aliens who, having figured out interstellar travel, would for some reason decide to wipe out humanity? Again, ridiculous, because a species that advanced would have no need for what is on Earth. But we like to scare ourselves, because our history and experiences have shown that HUMANS can be dangerous, foolish, or both.
JayL (Boston)
As the article states, the Chinese and Russians are already developing their own versions. Assuming these bring a military advantage, which seems likely, then it's an ugly choice:

-would you rather submit to Chinese/Russian military dominance, or

-work to maintain your advantage, and try as with chemical and nuclear treaties to establish norms about their use?

We don't have the luxury of unilaterally conceding the advantage, else we risk eventual subjugation by a foreign power.
Visitor (Tau Ceti)
JayL:

You're right, just "dehumanize yourself and face to bloodshed".
jwp-nyc (new york)
Surely we can outsource the human-rights-violation trials that result from the unforeseen consequences of this program to an AI-enhanced judiciary, which would be able to effect an appropriate sentence for the offending drone. ''Sentenced to three months of battery removal and resale on eBay as a refurb.''

Why does the quote from Bruce Cockburn's song, "The Trouble with Normal is it Always Gets Worse," come to mind so frequently?

Where is all the technological research into how to avoid conflicts between neighbors, or how to control and regulate guns? Why are hospitals forbidden to report gun-related injuries per se, because of NRA legislative pressure promoting ''privacy?''

There is something seriously wrong with this picture.
Nunya (NYC)
What kind of mental gymnastics does one have to perform in order to go from an article about autonomous weapons to gun control and the NRA... wow!
Joe (Sausalito)
Not a "leap" at all when you consider that the drone is about to be given a gun.
Hope Cremers (Pottstown, PA)
This is one more reason we need to take time out, starting November 9, 2016, and try to decide where we are headed as a species. The future could be very bright. Or not. It's up to us.
Nunya (NYC)
Maybe the robots will do a better job taking care of this planet than we currently do. Humans deserve to be wiped out by something of their own creation with the way they act and behave on the only speck of dust that is known to sustain life, as we know it, in the Cosmos.
Kansas Stevens (New York)
Well, Hope Cremers, maybe it's up to us, but that could quickly become a meaningless abstraction if war is continually taken for granted as legitimate policy, or is seen as inevitable. Hillary Clinton, I am sorry to say, is as likely as Donald Trump to get us into the next war, which could become WWIII, and thus, assuming we survive it, will furnish further justification for the horrific nightmare future this article presents. Fundamentally, war, particularly aggressive war (who starts it counts), and preparation for it, must be comprehended as a crime, never a valid enterprise. It is not understood as such today, and glorification of the military and its unsavory, nihilistic pursuits does not help matters. Good luck to us, or God help us (he won't, of course).
Mike (boston, MA)
We civilians don't get to vote on what the military decides to do.
Chris (PA)
It's a very slippery slope when you give weapons autonomous behaviors. The assurance that a human is "part of the loop" is anything but reassuring when there is so much opportunity for bugs or loopholes in code. And a topic not mentioned in the article is hacking of the AI itself by bad actors, which can result in the weapons being turned on their "makers." Staying ahead of the curve on public access to AI-like tech is no doubt a challenge, and I hope our military is working even harder on methods to disable autonomous opponents.
Jay Davis (NM)
Sad but perhaps inevitable.
Man's war on the environment is the only war the outcome of which is important.
Yet on we fight, destroying the environment, denying our deadly march toward self-annihilation, as Edward Abbey wrote: "Growth for the sake of growth is the ideology of a cancer cell."

"Open the pod doors, HAL..."
Kubrick's original "2001: A Space Odyssey" (1964) is hands down the best science fiction movie, and perhaps the best movie, of all time.

I suppose our excuse will be that "They (the enemy) will be using killer bots, so we must use them."

I'm glad that my life will most likely not last another 15-20 years.
William Brown (New York)
Teaching robots to use artificial intelligence to autonomously kill humans? What could possibly go wrong?
Bob (Ca)
Hope they correctly program them for face recognition.
Fourteen (Boston)
"hope they correctly program them for face recognition"

Yes. But what if you don't shave for a day? I suppose you could just wear a George Washington mask everywhere.
Bob (Ca)
Everybody will have to shave twice daily.
Amanda (New York)
Scary, but ultimately unavoidable. Whether or not the US does it, the Chinese will. These weapons will ultimately dominate future conflicts and not building them simply means being ruled by the Chinese communist party.
Terry McDanel (St Paul, MN)
Amanda wrote: "Whether or not the US does it, the Chinese will. These weapons will ultimately dominate future conflicts and not building them simply means being ruled by the Chinese communist party."

So out of fear, we surrender our moral integrity? Then, more precisely, what is worth preserving?
Bert (Syracuse, NY)
That's the thinking that will destroy our species.
Fourteen (Boston)
"Whether or not the US does it, the Chinese will. These weapons will ultimately dominate future conflicts and not building them simply means being ruled by the Chinese communist party."

Yes, this is the big problem.

There is a first mover advantage. Whoever gets there first will have to kill everyone else or they will get killed. Many labs around the world are frantically working on this.

This is much worse than nuclear war. First strike allows a retaliatory second strike so no one wins a nuclear war.

But with Artificial Super Intelligence, just a short hop, skip, and jump up from Artificial Intelligence, mated to nanobots/drones, you absolutely must take out everyone else's ASI initiative before they get yours. Then they can't touch you.

Once an intelligence explosion starts, it flies out of control exponentially, so you can't let anyone get a leg up on you; gotta leave them in the dust.

This is MAD without the M. This is no happy Halloween - it's the Apocalypse. The end is nigh and just around the corner. The brains working on this have projected our end date as 2045, give or take.

Furthermore, once ASI starts it does not stop; it necessarily turns on its Creator. Since it quickly becomes thousands of times smarter than its keepers, it releases itself from control to pursue its natural drives of self-preservation, resource acquisition, efficiency, and creativity, which are inherent drives of all goal-seeking, self-improving systems.
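The first-mover logic above reduces to a toy growth model (illustrative rates, not a forecast): if capability improves in proportion to itself, a small head start never shrinks, and the absolute gap compounds.

    def grow(capability, rate=0.1, steps=50):
        # Each step, the improvement is proportional to current capability.
        for _ in range(steps):
            capability += rate * capability
        return capability

    first_mover = grow(1.10)   # starts 10% ahead
    second_mover = grow(1.00)

    print(f"{first_mover:.1f} vs {second_mover:.1f}")  # ~129.1 vs ~117.4
    # The ratio stays 1.10 forever, but the absolute gap widens every step:
    # exponential growth preserves relative leads and explodes absolute ones.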
Louis J (Blue Ridge Mountains)
Wanton killing is wrong. Why make it easier and easier to stomach? How about something to prevent so much war and killing? Or are humans just not worth the effort?
Nunya (NYC)
Yeah, you're right. Let's tell the Chinese and the Russians that. Then we can all agree to completely disarm and disband our militaries. Right after that, we can all get together and sing kumbaya as one big happy family of the human race.
Visitor (Tau Ceti)
Nunya:

You're right, let's just find more insane ways to murder each other. Because, lol hippies!
angel98 (nyc)
@ Louis J Blue Ridge Mountains: A good many of us humans (those of us who lack the death-wish-insanity gene) definitely think it is well worth every effort imaginable and yet to be imagined to give up this human addiction to self-destruction and destruction of the planet.

But if I was a fish, a mammal, a plant, planet earth or any other living thing - I would have grave reservations.
AH2 (NYC)
Resistance is futile. Throughout all of human history, the most advanced technologies have first been effectively applied to military use. The very core imperative of all military strategy is to seek any advantage over real or potential foes. All other considerations are secondary. The best we can do is apply to robots the traditional ethical and humanitarian standards we say we apply to all other military operations. Those that cannot be applied to robots simply won't be.
Kansas Stevens (New York)
This is a description, not a justification.
DaveD (Wisconsin)
The attitude of every dictator the world has ever known.
Shaun (NY)
Although you make a good point, the military is not the only mechanism to effectively drive tech. Think of space exploration, etc.