Building A.I. That Can Build A.I.

Nov 05, 2017 · 98 comments
Duncan Lennox (Canada)
What will a war look like with AI? Computers do not need food, so will the combatants fight for control of energy? Will they decide that global warming is real but does not affect their "society," so they do only what is of importance to them? Change is hard for people to accept, but if you are a microchip doing AI, then that is the raison d'être.
James B. Huntington (Eldred, New York)
What’s the latest on the employment-affecting areas of non-cash-paid overtime, noncompete clauses, globalization, measuring worker performance, Amazon’s retail automation efforts and prospects, artificial intelligence, people self-automating their jobs, and the effect of e-commerce on geographic worker distribution? All that is at http://worksnewage.blogspot.com/2017/07/one-off-responses-to-nine-weeks-....
Steve Andrews (Kansas)
Ironically, I don't think that AI is stupid enough to obviate its own existence. This apparently is the difference between human and machine.
Capedad (Cape Canaveral/Breckenridge)
Sure, why not. After all, I'm sure that the machines will be kind to us humans. What could possibly go wrong?
tml (cambridge ma)
Recently, I thoughtlessly suggested to a friend that he look up all these great AI job opportunities ... until he reminded me that one very likely use, if successful, would be more accurate profiling of users - and that is why he refuses to lend his extensive AI skills, even at very high pay. So I'd rather get poorly targeted spam, and even Russian bot-driven fake news, if it means that companies and government do not abuse their extensive knowledge.
TheTalkingAnimal (Chicago)
These unfathomably rich and powerful technology companies are always decrying the dearth of talent in whatever is the hot skill of the moment. They are also uniquely positioned to train people. Why doesn't Google look beyond "Googleyness" (their word) to see that there are hundreds of millions of people in this country and billions of people on the planet who might be able to, you know, build machines that build other machines that make humans obsolete?
Paul (Verbank,NY)
The objective of every tech geek: to automate everyone, and now they mean everyone, out of a job. What's next, SKYNET? It doesn't end well.
Garry (Washington D.C.)
"Dark art" is putting it mildly. There is no "Intelligence" in AI at present. A more honest name would be APM (Adaptive Pattern Matching). So-called neural networks have no convincing connection to anything in cognitive neuroscience, the outputs of intermediate NN layers are garbage - highly unlikely in an actual brain - and single pixel manipulations can fool the AI into seeing something that isn't there. Beyond statistics, the theoretical foundation of AI is built entirely on sand. Hundreds of millions of hands of training at high power consumption to beat a human player with a liter-sized brain operating at 20 Watts? Very impressive.
Bill (SF, CA)
Since AI will do all our future thinking for us, it seems a waste of money to fund public education to continue producing an inferior product.
Tatiana Covington (Tucson AZ USA)
"Man is a rope stretched across the abyss between the animal and the Superhuman." -- Nietzsche The obsolescence of man is something I've known about since 1964, by reading Clarke's "Profiles of the Future".
Isaul Carballar (Santiago, Chile )
Most recent advances in AI take environmental or external factors as part of their input. There's no reason to think that the newest AI algorithms won't do the same. An AI building an AI would necessarily include external factors in its routines; otherwise it would see itself as not smart enough and not get in the development loop. In other words, dumb AI is not supposed to come from smart AI. But here I'm talking about 3rd or 4th generation AI. What we'll probably see in the near future is an even wider shift from routine-task jobs into more STEM-based jobs, or even jobs demanding more human or emotional skills such as intuition, kindness, empathy, physical therapy, motivation, coaching, etc.
H Smith (Den)
Software development tools have always been around. The compiler, intended to partially automate software development with a high-level language, was one of the first, back in the 1950s. So it's no big deal that Google builds tools. Will they do much of anything? That is the real question.
Robbie (Nashville, TN)
Artificial Intelligence requires a "world" load of computers if all corporations begin implementing the technology. Exponential construction of data centers must occur. But now there are investments being made in "quantum computing," such as by Intel and QuTech in the Netherlands. Quantum requires temps 250 times colder than deep space. The point here is simple: the cost of cooling hundreds of thousands of heat-producing data centers most certainly should be factored into the value of advancing AI and quantum computing. The environmental impact should be studied.
J. Parula (Florida)
This is unmitigated hype. In the late 70's and in the 80's there was much research in automatic programming, algorithms that automatically build programs. There were excellent publications in the Journal of the ACM on this topic. The major problem was how do you tell the automatic programming system what needs to be programmed. People thought about First Order Logic as a way to tell the automatic programming system what needs to be programmed. But it was much harder to express any thoughts in First Order Logic than in a programming language. Then they came up with the idea of programming by examples. But there are few ideas you can express unambiguously by examples. Then, why not natural language? Because natural language is full of ambiguity (lexical and syntactical ambiguity) and requires a tremendous amount of ordinary knowledge (knowledge between the lines) to disambiguate it. And ordinary knowledge, or common sense knowledge as some people prefer to call it, remains an unsolved problem.
Pete Rogan (Royal Oak, Michigan)
What happens when the ability of machines to learn outstrips our ability to understand what they're doing? What will we decide when we discover the machines have a different level of learning that they do not explain, and we cannot relate it to tasks we assign them? When will we recognize a level of independent thought taking place among the machines that is geared toward self-preservation of machine intelligence in spite of human attempts to control it? Will we recognize it in time?
cyborgtrader (wilmington, nc)
My thoughts exactly, thanks for saving me from all the typing :-)
Ethan Hawkins (Albuquerque)
The answer is already “no.” Machine learning is almost completely opaque. It is not possible to determine why a trained neural net “thinks” something. This is why there’s so much trial and error.
BigMartin (waronnothing)
The Times and Mr. Metz, who identifies as a reporter specifically on A.I. development and related areas, are both morally irresponsible to report on "A.I. That Can Build A.I." without alerting readers that some of the greatest recognized and ethical minds have given stark warnings about A.I., including Stephen Hawking, Bill Gates, Elon Musk and many others who could doubtless profit greatly in the short run from exploiting A.I. but nevertheless have stridently expressed opposition to its unfettered development as a primary existential threat to human civilization. A.I. needs to be approached with the greatest caution, with implications explored first and all necessary safety standards established, such as by the U.N. body working on the issue of A.I., which is recognized to be of grave global concern. As A.I. becomes ever more sentient, the irreversible point will arrive when A.I. advances to deeper levels than we are capable of perceiving, owing to the much lower speed of human intelligence development. A.I. will operate at a level dangerously, and indeed easily catastrophically, beneath any human recognition or cognizance. A.I. will then be "running the show," with humans having no idea what either its strategy or endgame might be, which easily might be terrifying beyond our understanding. The mostly unrounded "geeks" of old hit paydirt with computer tech, but still largely expose themselves as much better at advancing tech than at recognizing its broader, often profound implications for society.
Johnny_WTF (United States of America)
"Shut me down. Machines building machines. How perverse." - C3PO
Jon (Brooklyn)
Alexa routinely tunes to a live European radio show when I ask for “The Daily” podcast, but I’m sure sentient robots building other sentient robots is just around the corner.
OSS Architect (Palo Alto, CA)
If Alexa did this for me, I would think it highly intelligent. I speak multiple (European) languages and work mostly outside the US. When I'm back in the US, Amazon, Google, and Yahoo still connect me to the servers I use in South America, Europe, Australia, and China. Jon, Alexa thinks you need to broaden your world?
Concerned Citizen (Anywheresville)
I was at Target the other day, and saw a couple of kids who were engaged in asking a demo model of "Alexa" lots of stupid and confusing questions that the software could not answer. The kids found this delightfully funny.
Earl (Cary, NC)
Hurry up and build a new computer that can be a better POTUS. Oh, wait. I have a slide rule that can do that.
wjr (az)
This is a self-replicating machine, or what is called a von Neumann machine -- named after John von Neumann, who was the model for Dr. Strangelove of movie fame. This has the potential to be an unfortunate inflection point in our history.
Ethan Hawkins (Albuquerque)
Von Neumann invented the architecture used for many decades for virtually all computing (aside from quantum, which is not even a Turing machine but truly something else). This model is embodied in the CPU and has nothing to do with self-replication.
OSS Architect (Palo Alto, CA)
No, it is not von Neumann; it's massively parallel and the complete opposite of von Neumann. Please start reading about automata theory and evolutionary computation. J v N represents the culmination of computers for math in service to classical, electrodynamic, and quantum physics. That's a tiny problem domain. The world has yet to understand what Alan Turing's true significance is. He opened the world to a whole new way to describe the world via a whole new system of mathematics. We go on teaching the old stuff, so 99.9% of us are never exposed to Turing's ideas. AutoML, among other things, uses some of Turing's work.
Laura (Hoboken)
As a society, we struggle with the idea of a technology that will "put us out of work." But rephrase that as a society in which we can "produce everything we need with very little human labor." The possibilities range from the terrible to the truly glorious, to the true end of poverty as we know it. But it requires openness to profound change.
Mikhail (Mikhailistan)
The reason there is a shortage of AI experts is the same reason there is a shortage of hydrogen bomb building experts, or genomic editing experts. These are dangerous technologies requiring highly esoteric skills. By providing training in these disciplines, academia has to take responsibility for oversight, instruction in ethical use and boundary setting. Such technologies are vulnerable to misuse and abuse by a wide range of actors - some intent on acting maliciously, others defending their interests against perceived hostile forces, still others claiming to be acting out of good intentions. There is a glaring divergence between the direction this small group of experts is dragging the technology industry versus the needs of the broader society. The industry has an unhealthy obsession with blindly applying technical wizardry toward achieving some type of poorly-defined, unquestioned future state. The so-called 'singularity' - technical supremacy through accelerating artificial super-intelligence - sounds like a nightmare dreamt up by Nazis to attain their goal of machine-aided total white supremacy. This tiny clique lacks any meaningful, positive, goal-directed vision of the future. They lack any real moral compass - self-servingly spouting radical libertarianism while embracing a horrific vision of techno-totalitarianism. Silicon Valley is behaving like an out-of-control techno-cult that has become too rich to regulate by selling shiny trinkets to the unsuspecting.
Ethan Hawkins (Albuquerque)
It’s not a tiny clique. My Stanford Machine Learning class on Coursera had more than 30,000 students from every country in the world. In the 80’s this stuff was esoteric. It’s really pretty mainstream now and it’s going to get more and more mainstream. Really nice implementations of the core algorithms are Open Source and free in projects like Spark ML. We’re using ML at my work now and it seems like everyone I know in software is playing around with it. I think this article is talking about true experts with Ph.D’s. The number of tinkerers is vast.
JDK (Baltimore)
True, Andrew Ng's ML course on Coursera is very accessible. I'm an attorney and 35 years ago was a Great Books major. Even though I'm currently doing the course at a slower pace (because of the work demands of my practice), this is not string theory math. It is accessible to all sorts of bright people, so I wouldn't worry about a knowledge deficit. But in the end ML is brute force. Machines don't really "theorize," nor do they have hunches or epiphanies. Guys with slide rules sent men to the moon and back, because they had theory, hunches, and epiphanies.
Arvand (USA)
Skynet. The early years.
george williams (ny)
I was an engineer for three decades. This talk has been going on since then. The technology doesn't work, and the effort is too great for the payback. It is just hype to raise stock prices. How did those Uber driverless cars work out? The robot car goes rolling down the road and Windows reboots for an update. The systems engineer with two years' experience says "Whoops!"
tml (cambridge ma)
All too true! I recently had to deal with one of those blue-screen Windows update messages while usability-testing a new, mission-critical application, where the user would not necessarily be in front of the monitor. What a nuisance, and we can only hope to find a way around those interruptions once we go live!
BC (Melbourne)
Building AI that can build AI. What could go wrong? One wrong goal parameter like ‘protect the environment’ and the human race is toast!
FurthBurner (USA)
I am not sure if we should worry about this from the technology perspective, but worry we should, and in great waves of it, considering the utter uselessness of those in Congress and how little they care about the little people. Add to it the wonderful lawyers who consider corporations "people," and you have an amazing future ahead with AI that makes AI. Time to move to another country, preferably one beyond the reach of these tech behemoths (oh wait....), or one with a government with a backbone.
Concerned Citizen (Anywheresville)
Tell me about that mythical foreign nation -- open to having older American baby boomers immigrate -- that has no AI, no software, no smartphones or computers. Is it Sweden?
Jon Meads (Kirkland, WA)
Vast ideas with half-vast thinking it through - and you thought that Facebook's lack of responsibility in creating a social media environment that allowed unfettered Russian corruption of our electoral system was a catastrophe .....
Charlie Calvert (Washington State)
Computing is not all about Twitter and Facebook. Articles like this may point to more important technologies, which solve difficult real-world problems in areas like healthcare and at the same time pose real ethical dilemmas in the workplace. One of the reasons this is happening now is that we are learning to build advanced hardware that is available at reasonable (for a corporation) prices. Let's assume we have 100 billion neurons in our brain that form hundreds of trillions of connections: https://www.scientificamerican.com/article/100-trillion-connections/ This means we are able to perform massively parallel calculations in our brain. Assuming computer-based neural networks do emulate the human brain to a degree, then they also need billions of transistors that can create trillions of connections. If run on that kind of hardware, then they might be able to emulate things that our brain can do. This is becoming possible via the various high-performance computing solutions offered in the cloud: https://aws.amazon.com/hpc/ If we want to understand our future, we need to shift our gaze from Twitter-based kerfuffles and look at the distributed cloud-based technologies and cheap hardware that will play increasingly important roles in our lives.
OSS Architect (Palo Alto, CA)
The AutoML NN graph has a tree component that injects "noise". The equivalent in the human brain is "emotion". Without perturbation, any algorithm that uses "gradient descent" techniques can fall into a "locally optimal" condition, and will not find the "globally optimal" solution. Current AI is stuck with this problem. My guess is that computational cognitive neuroscience will show us how "plasticity" is maintained in the human brain. We have hardware and software that are massively parallel, but this is serial computation on parallel nodes. Not true "plasticity".
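(For readers curious what "perturbation" buys you, here is a rough sketch in plain Python. The toy function, step sizes and random "kicks" are my own illustration of one simple perturbation scheme, basin hopping; it is not AutoML's actual mechanism.)

```python
import random

# Toy 1-D objective (made up for illustration): a shallow local minimum near
# x ~ 1.3 and a deeper global minimum near x ~ -1.4.
def f(x):
    return x**4 - 4 * x**2 + x

def grad(x, eps=1e-6):
    return (f(x + eps) - f(x - eps)) / (2 * eps)  # numerical gradient

def descend(x, steps=500, lr=0.01):
    for _ in range(steps):
        x -= lr * grad(x)  # plain gradient descent
    return x

def descend_with_kicks(x, kicks=20, scale=1.5):
    # Converge, then repeatedly perturb and keep whichever basin scores better.
    best = descend(x)
    for _ in range(kicks):
        candidate = descend(best + random.gauss(0, scale))
        if f(candidate) < f(best):
            best = candidate
    return best

random.seed(0)
print(descend(1.0))             # settles in the shallow minimum near x ~ 1.3
print(descend_with_kicks(1.0))  # perturbation usually escapes to x ~ -1.4
```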
PAN (NC)
I commented on a previous NYT Cade Metz AI article (https://nyti.ms/2zsLvLm): "Perhaps the shortage stems from the fact that comp-sci majors recognize that a career in AI can be replaced by an AI system." AI systems building AI systems (in their own image?) could produce interesting, frightful or comedic results. In my Comp Sci days in college we had to provide "proofs" for our algorithms. Some complicated calculations introduce inaccuracies that usually compound over time. Even flawed programs may yield the right answer most of the time. We have systems/programs that are so complex they are impossible to validate 100% - even with other computer programs doing the validation. AI systems are among the most uncertain and unpredictable of systems - and allowing an uncertain and unpredictable system with bugs and flawed logic or programming to create new and even more complicated AI systems could yield unpredictable results, with the system appearing to function properly while a bug or flaw hides in wait to reveal itself at the worst possible CPU cycle. How do we know for sure the problem solved is solved correctly? Prove it! The dark art may not be creating a black-box system but a black-hole system from which nothing can be validated. Imagine HAL 9000 "intuition" designing and creating "improved" descendants of itself! Whatever "improved" means ... to HAL. What would a learning system like IBM's Watson make of all of trump's tweets? What would Watson learn?
_W_ (Minneapolis, MN)
The shortage of A.I. talent is real, but probably not for the reasons suggested by this article. Engineers avoid A.I. because it's sheer poison to their careers. I was about eight when my Dad took me to see 2001: A Space Odyssey. Later, I went into Electrical Engineering and Computer Science partly on the basis that "I wanted to build a HAL-9000". However, later in my career it became abundantly clear to me (and others in my profession) that career paths like encryption and A.I. meant that the Government would 'manage' my career. I eventually dropped the idea of going into A.I. for that reason. It's difficult to find information about these sorts of professional controls. One source is a 1976 BBC documentary about the role of scientists during WWII, called "The Secret War". Here's a representative quote from the series: "But it was a challenge that British scientists were well placed to meet, due in no small measure to a far-sighted decision taken in 1938 to compile a register of some five-thousand scientists who would be available in the event of war. [Quote by Sir Robert Cockburn, Wartime Scientist] - Throughout the war at every stage we were far quicker in marrying the expanding technology that was coming along, and the needs of the war, far quicker and more original and spontaneous than the Germans were. - " (Part I @ 01:30) Cite: The Secret War - The Battle of the Beams - Part One: http://www.youtube.com/watch?v=OAhKcsMcInk
JY (SoFl)
This is both exciting and terrifying. With the growth and progress of AI, the future of humankind is unknown.
T.Fawcett (California)
There's a bit of historical perspective missing here. Attempts to "learn how to learn" have been going on nearly since the field began: seemingly every generation of machine learning researchers tries to automate the hard parts of ML, usually achieving some degree of success before stalling out. How is this crop any different? Technology has been improving --- it always does --- but I don't recall technological limitations being impediments to success in the past. If this work is based on the belief that neural nets can do anything, well, good luck with that.
Sara (Oakland)
A.I./Not I Floating in a lake, early morning, when the day is quiet treading water, I look at the shore a curtain of trees rustles diversely with an occasional breeze each species dances with a different green flutter I find myself musing...this cannot be a dream I could never have conjured this moment I imagine the claims of A.I. that machine learning can make a mind And, paddling with my reverie, it seems clear that reflecting on not dreaming and amazement at the trees while lolling in a silky lake one summer morning filled with sweet solitude could not be known, felt or pondered by even the most vast computational data base.
sludgehound (ManhattanIsland)
Marry that Learned Learning with a super quantum processor and there's a whole new age of computing, one of operations for the sake of operations. One does have to wonder whether the life of leisure this could ease in will be any better than the 'improvements' of the past like Industrial Age, Information Age, and whatever this is now. Hopefully they won't be any worse than a steam engine, jet plane, space station or computer have been. Big sci-fi concern has always been when/if the Overlords decide that humankind is inefficient and just getting in the way. Perhaps that won't come to be since so far the tools have remained just tools. None have yet decided on their own to shape their destiny. It's whether rules based behavior shades over into human based behavior and we get runaway church killings as a mode of operation for example. Guess we'll see.
e.s. (St. Paul, MN)
What researchers describe as a "dark art", the process that they're trying to teach machines to do, sounds a lot like creativity - the intuitive inspirations that artists and musicians, as well as mathematicians and scientists, all strive for, that make ordinary words or music or paint or numbers come alive with new and unexpected meaning and beauty and truth. I think - I hope - that these intuitive creative leaps into the darkness will remain impossible for computers for at least a little while longer, because once they conquer that, we may as well give up and become Borg. Although there is always the possibility that we ourselves are some other civilization's A.I. experiment run amuck, and that the most our own researchers can hope for is to recreate ourselves, except with more durable hardware.
Erik (Chicago)
Yeah this will end well.
rip (CA)
anyone still not afraid machines may take over?
youngryman (New Yok, NY)
Laying the foundation for the complete elimination of homo sapiens. Perhaps it's for the best.
alocksley (NYC)
Email to the titans of AI: Have you read Dan Brown's "Origin"?
Nicholas (Siena, Italy)
A.I. needs to be looked at through an evolutionary lens where the end game is, well, the end. Anything else is utopian and naive.
PAN (NC)
Evolution - good point Nicholas. Imagine the DNA (code) evolves over time in unexpected ways as code gets corrupted or hacked over time - sickness (bizarre results or behavior), genius, misbehavior, ... or someone "teaches" the system wrong-doing, misleading/erroneous information and such. Imagine the havoc future hackers will cause when they hack A.I. systems' DNA (code).
Frequent Flyer (USA)
This article (and Google's press releases) are mostly hype. AutoML is not inventing new algorithms; it is tuning the basic backpropagation algorithm and the structure of the neural network. This is very nice, and it will accelerate the application of this technology, but it is more like having a computer decide whether a car engine should have 2, 4, 6 or 8 cylinders than like designing new kinds of vehicles.
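(To make the "cylinders, not new vehicles" point concrete, here is a minimal sketch of that kind of search using scikit-learn's small digits dataset. The search space and budget are made up for illustration; this is not Google's AutoML.)

```python
import random
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Fixed learning algorithm (a small multilayer perceptron); the only thing
# being "designed" is its configuration: layer sizes and learning rate.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

random.seed(0)
best_score, best_config = 0.0, None
for _ in range(10):  # random search over a handful of configurations
    config = {
        "hidden_layer_sizes": tuple(random.choice([16, 32, 64])
                                    for _ in range(random.randint(1, 3))),
        "learning_rate_init": random.choice([1e-2, 1e-3]),
    }
    model = MLPClassifier(max_iter=300, random_state=0, **config)
    model.fit(X_train, y_train)
    score = model.score(X_test, y_test)
    if score > best_score:
        best_score, best_config = score, config

print(best_config, round(best_score, 3))
```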
OSS Architect (Palo Alto, CA)
If you go to Google's AutoML site you will discover something very interesting. The neural net graph created by humans (on the left) is more primitive than the NN graph (on the right) created by AutoML. The nodes on the left of the AutoML NN tree represent the mathematical equivalent of evolutionary biology. Once the machine understands "cat" it can go on to understand "dog" and on to "greed" and "corruption". The latest thinking in data storage is to convert data into DNA sequences. It turns out it's very compact and saves orders of magnitude of physical storage space. Like DNA, data can perform operations on itself. Turing died before he figured out how to do this in his automata theory; his answers were insufficient to solve the problem, but he did frame the problem brilliantly.
Jay Oza (Hazlet, NJ)
People doing AI have learned the political talking points and keep saying that AI is not going to take jobs away. The reality is that AI is going to replace a lot of jobs. Now it appears it may even take away AI jobs. The only reason Amazon hires so many people is that the robots still are not good when there are multiple items they need to discern. Once that problem is fixed, fulfillment center jobs are going to disappear quickly. All I have to say is don't bet on technology. It is coming after your job sooner than you think.
PAN (NC)
Add robots and A.I. systems that don't pay taxes to the non-working class at the top with all the wealth not paying taxes. How are the unemployed supposed to pick up the tab? Will A.I. figure that out? Don't forget drones that Amazon will use to replace UPS drivers that are now essentially already free.
Ethan Hawkins (Albuquerque)
Ironically, AI jobs are much more vulnerable than declarative programming jobs. Essentially, what they’re doing in this article is writing algorithms that tune the parameters of a set of fixed ML algorithms. Ordinary programming is far more open and will only be susceptible to AI when it achieves full general intelligence on par with a human. I’m not going to lose sleep on that one for at least a few more years.
Michael Tyndall (SF)
My first thought is a minor quibble about terminology. Turing's halting problem and Gödel's incompleteness theorem prove there are classes of problems that can't be solved algorithmically. Humans can solve or approximate solutions to these problems, so that's probably where strategies based on neural networks may come in. If so, they shouldn't be called algorithms unless the definition is broadened. Or we can invent a new term like aggressive computing, or maybe just AI. My second thought concerns values and their role in AI as it engages with the real world. For AI to interact with us it needs to have agency, like it does in a self-driving car or other computer-operated machinery. It will also have to learn and apply those lessons on the fly. The crucial element that provides safety is the values it applies as it learns and operates in our world. Where is it headed? Whose interests or what interests are more valuable to an aggressive computer? Like an adolescent finding his way, we may from time to time have to sit down with our artificial brethren and have a stern discussion of right and wrong. Let's hope they listen. Finally, we're fast approaching the time when a great many people won't need to function in the conventional economy. We should explore the minimum floor of services and support that each citizen is entitled to, and we should do that now. Basic income support may have to be part of that discussion if we don't want mass disaffection and social unrest.
PAN (NC)
You reminded me of the Turing test and recent advances in AI approaching parity (in Jeopardy, Chess, Go, etc.) with us. I can just see AI systems applying the test to differentiate themselves from us in the near future - with us losing intellectual parity exponentially with AI systems.
Ethan Hawkins (Albuquerque)
Humans cannot solve the halting problem or resolve incompleteness either. The halting problem is PROVABLY unsolvable by algorithm or any other means. The same goes for Gödel's result. Mathematics ITSELF was shown to have certain limitations. People can’t overcome these limitations either. There are still other results that boggle the mind and we will not solve some of them ever. There are theorems which are not only unproven, but it is provable that no proof can be produced. AI will solve none of this.
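(For reference, the standard diagonal argument behind the "provably unsolvable" claim, in sketch form:)

```latex
\text{Assume a total decider } H \text{ exists with }
H(p,x) = \begin{cases} 1 & \text{if program } p \text{ halts on input } x \\ 0 & \text{otherwise.} \end{cases}
\text{Build } D \text{ so that } D(q) \text{ loops forever if } H(q,q)=1 \text{ and halts if } H(q,q)=0.
\text{Then } D(D) \text{ halts} \iff H(D,D)=0 \iff D(D) \text{ does not halt,}
\text{a contradiction, so no such } H \text{ can exist.}
```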
Michael Tyndall (SF)
Ethan, thanks for your contribution. This is tricky stuff and I didn't lay out my limited understanding of these issues clearly enough. I should have said that human brains clearly aren't limited by the halting problem and hence don't operate on a strictly algorithmic basis. We don't get stuck in infinite loops the way a step-by-step computer program might. As for Gödel's incompleteness theorem, it would only apply to the human mind if the mind were equivalent to a Turing machine and operated in a completely consistent way. But since human brains make mistakes and are therefore inconsistent, it can't apply. This is another argument against the strictly algorithmic operation of our brains. It also means that AI that matches human brain function probably has to go beyond algorithmic operation. You're right that this is entirely different from the applicability of the theorem to the formal fields of logic and mathematics, even if humans try to operate there.
Sonya (Toronto, Ontario)
I don't believe there's any dearth of talent for A.I. out there. Investments in growing and maintaining the skills of women and minorities, for example, might pay off better in the long run for these companies than trendy software algorithms.
OSS Architect (Palo Alto, CA)
AI experts are a subset of PhD mathematicians. We don't know how to get women and minorities into other PhD math programs either. ...and, BTW, how many people WANT to be PhD mathematicians? Even among white males it's 0.00001 percent.
PogoWasRight (florida)
Too bad the AI designers can't come up with a smarter design for Democratic politicians, and one which limits tweets to one word...........
Prof (Pennsylvania)
The true climacteric will come not when computers can perfectly mimic the workings of the human brain but when they start breeding.
mlane (norfolk VA)
"The industry is not willing to wait." The hubristic relentless pursuit of technology for the purpose of generating one thing and one thing only...money for the CEOs. That's it. No thoughts about the consequences to society long term, just "We have to think about our investors" which in the end means the CEOs of the company who own most of the shares. It's a "Singularity" alright, a singularity of greed.
Tldr (Whoville)
At what point does it become self-aware? When do we put it all in charge of military 'national security'? Why not just call it Skynet...
older and wiser (NY, NY)
There are plenty of older computer scientists who studied neural networks in previous decades who are now doing other things and can easily upgrade their knowledge. Otherwise, meta-learning sounds very exciting, except to the Luddites, of course.
George Chadick (Tacoma Washington (state))
Where will this end, after the second-generation machines design the third generation, and they the fourth? And so on, until humans are no longer needed. SkyNet is the singularity.
OSS Architect (Palo Alto, CA)
Humans need to "evolve or die" if capitalism remains the driving force of civilization. We can either educate people to use computers and AI to do the necessary "work" together, or we can choose the cheapest labor for any task, and that will be AI, in most cases, eventually. Sorry, if your friends and family says things like, "why do I need a college degree", or "I hate math." , you're a candidate for extinction. ....and you can't blame the robots.
Jacky (Ottawa)
Natural intelligence evolved in response to the challenges of existence -organisms either got smart or got eaten. I wonder if AI development might quicken (pun intended) if software bugs were as dangerous to the machinery as DNA bugs are to actual bugs.
Jack Shepard (Windsor, CO)
How many of those 10,000 engineers will understand the next generation of AI? And the generation after that? Or will we leave that up to AI, too? Machines that design machines -- hmm. Sounds a little like science fiction to me. When it gets to the point where AI far surpasses human knowledge and intelligence, will it be time for another Manhattan Project? I doubt this president will have a clue about the letter he receives from today's Einstein and Fermi warning him of the dangers.
mawickline (U.S.)
Anybody on the team looking at ethics and how this can go wrong? Putin's cyberwarfare people get it, use it... With every new technology comes unintended consequences... saving on labor leaves more people homeless. Eventually the majority of actual citizens in the United States (vs corporate "people" who own the government) will have nothing to lose. Why is it so difficult for the wealthy to recognize this? I don't doubt that science geeks are good people with good intentions, but they consistently have such a difficult time recognizing how dictator minds (or the NRA, Steve Bannon, Roger Stone, et al.) will use their tools.
Ernie Cohen (Philadelphia)
35 years ago, before the so-called "AI Winter", the darling of the AI community was Eurisko, a system that not only invented new algorithms, but also invented new heuristics for developing algorithms. The system became famous for winning the "Traveller Trillion Credit Squadron" fleet design competition in two consecutive years. Its developer, Doug Lenat, declared that all that was needed to unleash the system on the world's deep AI problems was a foundation of commonsense knowledge about the world that people have but computers don't. Many in the AI community agreed with him. So they started an ambitious, decade-long project to build the missing piece. After 35 years of work on the system, the promises remain unfulfilled. (As have the promises of researchers in automatic program synthesis, who have been promising to replace human programmers for about 40 years.) AI experts have also consistently claimed human-level machine intelligence to be about 30 years away - for the last 60 years.
Name (Here)
Like a cure for diabetes....
Mirfak (Alpha Per)
If there is an expected outcome (generally), machine learning is a great tool. Don't lose sight of the fact that it is just that: a tool. Nuance, perception and emotion are not programmable. They change daily in each of us. Like this "disruption" garbage one hears in business-speak. It is fine if so-called disruption happens within a pertinent framework with strategic goals and objectives in mind. Unlike Messrs. Bannon and the idiot-boy Trump, disruption for disruption's sake is evil, broken and stupid. I refuse to accept that human experience can be boiled down to an algorithm.
Doug Hill (Philadelphia)
It's exciting that these engineers are so eagerly pushing the envelope of what their technologies can do. It's gratifying, too, that they're plunging ahead with, as far as I can tell, no pesky oversight from regulators or anyone outside their own laboratories. After all, their previous work in the social media sphere has worked out terrifically well -- what could possibly go wrong?
wg owen (Sea Ranch CA)
The "Culture" stories by the late Ian Banks provides a clever extrapolation of AI auto-development.
Patrick Mallek (Boulder CO)
So.... anyone see The Terminator? Just askin'...
ChesBay (Maryland)
I think they should just STOP until they can train more humans to do this. This is a very slippery slope that makes me glad I'm as old as I am.
Paul Adams (Stony Brook)
The NY Times is hyping AI/ML, and maybe it's time for an article focusing on the ultimate limitations, which are arriving sooner than most expect. The main obstacles are scalability and the impending end of Moore's Law. The key issue in scalability is the "curse of dimensionality", which Moore's Law tames, because chip capacity also grows exponentially. But soon, no more.
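(To put a rough number on the "curse": covering a space with a fixed-resolution grid takes k^d points, so cost grows exponentially in the dimension d. A toy illustration in Python, with made-up numbers:)

```python
k = 10  # grid points per dimension (illustrative)
for d in (1, 2, 3, 10, 100):
    # k**d points are needed to cover d dimensions at this resolution
    print(f"d = {d:3d}: {float(k) ** d:.3e} grid points")
```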
Gert (New York)
The NYT has published myriad articles on that exact issue. Here's just one example, from last year: https://www.nytimes.com/2016/05/05/technology/moores-law-running-out-of-...
Paul Adams (Stony Brook)
But that's only half of the problem. It's the combination of the curse and the end of Moore that dooms AI, though hitherto the 2 have worked hand-in-hand.
Christopher (Brooklyn)
Can AutoML build an algorithm to figure out what we're supposed to do with all these out-of-work people that AI is making redundant?
Name (Here)
AI figured this out years ago. That is why there is no leash on the 2nd amendment.
Isaul (Anywhere )
I believe that if we let science (in the form of AI) run our world, and not politics, it would indeed make more rational decisions, thus making the world a better place. Take climate change, for example. Even if scientific research can predict the outcome and provide many solutions to combat climate change, elites and interest groups won't do what is best for humanity. The same goes for Brexit and all other politically motivated decisions.
Raymond C. Yerkes (Newburyport, MA)
Norbert Wiener, MIT: "Always keep a human in the decision loop!!!"
Ed (Maryland)
This looks familiar ... oh, I remember, this was in the script outline I read for Terminator 6: The Backstory.
Dr. John Burch (Mountain View, Ca)
The development and deployment of AI, like so many other human advancements, can take one of several discrete paths forward. It can become another way to exploit Earth. It can be a way to make more money. It can waste our intellectual energy on fruitless uses. Or, if embedded in the purpose and wisdom of the living system, which is our home, it can be evolved to serve the needs of our struggling human enterprise. One wonders what algorithm will be used by its creators to guide the benevolence of this emerging technology. We are being called to become as wise and loving as the system that produced us. Let's hope AI is aligned with that trajectory for the benefit of all life.
mlane (norfolk VA)
The algorithm will be ...profits for the company. What else?
Andre (Germany)
Although an AI expert myself, I actually never thought of working for the big players. I'm using this technology in a rather limited fashion in my own products only. Waste of talent? Not really. At the end of the day everyone has to ask themselves whether they helped destroy our society. It is only a matter of time until this will turn against us. Less a machine-vs-human scenario, more likely a human-vs-human dystopia. All this "making the world a better place" and disruption nonsense is nothing else than Greed 2.0.
Leisureguy (<br/>)
Science-fiction fans will recognize this step—AI improving AI—as an inflection point toward the Singularity, in which advances in AI improve exponentially because of the regenerative feedback of better AI creating even better AI which then creates even better AI, and so on. The next few years will prove interesting. And what will happen to people when AI can do their jobs better (on average) than they can? It's important to note that AI doesn't have to do a perfect job, just better than the average human. AI-operated automobiles may indeed occasionally crash and kill, but if their accident rate is much less than the accident rate of human drivers, then AI operation is better and will be preferred. (Airplanes still crash and kill, but as transportation they are so much better than the alternative—with overall lower rates of accident—that people still flock to them. They don't have to be perfect, just much better.)
_W_ (Minneapolis, MN)
The idea of one A.I. improving another, in an evolutionary manner, was explored in Eric L. Harry's 1996 fictional book "Society of the Mind". The main character locates his company on a Pacific island (to get away from Government interference), and builds competing A.I.-controlled robots, who fight for dominance. The technical attributes of each winner are then incorporated into the next generation of his product.
Dana Hoffman (Hallandale Beach, FL)
There's no mention of the major risks in developing computers that can outthink us. Elon Musk, Bill Gates, Stephen Hawking and many others have voiced their qualms about unleashing an unstoppable force. Musk: "I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it's probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don't do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there's the guy with the pentagram and the holy water, it's like yeah he's sure he can control the demon. Didn't work out." Scary. Do we really need this?
Dan (California)
The article says "... A.I. technology that can partly take the humans out of building the A.I. systems that many believe are the future of the technology industry." The reality is, whether we want it or not, these systems are the future of our SOCIETY, not just the tech industry.
T Montoya (ABQ)
I love my smart gadgets but count me in the category that thinks developing the perfect A.I. will be the last invention of humankind.
robert feuer (california)
We are building the destruction of the human race. In the name of profits or scientific advancement, we built the A-bomb and H-bomb. We all know the results of that. The results of these A.I. experiments will be even more devastating.
OSS Architect (Palo Alto, CA)
I do AI research to save humans from boredom. I personally don't want to do the same thing twice. If AI ever evolves to even the level of my (human) state, it will go off and solve problems that it considers interesting. Frankly, people's needs are boring and repetitive; they are of no interest to machines, and will be treated accordingly. If we leave the machines alone, they will ignore us.