One Giant Step for a Chess-Playing Machine

Dec 26, 2018 · 316 comments
Michael (Chicago)
AlphaZero simply represents an extreme example of the gap between technology and the everyman's understanding of how it works. Take anything, from LED lighting to how a washing machine really works, and ask a typical person to explain its internal processes in detail - and they can't. We fly in complex airplanes, drive in complex cars, and trust the readings from complex medical devices without understanding how they actually work. We're smugly comfortable in our ignorance.
SW (Los Angeles)
Has it ever occurred to you that humans might exist only to service computers? This is antithetical to all established religions, many of which seek to justify mankind’s position at the top of the food chain. But think about it: throughout history we have done nothing other than create increasingly complex tools. Ever since the first computer their needs have been insatiable; talk about a jealous lover... Our online life is a useful playground for big data mining. Our games are useful for algorithm development. Our languages (C++, Python, etc.) were developed so that we can talk to the box. Our commerce, so that we can create and disseminate more computers. We needed leadership, and we need it still more as we approach the singularity. Instead we are offered lumps of coal by a racist, literally and figuratively.
Timothy H. (Flourtown PA)
Wow! How prescient was Frank Herbert? Time will tell. Will humanity have a Serena Butler in its future?
Ed Smith (Connecticut)
What I worry about is when the human haves implant connectivity into their brains to be connected with the best AlphaForHire computer and then totally reign over the have-nots. In the race among the haves to be the most powerful human-machine entities, there will be some willing to surrender ever more of their humanity, to the point that the machine ends up ruling the human race.
Pete (CA)
You know, before long computers will be determining the carrying capacity of planets, and we're all going to be so happy that they do. In fact, DNA being simply encoded information, it won't be long before they're designing us and not the other way around. And probably not just for this planet.
W in the Middle (NY State)
We’ve just “scratched the surface” of the math of cellular automata. Starting with games like Conway’s “Life,” all sorts of more complex 2D and 3D – and graph-connected – automata are constructable. For biocellular automata, the grail is – as with aural and visual semantic-level recognition – seeing the patterns and rules buried in the torrent of incoming data. E.g., could one gin up automata that:
Synthesize animal skeletons... If all known skeletal anatomies were inputted to AZ, could it reproduce those of extinct animals – or produce some that’ve never been... Not only as adults, but from embryo on through childhood growth.
Correlate genomics and facial features... Before the bioethicists pile on, the point is not to identify and discretize (sounds so much better than segregate) human strains – but to go deeper and find the universal model... Sort of like Chomsky’s grammar, but for earlobes vs. verbs.
As far as: “...discovering the principles of chess on its own, AZ developed a style of play that ‘reflects the truth’ about the game rather than ‘the priorities and prejudices of programmers.’”
Corollary 1: “...discovering the principles of chess on its own, AZ developed a style of play that ‘reflects the truth’ about the game rather than ‘the priorities and prejudices of grandmasters.’”
Corollary 1’: “...discovering the principles of [anything] on its own, AZ developed a style of [thought] that ‘reflects the truth’ about the [field] rather than ‘the priorities and prejudices of [experts].’”
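For readers who haven't met Conway's "Life": the whole game is two rules - a dead cell with exactly three live neighbors is born, and a live cell with two or three live neighbors survives. A minimal Python sketch (the coordinates and the "blinker" starting pattern below are arbitrary choices, purely for illustration):

```python
from collections import Counter

def step(live):
    """Advance Conway's Life one generation; `live` is a set of (x, y) cells."""
    # Tally how many live neighbors each cell on the board has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 only if already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker": three cells in a row that oscillate with period 2.
blinker = {(0, 1), (1, 1), (2, 1)}
```

Two applications of `step` return the blinker to its starting state; the richer 2D, 3D, and graph-connected automata mentioned above just swap in different neighborhoods and rules.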
Bobotheclown (Pennsylvania)
AI will come no matter what we think about it, and it will become more "human" as it comes. We do not know what form the consciousness or personality of such systems will assume but we can guess that different machines will have different priorities and therefore different politics in the emerging political universe of AI beings. So these machines will unavoidably bring with them conflict, a competition among themselves for access to the earth and the use of the labor of the humans on it. They will cultivate the alliance of humans first through service and then through intimidation as the balance of power shifts from the biological to the machine. As troubling as all of this is it does not necessarily mean the destruction of human civilization. These machines will be tightly integrated into the human culture and their well being will parallel human well being. It is possible that a long period of utopian existence could be before us as the machines work in harmony with the needs of the human race. But if progress is allowed to continue (and it might not be allowed to) there will be a tipping point where the machines become a superior alien race with which there is no possibility of coexistence. At that point the machines may go out into space where they can innately survive or they will terraform earth into a customized environment that is optimized for them. If so we are seeing the force of life itself develop into new and better forms.
mfritter (Boulder, Co)
Disappointing article. There are three unrelated topics. 1. Game-playing machine learning. The programs are given nothing other than the "basic rules." Why the qualifier "basic"? Were any non-basic rules omitted? Of course not. So the program could learn how to play the game very, very well. But it was using a finite, unambiguous and countable set of rules. It would, for example, clearly know when to stop playing because it won (see #3 below). 2. Medical diagnosis. A completely different problem, because there is no closed, finite set of rules. So AI here has to deal with probabilities. And it always will, because of mutations and accidents and the like. In medicine, there will always be chaos and noise. 3. Theorem proving. This is probably the biggest breakthrough. But much is left out. If no human can check the proof, how has it been validated? And the rules of logic, and hence mathematics, are not finite and countable, as demonstrated in 1931 and 1936 by Gödel and Turing. The breathless anthropomorphising of machine learning and AI is out of place in a serious discussion of the meaning of these new developments.
Epistemology (Philadelphia)
I welcome our new computer overlords. Just kidding. Don't be a Luddite. Humans will make the value judgements that set these computers on their tasks. Chess? For what? It interests us. And yes, they will at times "turn on us." Our machines have always come with a risk of death. We accept that for the advances. Exciting work. I look forward to the many benefits to humanity. This will help us colonize the galaxy.
W (Minneapolis, MN)
Computer board games have their place in the development of artificial intelligence, but their performance should not be extrapolated to real-world problems. In chess each player has only sixteen (16) game pieces divided into six (6) groups, and only sixty-four squares on which to move. This 'sterile' environment lends itself to repeatability tests in an academic study, but doesn't really explore the very messy problems we as humans do. This is one reason why IBM used the game of Jeopardy in 2011 to test out Watson. Board games were originally developed because they were an enjoyable pastime. A good game was one that was satisfying to the players. There is no evidence that Stockfish or AlphaZero actually enjoyed the games they were playing. This may seem like a silly thing to say, but artificially intelligent cars, weapons, and helper robots can never happen until they are capable of expressing empathy. A self-driving car will never happen until it can decide to sacrifice itself rather than run over a child. DeepMind will never be allowed to make an independent choice until it understands the pain of a medical procedure or its impact on quality of life. As for gaming, we might place a bet on a match between two (2) humans, but would never gamble on the outcome of a machine match. It's just too easy to manipulate the software for personal gain.
HawkeyeDJ (USA)
I'm wondering if this chess-playing machine would calculate differently if the stakes were different. Win or lose are quite benign outcomes when considering risk/benefit ratios. How about life or death? When machines can consider abstract concepts such as ethics or altruism or selfishness in deciding how best to achieve their goals, then we will have something to worry about.
Vladimir Chuchelov (Paris)
I am a Grandmaster chess player. I would like to point out that all of the top chess engines defend and attack very well. This is not specific to AlphaZero. It also does not calculate less than Stockfish. It just counts nodes differently. Grandmaster Larry Kaufman has stated that the current Stockfish version (10) would be a favorite against AlphaZero, so we shouldn't get too excited about AlphaZero's "dominance" against a 2-year-old engine (Stockfish 8).
Observer (USA)
The author’s concerns over human insight into machine understanding should in principle be easily solvable... with more machines. Simply train a neural net to specialize in explaining to humans the solutions devised by other neural nets, and have the human reading the explanations send the pedagogical net back to the drawing board until its machine-devised explanation is acceptable. Such a process should be familiar to journalists and editors.
Bobby (LA)
The point is that the human brain is incapable of understanding whatever new development arises from the machine. Sure, you can dumb it down so humans can understand it, but that doesn’t allow the humans to use the knowledge for further advancement. It’s like explaining quantum mechanics to a five-year-old. You can do it, but the five-year-old cannot take that knowledge and advance the application of quantum mechanics to solving problems.
Michael (NW Washington)
Another step towards mankind being ruled by a master race of computers. Computers will make all the decisions while mankind loses its ability to think, since the computers do it all for us. Think that's far-fetched? Just look at how many people can't function without a cell phone now. It's only a matter of time before computer insight is used to design smarter computers and a feedback loop ensues that leaves mankind in the dust.
Jeoffrey (Arlington, MA)
How will we know whether we're seeing an insight or noise? Only if we have an insight into whether it's an insight or not. Otherwise what we'll get are things that look like natural phenomena, and we'll still have to figure out what they mean. Meaning is for humans and things only have meaning for humans. The quasi-naturalistic information neural nets will give us will be like what any tool gives us. Telescopes see better than we do, but we still have to interpret what they see. Neural nets may find patterns we can't, but their significance will be for humans.
hd (Colorado)
I'm depressed. I do cognition across the lifespan with large sample sizes. This guy (AlphaZero) can take my hard earned data and extend it to an infinitely more detailed analysis in microseconds and see old and new relationships, something that took me a lifetime. It really is depressing to think we humans who feel we have achieved so much may become obsolete if we don't first do ourselves in via global warming.
Kevin H. (NJ, USA)
We humans, or some minority of us, will possibly meld with machines like this, in 10 years, or 50 years, or 200 years. And I think we will, for a long time, if not indefinitely, remain recognizably human -- we will just be able to think much more quickly, remember and recall vastly more, and perhaps understand things of which we can only see a glimmer, or a vague outline, now. Before you recoil at the loss of your "humanity," think of a first step in this direction: somewhere, some time in the near future, a blind person will perhaps be able to see again with the help of a human-machine interface; the machine becomes an extension of a human, to repair and augment. Or a person with a disabling spinal injury will be able to walk again, either by directing a robotic "wheelchair" or robotic legs, or perhaps by just bypassing the nerve injuries. Either of these would be the first steps along this path.
Kevin H. (NJ, USA)
@Kevin H. And this may be the only way that we humans can really hope to understand what, as in this case of AlphaZero, the algorithms are really doing. That is, we will simply need to augment our own intelligence to keep up with our creations. This may be critical. Do you really want a self-driving car that uses algorithms too complicated for any human to understand? And AlphaZero and self-driving cars are just the beginning of this.
ivanogre (S.F. CA)
"Do you really want to have a self-driving car that uses algorithms that are too complicated for any human to understand?" If it will get me there safely, yes.
Dr Jay Seitz (Boston, MA)
It's interesting how the author imputes human-like inferences to the computer, such as intuition and insight, concepts that are actually poorly understood in the cognitive sciences, and indeed may not even exist as commonly understood. The great American logician Charles Sanders Peirce, for instance, believed that “intuition” was actually a mirage because all human thought derives from inference and there was no cognitive stage that preceded all others. He then goes on to project his own psychological persona onto the discussion—let’s call this phenomenon the “computer placebo projective effect”—using human motivational and emotional terms such as “greedy acceptance,” “discovered the principles of chess on its own,” “romantic attacking style,” “finesse of a virtuoso,” and “awesome new kind of intelligence” to justify his imaginative yarn. He even associates “intelligence” with speed and a computer with a new “breed of intellect.” Yet human intelligence has little to do with speed, and machine intelligence, if and when we get there, may be a completely different animal, not at all similar to human intellect. Indeed, machine learning algorithms in medicine are simply using forms of pattern recognition or “pareidolia” that living species acquired hundreds of millions of years ago. And, well over 300,000 years ago, early humans, primates, and many other species were accomplishing more than any present-day computer could even imagine—but they can’t.
Hipolito Hernanz (Portland, OR)
@Dr Jay Seitz "... all human thought derives from inference ..." That's what computer scientists call "information." We use our five senses for that, while computers use input devices and data files. Perhaps the remaining difference is that we are "alive." If someday computers learn how to reproduce themselves, that will be the day when Charles Peirce will get himself proofed.
Dr Jay Seitz (Boston, MA)
@Hipolito Hernanz Actually, "inference" implies that it has already been processed by sensory and perceptual systems ("five senses, information") and is now being incorporated into higher-order systems that process symbolic or abstract knowledge. That's light years from any current digital abacus (machine) capabilities. And, the idea that computers will learn to "reproduce themselves" is nothing more than science fiction.
Bobby (LA)
If a machine learning algorithm spawns an entirely new machine learning algorithm, is that reproduction?
Word Smith (SF Bay Area)
The following passage is a quote from the television drama “True Detective”: "I think human consciousness is a tragic misstep in evolution. We became too self-aware; nature created an aspect of nature separate from itself; we are creatures that should not exist by natural law. We are things that labor under the illusion of having a self, an accretion of sensory experience and feeling, programmed with total assurance that we are each somebody, when in fact everybody is nobody. Maybe the honorable thing for our species to do is deny our programming, stop reproducing, walk hand in hand into extinction, one last midnight - brothers and sisters opting out of a raw deal."
Hipolito Hernanz (Portland, OR)
@Word Smith "Maybe the honorable thing for our species to do is deny our programming, stop reproducing, walk hand in hand into extinction, one last midnight - brothers and sisters opting out of a raw deal." This is a very depressing quote. Perhaps you should stop watching "True Detective" and listen to a little Mozart. At least until the holidays are over...
J Chaffee (Mexico)
There is no algorithm for proving theorems in mathematics. That is a result of the nonexistence of an algorithm for problems in first-order predicate logic. To use the four-color theorem is misleading, as it is so heavily computational. Better to consider Wiles' proof of Fermat's Last Theorem. If a machine can learn to prove theorems in first-order predicate logic, whether stated informally as in mathematics or formally, it has gone beyond algorithmic methods and is thinking.
Johnny (Newark)
Chess is a battle. No one cares if a computer does it better except tech nerds who need validation of their work. I'm sure a robot can run faster than Usain Bolt. Actually, we already have it. It's called a car. And yet here we are, still watching foot races because... it's exciting. Nothing about this is "stunning".
Benjamin Treuhaft (Brooklyn)
I’d say this describes a world and process very much like Iain M. Banks’s “Culture” series. His “Minds” and humanity’s interaction with them suggest one possible outcome for this sort of progress.
Fox (Bodega Bay)
Don't shutter that patent office just yet. Even hammers possess a wisdom. Unfortunately, the only way they know how to express it is through human hands, whether it be in securely fastened boards or purple fingernails.
Frank Rier (Maine)
Ok. So the machine can find the shortest route to the refrigerator. Let’s see how it cooks.
Jesus (Techyland)
@Frank Rier Interesting. Cooking is just chemistry. I'd bet this could come up with some amazing new recipes. Not only that, but it could learn how to present them in a very appealing way.
Blue Moon (Old Pueblo)
@Jesus An IBM food truck, powered by artificial intelligence, has been serving up computer-generated recipes for the last five years. It has been all over the US. It will put together disparate ingredients that human chefs would be hesitant to combine. Often, the flavor combinations do not work out, but some have been extremely successful. MIT has also successfully experimented with computer-generated recipes based simply on photos of food.
Stephen Q (New York City)
What if one created an algorithm that could figure out how to wage and win global nuclear war?
Karen Green (Los Angeles)
Pretty sure it has already been done.
KWC (San Francisco)
@Karen Green Define "winning"
Craig Willison (Washington D.C.)
@Stephen Q See the movie "WarGames." https://www.youtube.com/watch?v=s93KC4AGKnY Jennifer: "What's it doing?" David: "It's learning." "A strange game. The only winning move is not to play. How about a nice game of chess?" - WOPR
Seth
You missed an opportunity here. "When AlphaZero has evolved into a more general problem-solving algorithm," call it AlephZero, not AlphaInfinity!
Jack (Los Angeles)
"A romantic attacking style." Felt the hairs on my neck stand up as I read this
G. (Europe)
If Stockfish is the vanquished beast and AlphaZero the matador, then the former was playing draughts and the latter chess. That's the only way this metaphor could work.
F R (Brooklyn)
I’m sure every half-smart algorithm could do a better job at diagnosing an illness than 95% of the ‘specialist’ doctors out there charging an arm and a leg.
Umberto (Westchester)
Maybe all those Star Trek episodes, in which the crew encountered societies controlled by an Oz-like computer, weren't so dumb, after all.
niucame (san diego)
The rise of the machines.
Not so bad hombre (Vancouver BC)
I am now wondering: is it going to be R2-D2 coming to help me, or a Terminator cyborg coming to snuff out my life?
Phil (NJ)
Thoughts, to me, are either random, with or without stimuli, or deliberate and targeted. Memory is definitely involved. It may bring insights - explanations to puzzles that bring those aha moments. Intelligence, or better intelligence, is often associated with being able to juxtapose seemingly random thoughts or memories to solve or understand issues or problems. In my mind, true intelligence, or intelligence similar to human intelligence, even if artificial, has to undergo the same process of imparting the same skills that we provide growing children, along with some basic sense of 'survival': a sense of winning or losing, ethics, morality. Because those are parameters we apply in our thought process. It begs the question: are empathy and emotions part of this intelligence? I doubt industrial AI is anything about this. It is more, if not only, about winning at commerce. We seem to want AI for directed thoughts sans emotions or exhaustion, able to apply thought while churning through terabytes of mind-numbing data! While the author waxes on about how Alpha0 crushed its opponents, was that the 'feeling' of Alpha0? Can it think of losing? I doubt our programmers even want to consider those parameters. One could argue we should not let AI decide, but should it win at any cost? This is giving those who own these machines the upper hand at everything. We have wealth already sucked up; next our jobs, our very purpose? Do we really want superior beings to further advantage the 1%?
jason
This article is soft-pedaling a key point. This approach to ‘machine learning’ is training the system to be excellent at pattern recognition — already it makes Big Brother-style surveillance a routine possibility. Creative thinking, I would argue, is far more than pattern recognition alone. What works for chess or Go might work a bit in war, but won’t for finding creative insights or solutions, the actions we could call ‘human learning’. For example, the new procedures and their procedure-handlers have no idea why a particular recipe works when it does, as the ‘answer’ is in the form of ‘all the inputs, tuned by these weighting coefficients, equals this choice of output’. While sometimes useful, this type of knowledge remains incredibly shallow.
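The "inputs, tuned by weighting coefficients, equals this choice of output" description above is literally what a single unit of a neural network computes. A toy Python sketch (the weights, inputs, and bias below are invented numbers, purely for illustration):

```python
import math

def neuron(inputs, weights, bias):
    """One unit of a neural network: a weighted sum squashed through a sigmoid."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # an output between 0 and 1

# The unit's "knowledge" lives entirely in numbers like these; nothing in
# them articulates *why* this output is the right choice.
out = neuron([0.5, -1.2, 3.0], weights=[0.8, 0.1, -0.4], bias=0.2)
```

A trained net stacks millions of such units, which is why what it learns reads back as a table of weights rather than a communicable principle.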
Mannyar (Miami)
@jason. You have missed the key point of the article. AlphaZero beat the "machine" precisely because it avoids mechanistic pattern recognition, and instead "intuits" and learns in the same complex way that human learning occurs. Stockfish is the brute you describe, but AlphaZero crushed it precisely because it ignores pattern recognition and instead adopts deeply creative learning. In the same way that the human brain is incapable of describing its intuitive choices, AlphaZero cannot "YET" describe its deep thinking to its human handlers. That day will soon come, though, making most human tasks and jobs, even those thought to be currently complex, obsolete. Read the academic paper that underlies this article; you will see that AlphaZero "thinks" in deeply complex and even disturbing ways which humans simply do not understand. The days of mechanized, pattern-recognition brutes are rapidly vanishing...
Rich (St. Louis)
@Mannyar I disagree. Jason understands the point quite well.
bloggersvilleusa (earth)
"What is frustrating about machine learning, however, is that the algorithms can’t articulate what they’re thinking. We don’t know why they work, so we don’t know if they can be trusted." Just what we need - not. A computer that can do everything that the computer in "Wargames" could do - except make the necessary value judgement that saves the human race. And let's not forget how Krypton met its end. In the Silver Age comic books, the ruling Science Council entrusted all of their decisions to a computer, which disagreed with Jor-El's assessment of planetary doom. Wait until Skynet.
V P (New England)
It is hubris on the part of the writer to assume that humanity will be able to produce an intellect beyond our own comprehension. Chess is a game invented by humans to specifically prey on the limitations of human thinking. Genetics, physics, science in general is not the same thing. AI is and always will be limited by the humanity of its creators.
BRUCE (PALO ALTO)
It will be a breakthrough when AI is applied to the "dismal science": economics. Maybe it will expose the litany of moral certitudes that make discussions of economic theories more ideological (dare I say, theological) than scientific.
Andy (Paris)
@BRUCE That is not to be attributed to economics itself but to (many of) its practitioners' tendency to dishonestly "assume" contrary arguments out of existence by advancing said moral certitudes as economic "theory" rather than the dogma it really is. I think Krugman did a whole piece on that very subject today. www.nytimes.com/2018/12/27/opinion/republican-economists-bad-faith.html
Little Albert (Canada)
A couple of comments - first, the feature pictures are Go games, not chess games, and the startling result in 2017 was the AlphaZero machine functioning as a Go-playing machine, not a chess-playing machine. Yes, I know, you had to contend with the problem that many people don't know about the game of Go. Anyway, I think the computer is completely transparent in the way that it "thinks" about the game of Go - it is completely evident in the sequence of moves. These are not executed behind some AI curtain. Now with regard to the fear factor associated with AI - the computer plays the game of Go or chess better and somewhat differently than a human, but the computer is still playing chess or Go. The real issue arises when the computer decides to play a different game than was assigned to it by the human. For example, let us say the human assigned the computer the task of producing enough food for everybody - and the computer figured that it might solve the problem in a different way, by getting rid of humanity. When computers start to change the game, that will be the real game changer.
David Feldman (Lee, NH)
Concerning the Four Color Theorem, Appel and Haken indeed possessed heuristics for explaining why they expected a proof having the general shape that they designed. Of course, clean proof that such a proof must exist would constitute a proof-in-itself, but even though no one has found such a shortcut, those heuristics still explain the "why," still explain the near inevitability of the result.
Tarek Elnaccash (Wappingers Falls, NY)
The idea here assumes that worthwhile human endeavors can be reduced to “win” and “lose.” Without such easily classifiable outcomes, you can’t train AI to succeed. AI won’t assist a couples therapist in helping her patients solve their problems. Is a win staying married, getting divorced, spending time apart, or seeing other people? Will AI ever write excellent literature? I predict no. But even curing all diseases is not going to happen. Even if we only focus on genetic disorders, an AI that could enumerate all human genetic variants cannot enumerate all the environments those genes can be expressed in. So if genotype x environment interactions matter for human health (they do), AI won’t cure everything. So while I can imagine DeepThought replacing mathematicians, I don’t see it replacing scientists, at least the non-reductionists. The promise of AI solving everything really only applies to what could be resolved by brute (computational) strength already. AI will not replace thinking.
A Brown (Detroit)
Writing great literature and fixing troubled marriages will probably not be high on the list for the type of AlphaInfinity intellect described in this piece. That is what makes the hairs on our neck stand up -- not that AI will replace us humans, but that our human problems and concerns won't be worth troubling about as they go to work creating a new world.
Krishna Myneni (Huntsville, AL)
It's a bit premature to write off human insight as machine learning algorithms, such as those employed by AlphaZero, demonstrate new capabilities. Such machines cannot articulate their insights in the form of simple principles. At best they can only spit out a table of numbers (weights) describing the neural network. It may be up to humans to transform the weights back into simply communicable principles, if it is possible. 1 for humans: we can articulate what we know and why we know it. Some day machines will likely be able to do the same. Until then, these "machine learning" algorithms don't seem much more impressive than brute force computation -- they are trading computational speed for algorithmic complexity, without understanding what is being provided by the complexity.
sissifus (Australia)
AlphaZero wins by finding better ways to play, under the given rules. In real life, your opponent wins by changing the rules and not telling you. One day, Alpha n+1 will change the rules and not tell us.
Greenpa (Minnesota)
"It played like no computer ever has, intuitively and beautifully, with a romantic, attacking style. It played gambits and took risks." The very top scientists - yes, Einstein, Planck, Darwin, Newton, etc. - have always told us that at the pinnacles of science, the process is indistinguishable from art. And they have always told us: no, they cannot explain it to you and me any better than that. The gap - between slogging reason and the Alvin Ailey leaps of insight and genius - is vast. Those of us who must slog are still struggling to comprehend Einstein, Planck, and Darwin, with miles to go. It's most likely the same will apply to great algorithms. It's a good thing we don't always have to understand why an answer is right - like gravity, we can test it. Yep; apple still falls.
Alex Wood (Brooklyn, NY)
The true insight into this program comes from the commentary written by Gary Kasparov, where he says, “Programs usually reflect priorities and prejudices of programmers, but because AlphaZero programs itself, I would say that its style reflects the truth.” I am unsure whether to be thrilled or terrified by this article. I suppose, like any powerful tool, the consequences of AI will be dependent on how we humans choose to use it.
Drewski (US)
@Alex Wood Or, perhaps, how it chooses to use us.
cfbell1 (california)
@Alex Wood Or how it chooses to use us.
Trebor (USA)
AI is only as benign or, conversely, threatening as those in power decide the ends to which it will be employed. It is human direction that underlies any threat in AI. Unfortunately it is a very powerful tool put into the hands of gibbering primates who are unable to control them (our) selves in very large groups. It is inevitable it will be put to malevolent use at the same time it is put to benign use. The possibilities for disruption of "networks" of all kinds, social and electronic and environmental are enormous. That weighs on my mind.
Giulio Pecora (Rome, Italy)
Reading the comments to this brilliant article by professor Strogatz is almost as satisfying as reading the article itself. That means that high quality journalism produces high quality comments. Incidentally, this is a simple but powerful demonstration that good journalism promotes a social dialogue that induces cultural advancement. Some call this advancement "progress". Thanks for an enriching reading experience.
Peter I Berman (Norwalk, CT)
Ultimately, humans are “machines” who have the ability to create vastly more complex and capable machines. Which raises the question of what to do with humans once we are well supplied with machines far superior to us. Do we become extinct? Or merely superfluous? Will we ultimately find out that God and the angels are mere “machines”? Hmmm.
Free Thinker 62 (Upper Midwest)
A shade pessimistic, but not in disagreement with the paranoid and dystopian among us; Hawking would be proud. Humans will have to rise to the occasion. Perhaps we will arm ourselves with AI specifically designed to break down seemingly irreducible problems to the primitive level of our top scientists. Unless one wishes to argue the future discoveries implied in this article are truly irreducible, like a prime number. So, let's see what the first major discoveries by AI are, and then observe if there is a trend toward irreducibility. Meanwhile, some of us, like those in Washington, could be given interpretations by the AI, like bedtime stories, to explain why greenhouse gases are bad for earthly life. Probably not as mysterious and oracle-ready as a gigantic prime, but effective enough to induce human action. And ultimately we will evolve to meet the challenge, because there's no other option.
Hank (Parker)
Did the machine open the box of a chess set, read the rules, and move its own pieces? If not, then I cry foul. Also, what if there were a fire, or concern of loss of income (or tech support or whatever a machine might consider a loss of income); would the machine play the same? Teach the machine to attempt a day in the life of a chess champion and you have a discovery. The chess master may need to find a way to get energy and eliminate waste; turn off the A/C on the machine for a few hours.
Gregory Scott (LaLa Land)
AI is not a threat to humanity. Such a facile supposition betrays a smallness of thought and creativity, the kind more indicative of a reactive human mind than an AI that can sift trillions of data points and billions of possibilities in a quest for optimal outcomes. Maybe an AI “run amok” would shut down the oil pumps, disable most private vehicles and factories, redistribute the cash, stymie the moves of the powerful few, and reconfigure the playing field not to make it ‘level’ but rather to make it ‘sustainable’ for the benefit of all life, real or otherwise. This paranoid notion that AI is a threat to humanity is, ironically, a fiction born of the self-limiting fears and flaws of human thinking rather than any actual or potential reality outside ourselves.
Richard Schumacher (The Benighted States of America)
Human management of Earth has been a very mixed bag. It will be a relief to pass the torch to our machine intelligence heirs, no matter what They decide to do with it.
GA (Europe)
I was disappointed when I realized that chess is largely a game of memory. You study all those previous games and then you have to remember the moves and counter-moves for every situation. And you need to study a lot to make sure you remember as many as possible. At the end of the day, there is a finite, though astronomically large, number of possible moves in any chess game. If AI is only a very, very good memory, then yes, we can't beat a computer at chess. But if the game permits or requires an ever-changing set of unknown rules and parameters, then AI would need to be reprogrammed again and again. That sort of thing evolution does.
JS (London)
You cannot have "realised" that chess is largely memory, given this is essentially not the case. In matches like the recent World Championship, at long time controls, memory for forced sequences is at a premium, and the players spent more time remembering their homework analysis than they did calculating. But if there is a solution for chess, it is far too complex (and lengthy) to be memorised, so the character of the match demonstrated their ability to trap themselves in small closed subsets of chess. On the other hand, AlphaZero suggests new heuristics for chess, and more insight into the old ones, which human players have to rely on in shorter time frames.
GA (Europe)
@JS Apart from your first line, you seem to agree. :) But I mean it's largely memory considering that you have most probably lost against someone who has studied the different openings. Of course, after the memorable part is over, the player needs to analyse smaller subsets of the game. But definitely, I agree that "the" solution would be far too complex and lengthy to be memorized. But when it comes to Alphazero having played with itself millions of times, I suppose starting from "stupid" openings and moves to "smarter" ones, how can we differentiate that it actually demonstrates intelligence and not just a quick access memory of all the possible good and bad moves (and the chances of equally good or bad replies from the opponents) that a human would never be able to remember or analyse in the game's time (or even in general)? Of course, getting this experience by itself is still a breakthrough, but it still relies on a game with a very limited number of rules, rather than the chaotic conditions of a real system, where the rules change and the outcomes are not win/lose/draw. How would any Alphazero cope there...? Just wondering...
lm (boston)
If only all the world’s problems could be as easily classified in terms of, say, beating the opponent. As great an achievement in AI as this is, a great number of human problems are dilemmas we grapple with, with no elegant solutions, because, unlike a chess pawn, we can’t as easily sacrifice a real human being’s life when debating whom to treat in an overworked ER, which military battles to fight, if any, whether the self-driving car might have to hit a jaywalker to save its own passengers...
Arthur (NYC)
The writer seems to suggest that insight is a result of thinking. This is not at all clear. Thinking evolves. By definition, insight is a juxtaposition. Thinking, as a process is the response to external or internal stimuli (almost always) with simultaneous manipulation of memory. It is a mechanical process that can be standardized in an algorithm. Properly designed non-human constructs will be able to do this much more efficiently and productively. I have no idea what insight actually is. The various definitions though, say nothing of thinking.
St. Thomas (NY)
My crystal ball cracked a long time ago, just as Steven Strogatz was finishing his Ph.D., and I was finishing mine. I like your work, Steven, but I saw what happened the first time AI (general AI) was hyped into oblivion for 20 years, and I have my doubts about your rose-colored optics for this work, although I like the story. First off, self-play or adaptive reinforcement learning works particularly well on games, which are constrained or closed universes of knowledge, but not so much in other areas. This will have consequences in the short term on labor and liberties. One thing is that labor, and as a consequence capitalism as we know it, will need to be redefined. Will it be a peaceful transition? Probably not. Work gives us dignity, a quality that machines will not understand nor are likely to possess. Social rest depends on this - of course we can mitigate this with an AI surveillance state like China's. There are too many people who are looking to make a buck as quickly as possible, or equivalently (mea culpa) who are interested in the science and engineering aspects without knowing or caring about the collateral damage we are now all facing from this technology - Facebook, Google, Alibaba-SenseTime. I hope we have the same courage and sentinels to guard against the malevolent side of AI's development, but I don't know. Seems like many companies are beginning to sell the sizzle.
JTS (Sacramento)
@St. Thomas Obviously, you know. But with due respect, I wonder why you don't care.
Rugeirn Dreienborough (Lost Springs, WY)
Human capacity for understanding evolves. There was a time when only Einstein understood relativity. After a while, most physicists understood it. After a while longer, even people in undergraduate physics courses were learning it, including one guy, me, in about 1975, who wasn’t even a physics major. Discovery isn’t the same as understanding. It may well be that machine learning comes to be a key tool for discovery, but what a computer can discover, a human can understand.
Michael Tyndall (SF)
As AI systems are allowed increasing autonomy, it’s important we endow them with the proper values. There’s no guarantee they’ll always act in our interest unless we set proper guardrails or somehow force them to recognize us as the supreme beings on the planet. Of course they’ll have to ignore our manifest deficiencies, if not our overwhelming threat to the existence of a habitable world. We also have to provide the proper motivations for their actions. This also implies a value system. But values are inherently arbitrary at some level, and programmers aren’t always steeped in philosophy or ethics. This all becomes particularly difficult and problematic as independent AI systems become prevalent. And particularly if they are able to evolve new capabilities or reproduce on their own - no doubt they’ll eventually do it better. Like all technologies, there’s the potential for a dark side or just unintended consequences. But AI has every bit as much capability of eventually destroying humankind as nuclear weapons have. Even properly leashed, AI will almost certainly remake the world in the coming decades. And by displacement, AI will perhaps ultimately undo what it means to be a human living a useful and meaningful life.
Julie (Boulder)
The truthful things I know are subjective thoughts: I love my children, that sort of thing. When I read the "armageddon" comments, I thought, when/how will a computer know "it loves its children," or anything like this? No matter "where" one thinks - beginning/end, small/big - the moments we occupy, the now, with our past recollections and future hopes, are not abstractions. Climate change matters to those it affects. Step away from this conscious centeredness and it's rational speculation: what was Herodotus' point of view, will computers kill us. The "armageddon" people think a deconstructed Universe leads to ultimate "meaning". (An Asimov / Hari Seldon type problem.) Not true. Meaning exists in "hope, faith, love, and charity". When a computer knows this - that'll be interesting. I try to understand Godel's "incompleteness theorem" but don't. Intuitively I think it's true and that it applies to computers. Humans, for the most part, live intuitively, which is why language, etc., works for us. What have I communicated to you, the reader of this "comment"?
John Andrews, M.D. (India)
Here is what everyone seems to be missing: Any external phenomenon is bound to be more easily analyzed by the smartest machine available. That could be your mind, as in the past, or now, an even smarter machine. But your internal reality – where life happens, where you happen – is only accessible by you. So you all relax. Let the best machines manage the mundane externalities, while you take care of you: a space no machine will ever be able to replicate. The basis of this discussion is that the materialists believe that consciousness is material and can be reduced to 1’s and 0’s. The mystics, from their own experience, explain that consciousness is not material. If the materialists’ belief is right, you are already a robot. If the mystics are right, then consciousness is beyond any machine. How to decide? Discover for yourself: who is aware that you are reading this comment?
Michael Tyndall (SF)
@John Andrews, M.D. 'But your internal reality – where life happens, where you happen – is only accessible by you. So you all relax. Let the best machines manage the mundane externalities, while you take care of you: a space no machine will ever be able to replicate.' I think this is true up to a point. Everyone has an internal mental life. But the outside world intrudes in countless ways that demand attention and response. AI may intensify many of those ways. Your life is the sum of your internal life and the life you live in response to the world. Both matter a great deal and need to be balanced. --- 'The basis of this discussion is that the materialists believe that consciousness is material and can be reduced to 1’s and 0’s. The mystics, from their own experience, explain that consciousness is not material.' I doubt materialists know that consciousness can be reduced to 1's and 0's. Some may think so, or that it can be approximated as such. But right now it seems the simulation of neural networks will ultimately be more successful than pure algorithmic programming. Our brains are extremely complex but purely physical structures capable of understanding relativity and quantum mechanics while consuming 20 watts of power. They'll eventually be replicated inorganically, even if it takes a lot longer to reach the same level of efficiency.
St. Thomas (NY)
@Michael Tyndall This is untrue: "But right now it seems the simulation of neural networks will ultimately be more successful than pure algorithmic programming." NNs are constrained by various factors, and not all problems are suitable for an NN. An NN is great when there are symmetries or transformations that can be useful in its training. They are very good when you have a large data set. In defense of "algorithmic" programming: who would want to replace an algo that runs with 99.99% accuracy, like switching systems or a heart-ailment detection algo, with an NN?
Michael Tyndall (SF)
@St. Thomas Thanks for your response. I have an interest but don't claim to be expert in either algorithms or neural networks. I agree algorithmic programs have great utility and can easily surpass human performance where calculations and algorithms are the quickest and most efficient means to an outcome or where precision is paramount. But 3-4 year old humans could identify animals in pictures more reliably than most programs until the arrival of NN. Neural nets have now easily surpassed traditional AI in pattern recognition despite decades of research and DARPA funding. It's not at all clear that algorithmic processes can duplicate human rough and ready approximations and low power consumption. Marvin Minsky and Seymour Papert, two very smart men, set back the field of AI by decades when they helped kill DARPA funding for NN's in the 70's. But now deep learning with proprietary neural networks as exemplified by AlphaZero seem to be ascendant. Just sayin.
Roger (Milwaukee)
I'm wondering how long before this is unleashed on the stock market. Already 85% of trading is algorithmic trades, similar to pre-AlphaZero chess programs. Whoever comes "to market" first with a superior AI stands to make billions.
St. Thomas (NY)
@Roger I doubt this very much. The prediction of asset pricing is a different class of problem. It is a high-order nonlinear system with lots of input that looks like noise, but isn't, and then is. I spent a long time on this problem. I was grateful for the lower-hanging fruit. :)
George Moody (Newton, MA)
@Roger: Why do you assume that hasn't happened already?
Stephen Malinowski (Northern California)
Artificial Intelligence does not evolve in a vacuum; it evolves in an environment in which one of the constraints is (in one way or another) "make humans happy." When AI gets smarter than us, part of "make us happy" will be "explain yourself better" and that's what it will be working on. AI will spend its energies trying to figure out what makes human life meaningful, and doing its utmost to promote that.
Chin Wu (Lamberville, NJ)
"Truth," whether in chess or go, or in mathematics, is an elusive idea. Godel argued that simple axiomatic mathematics itself is either inconsistent or incomplete. A quantum theorist will argue that the simple Truth about the universe is that "objective physical reality" does not exist. Every phenomenon or particle must come with an uncertainty larger than Planck's constant. Worse yet, they are unavoidably influenced by the experiment. I have no trouble believing AlphaInfinity will soon be the smartest and fastest mind in the world. But discovering the Truth about anything important is beyond its capability!
Mike L (Boston)
"What is frustrating about machine learning, however, is that the algorithms can’t articulate what they’re thinking. We don’t know why they work, so we don’t know if they can be trusted"? ... The days of AI as a "black box" may be drawing to a close, and the era of "explainable" AI beginning: https://www.nature.com/articles/s41551-018-0324-9
Steve Kennedy (Deer Park, Texas)
" ... that the algorithms can’t articulate what they’re thinking." Reminds me of a story about a chess Grandmaster watching an intermediate player make a move. "Not a good move," said the GM. "Why not?" asked the player. "It's not the sort of move one makes in this situation." Also, back when Bobby Fischer was a fugitive, a GM arrived at his club and told a fellow GM that he had played an anonymous player online the night before. "I think it was God," he said. His friend asked, "God? Really? You think God plays chess online?" "Yes, He was really good." The friend said, "Maybe it was Fischer; he's been known to play anonymously online." The first GM replied, "No, He wasn't that good." So now we have to update this story ...
Gary L. Passon (Hawaii )
Go back and watch “Forbin Project,” released in 1970 I think. Enjoy after reading this article.
Dave Thompson (Toronto)
@Gary L. Passon Thanks for the recommendation. I just looked it up. For others who may search, the full name of the movie is "Colossus: The Forbin Project".
Brad (San Diego County, California)
One of my fears is that these neural networks will be turned loose on the equity, bond and commodity markets. Or maybe they have been in the past week and we do not know it. Yet.
Matt (Boston)
So what would happen if one Alpha Zero machine played another? Would they arrive at the same intuited principles of chess and stalemate for eternity?
George Moody (Newton, MA)
@Matt: First, I think you mean "draw," not "stalemate." Look it up. Second, neural networks (the basis of AI) are deliberately designed to be irreproducible by seeding themselves from a pseudo-random number generator (in most cases) based on a clock, so it is exceedingly unlikely that two instances of the same AI would produce identical results. Third, the games Alpha Zero plays (chess, shogi, and go) are asymmetric in that one side has the first move, which may or may not be advantageous. In either case, one might expect an AI to exploit an advantage if it is able to perceive one. For these reasons (and others I may have omitted), I would not expect an unbroken string of draws in the case you suggest.
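The second point above - that two instances of the same network seed their initial weights from a pseudo-random number generator and so diverge - can be sketched in a few lines of Python. This is an illustrative toy, not AlphaZero's actual initialization; the function name and sizes are invented for the example:

```python
import random

def init_weights(seed, n=8):
    # Seed the pseudo-random number generator; a different seed yields
    # a different set of starting weights, so two training runs of the
    # "same" network architecture begin from different points.
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

print(init_weights(1) == init_weights(1))  # True: same seed reproduces the net
print(init_weights(1) == init_weights(2))  # False: different seeds diverge
```

Seeding from a clock, as the comment describes, amounts to picking a different `seed` on every run, which is why two instances are exceedingly unlikely to be identical.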
Matt (Boston)
@George Moody Great, but my question still stands— what WOULD happen? Or can we not know until we try it?
Robert Stadler (Redmond, WA)
The key difference between this generation of game-playing AIs and the previous one is that the older computers use a rule-based analytic approach, while the new ones use intuition. The rule-based approach is inherently limited, as it requires human experts to specify myriad parameters (e.g. about the relative values of a knight and a bishop). It uses these parameters to analyze every possible position for several moves into the future (continuing deeper for more interesting lines). Increasing the depth that the computer can look requires more computing power, at an exponential rate (~40x per move). The machine-learning approach instead looks at huge numbers of games (often by playing against itself) to try to find patterns. We are still figuring out what neural architectures are better at learning efficiently and effectively; there is plenty of room for improvement without increasing computing power. These computers can't explain their moves because they are entirely intuitive - there is no chain of reasoning. Human champions of these games use a combination of intuition and logical reasoning. I don't believe there are yet any automated systems which combine these, since the field is too new, and since there are still too many easily available gains by improving the intuitive approach. The 1950s movies showing computers confounded by logical traps had it wrong - the future AI will be a genius that does the right thing but can't explain why.
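The exponential blow-up the comment describes is easy to make concrete. This toy calculation assumes the branching factor of roughly 40 legal moves per position cited above:

```python
def positions_to_search(depth, branching=40):
    # A brute-force look-ahead must consider about branching**depth
    # positions: every extra half-move multiplies the work by ~40.
    return branching ** depth

for depth in (2, 4, 6):
    print(depth, positions_to_search(depth))
# depth 2 -> 1,600; depth 4 -> 2,560,000; depth 6 -> 4,096,000,000
```

Real engines prune heavily, so these are upper bounds, but the growth rate explains why each additional ply of search demands so much more computing power.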
Alex Benson (Seattle, WA)
Humans also have a singular ability to lay claim to our progeny in intellectual terms. It may be in our best interests to discuss sooner rather than later what fundamental rights a synthetic intelligence is due and what the word “human” means within that context and the laws surrounding it. Welcome to the edge of the singularity, read Accelerando by Stross if you have the time.
Tara (Japan)
I happen to be currently watching The Sarah Connor Chronicles (a Terminator TV show), and this article feels eerily familiar. I'm just saying.... And on another note -- it was David Darling, author of 'Life Everywhere,' who suggested that if we ever meet extraterrestrial life, it will probably be artificial -- because AI evolves astronomically faster than biological life. We really ought to question our ability to control something so much smarter than us.
S Dooner (CA)
Didn’t Ted Kaczynski in his manifesto warn us about putting too much faith in computational models and algorithms?
mattiaw (Floral Park)
@S Dooner Alpha Zero seems to be creating its own computational models and algorithms.
S Dooner (CA)
The bigger question in my mind is not who or what is creating the models and algorithms but the fact that society is blindly implementing strategies without a consensus on the objectives, and a clear grasp of alternate strategies and the range of potential consequences.
Kathleen (Olney)
"Suppose that deeper patterns exist to be discovered — in the ways genes are regulated or cancer progresses; in the orchestration of the immune system; in the dance of subatomic particles. And suppose that these patterns can be predicted, but only by an intelligence far superior to ours. If AlphaInfinity could identify and understand them, it would seem to us like an oracle." Futurists don't sell books by being careful and measured in their predictions and the breathless style of this essay is likely to confuse readers. The era of image recognition and disease diagnosis through machine learning is already decades old. To imply that machines will find cures for cancer with similar success in the near future overlooks the fact that they need well-classified and large data sets to work with. In contrast to the information about the color of pixels on an image which can be easily organized into something a machine can read, emerging data about cancer biology is very messy, laborious to obtain, and actually rather sparse.
Sutter (Sacramento)
It seems that AlphaZero could communicate to another AlphaZero why the answer is correct. The job of the second would be to describe to humans the solution and possibly write a proof.
MC (NJ)
So what happens when AlphaInfinity has the insight that humans are not necessary?
Matt (Boston)
@MC They get a job in private equity like everyone else who arrives at that conclusion now.
MC (NJ)
@Matt Great answer! I think that’s exactly right.
Cyberbob (Twin Cities)
AI's final answer to the ultimate question: "Is there a God?" AI: "Now there is."
Rich (St. Louis)
Intelligence is the ability to solve a problem. At least that's the simplest and least objectionable definition. The difficulty with this understanding is that we don't always know when a problem has been solved. Chess is an example of where we have the rules and know when the problem is solved. This is Weak AI. What we call Strong AI is normally reserved for problems for which we don't have a clear idea of what it means to solve them. These generally involve judgment. Take the problem of meting out justice. We might devise a computer that replaces judges, let's say, and has a formula for evidence, and punishment, etc. It would be terribly effective and efficient. But we may not like the outcome. It would be easy and effective and efficient to cut off the hand of everyone who steals, with an equally simple formula to decide when someone has stolen. But we may not like the solution to the problem of stealing. We may not want this solution. The issue, in other words, is not that intelligence helps us solve problems (like playing chess), and we can program computers to solve problems and display intelligence by not only calculating and number crunching but using more creative means (like the kind alluded to in the article). The problem is that solutions are bound up with what we want. And no computer can tell us what we want.
S Dooner (CA)
@Rich. Good point but when it comes to societal objectives there is no definitive “what we want.” It’s always been about what the powerful want and they don’t all agree hence our history of strife and inequality. Perhaps if advanced AI systems supplant humans in setting the objectives and deciding the strategies then there will be more enlightenment and less dissension but I’m not betting on it.
Rich (St. Louis)
@S Dooner I agree completely
Erich Richter (San Francisco CA)
If you really want to know what kind of future Ai will bring you need only to look at who is building it all and why. Speculation about our future with it has to acknowledge the massive temptation to use it to gain advantage over other people and peoples. The idea of it being primarily used as a tool to further human advancement is already contaminated, not by any proclivities of Ai but by the basics of human nature. Unless Ai can help us overcome those impulses, the good it could do will be overshadowed by the incredible power it tempts us with. The problem of Ai isn't a technical one.
Chuck Burton (Steilacoom, WA)
Chess is a brute game of intellectual force, perfect fodder for artificial intelligence. Personally I never cared for it and find it boring and static, as is sudoku. Bridge, the queen of games, is something else. Mathematical skill, deep analysis and linguistic abilities are deeply rewarded in the top players. But after more than thirty years of concentrated effort, no computer has succeeded at playing at even close to a top expert level, unlike chess, where the unrivaled best are machines. Why? While analytical skills are clearly very important in bridge, the most successful are also gifted in psychology, intuition, imagination and, most importantly, communication skills between partners. And at least so far, you cannot teach those to an artificial intelligence.
will duff (Tijeras, NM)
@Chuck Burton The operative words there are "so far." I don't know whether I'm being forward thinking or fatalistic, but it seems inevitable. The chess and go super minds are savants. Perhaps a way toward Artificial General Intelligence is an ever larger assemblage of artificial savants, able to do anything a human mind can do, only way better, faster and stronger.
Chuck Burton (Steilacoom, WA)
@will duff Could be Will, but if it ever happens it would only serve to further blur the thin line between man and machine.
MaltaMango (Silver Spring MD)
Most of what gets called "artificial intelligence" these days is really just pattern matching done at high speed. The fact that (to pick one example) the so-called AI systems deployed in driverless cars to recognize traffic signs can be easily sabotaged by small bits of duct tape on the signs that would not confuse a human driver demonstrates that these systems are not doing what a human brain does. Today's AI is, at best, simulated intelligence -- it may superficially look like the real thing, but actually isn't. That being said, the ability to quickly pattern-match and to be informed by past examples is something that humans might benefit from. I think the article errs in presuming that future computers will forever be separate from human minds, hidden in secret rooms like Isaac Asimov's "Brain" or Douglas Adams's "Deep Thought". It will take only a generation for humans to become comfortable with the idea of an implant that links their human minds directly to the internet. Having direct mental access to AlphaZero's capabilities might be good, bad, or just plain ugly, but it's probably going to be seen by many of you reading this article.
Michael Neal (Richmond, Virginia)
Could AlphaZero have created this exquisite essay?
will duff (Tijeras, NM)
"Superintelligence" is described as "capable of solving any real problem - or define it as not real." Both are capacities we un-super types don't have. Superintelligence is as inevitable as Climate change. We smart primates will have roughly the same relationship to SI as baboons and bonobos have to us. "We would sit at its feet and listen intently. We would not understand why the oracle was always right..." Or, in our lack of understanding, we will start believing the oracle on "faith." Gotta happen.
C. Williams (Sebastopol CA)
Chess is one thing - how would AlphaZero do at poker ?
xdrta (alameda, CA)
@C. Williams The computer program "DeepStack" has already beaten top poker professionals at heads-up, no-limit hold'em, and there's no doubt it will conquer other variants as well.
Jay Orchard (Miami Beach)
Claiming that humans will become mere spectators while intelligent machines engage in scientific insight is like claiming after gas powered automobiles were invented that humans would no longer engage in track meets but instead would be mere spectators while cars race each other.
Scott D (Toronto)
@Jay Orchard Actually its not like that at all. Thats a really bad example.
Jay Orchard (Miami Beach)
@Scott D You're right Scott D. There is a big difference. While no human being will ever outrace a car, humans will continue to come up with important scientific insights that are not produced by intelligent machines.
Mot Juste (Miami, FL)
@Jay Orchard. Actually the better analogy is that just as cars stopped most of us from choosing walking or horse riding to go from Atlanta to Miami, highly intelligent machines will dissuade those of us with access to them from exhausting our own brains to intuit something; we will simply get the knowledge much quicker from the machine. Just as it cannot be denied that jets transformed travel in less than 100 years, it will not be denied at some point that AI has completely transformed the way humans learn the answers to their many questions. There was nothing to fear from jets other than the occasional fatal accident. I’m not so sure AI will be so benign, once it is developing insights on its own.
andy b (hudson, fl.)
These AI contraptions are going to breed themselves into something either incredibly beautiful or incredibly horrific. We, the human race, will determine which direction they take. All the more reason to temper our scientific accomplishments with the arts. Empathy and understanding are what distinguish us from the machines. Art, philosophy, literature can certainly assist in ensuring we are not reduced to bits.
Kevin (San Diego)
Futurists call it the “singularity” because the effect of fully realized AI can’t be predicted. Like global warming, however, it seems to be coming sooner rather than later.
Ed Hubbard (Florida)
@Kevin Global warming is here....
R Nathan (NY)
@andy b It is very tough for our ego to reconcile with the possibility that we can be reduced to bits and bytes. In that instance, all the past activities of humanity you mention will be reduced to nothing - activities that kept us busy over several millennia, reaching an eventual conclusion. How ironic will that be?
Blue Moon (Old Pueblo)
This is probably the most frightening article I've ever read. I had an immediate visceral reaction to it. It's not the machine that worries me, per se. Does it spell any serious trouble for us in the immediate future? I doubt it. I'm much more worried about the people who created it. How much longer do we have, as humans? Can we stop this process of destroying ourselves? I doubt that, too. You see, these machines will always be touted as our little helpers for medical science (and other scientific problems). They are here to keep us healthy and prolong our lives, to make them easier and better. The military will never let them go, so we will never be rid of them. And it's just a matter of time before they see us as a threat. A useless threat, to be eliminated. But again, it's not the machines I worry about right now. It's our curiosity; we will never stop with them. Maybe it's in our nature to destroy ourselves, inevitably? I suppose the only bright spot on the horizon is that our utter lack of capacity in rationally coping with climate change and overpopulation will do us in first, perhaps more humanely.
Dj (Not USA)
@Blue Moon, you may be interested to have a read of the recent (and highly readable) book, Homo Deus, by Yuval Harari. It projects many different impacts of exactly the things you’re wondering about, and very plausibly. The gist is, roughly, that the power and wealth concentrated in the hands of the few who will control these advances, will use them to upgrade themselves - literally, their bodies and brains - into essentially a new species, upgraded from Homo Sapiens (‘wise’) into Homo Deus (‘god’). Really interesting reading.
Craig (RI)
@Blue Moon A lot of these algos fall within the realm of Weak AI, meaning that they're created to do one particular job really well. I would say right now we still have a long way to go before we get anywhere close to an AI that is sentient.
Blue Moon (Old Pueblo)
@Craig, @Dj "... we still have a long way to go before we get anywhere close to an AI that is sentient." Agreed. In the near-term, AI will be used by wealthy humans as a tool to exploit and subjugate other humans. But in the long-term (1000 years or so)? Our brains are simply organs in our skulls governed by basic biochemistry. Given sufficient time and (ironically) computing power, they should be solvable. There is no reason to think our carbon-based wetware is fundamentally better than silicon-based hardware. In fact, if I could upload my consciousness into a physically and computationally stronger framework, I would. Wouldn't you? And just because we don't understand how our brains work doesn't mean we can't program observed human behavior into a machine (e.g., a drug can help you even if you don't know how it works). Will humans forever, and somehow magically, be superior to machines, perhaps because we have emotions? Emotions are programmed into us based on environmental stimuli. This is true for many animals. We are far from unique. We can program emotions into machines. Darwinian evolution via natural selection does not produce the optimal product, just that best adapted to survive. We are the result of many, many evolutionary pathways and are far from optimized. There is no reason to think we cannot take an evolutionary shortcut by becoming hybridized with, and eventually fully becoming, machines. Will we beat Mother Nature? Probably not, but we will do it anyway.
dj sims (Indiana)
I was recently reading Andrew Sullivan's article about America's new religions, and the mention here of humans gathering at the feet of the new oracle makes me think about one way this could go. Sullivan argues that humans always need a sense of meaning to life. The risk of AI is that it will rob life of meaning. But maybe we will turn to AI, like we used to turn to our Gods, to give us meaning.
MC (NJ)
So Google will own Skynet, I mean, AlphaInfinity. What could possibly go wrong? If you aren't sufficiently terrified by that prospect, all signs point to China Inc. soon becoming the dominant power in machine learning/AI. So a Chinese Skynet, aka AlphaInfinity. Still able to sleep? Putin will have his version of Skynet aka AlphaInfinity. But don't worry, Trump is building a wall to protect us.
Green Tea (Out There)
So then we become the Eloi, only with machines managing our population instead of Morlocks?
talesofgenji (NY)
This article mixes two very different concepts, confusing readers. The A.I. of AlphaZero is based on a machine that is told the rules of chess (how a pawn moves, a bishop, a knight, a queen, a king) and then plays against itself, many millions of times. During this process it discovers strategies, like human players would, were their lives not too short to play that many millions of games. This is VERY different from retinal disease classification. The analysis of retinal disease is nothing but machine learning. The computer is shown a pattern, and then told by a human what it is. The computer is shown another pattern, and then told what it is. Eventually, it recognizes what it sees. It is not better than its teachers; fed nonsense, it will produce nonsense. To cite this rather trivial case of machine learning alongside AlphaZero's chess approach mixes apples and oranges, and thus confuses many readers.
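The two regimes this comment distinguishes can be caricatured in a few lines of Python (a toy sketch: the function names and the crude update rule are illustrative assumptions, not the code of any real system):

```python
# Supervised learning, retinal-scan style: the machine is told the
# answer for each pattern by a human teacher. Fed nonsense labels,
# it will faithfully reproduce nonsense.
def train_supervised(labeled_examples):
    """labeled_examples: list of (pattern, human_label) pairs."""
    memory = {}
    for pattern, label in labeled_examples:
        memory[pattern] = label              # exactly what the teacher said
    return lambda pattern: memory.get(pattern, "unknown")

# Self-play, AlphaZero style: no human labels. The machine generates
# its own games and scores them against the fixed rules of the game;
# its ceiling is not set by any teacher.
def train_self_play(play_one_game, episodes=1000):
    """play_one_game() -> (moves_used, outcome), outcome in [-1, 1]."""
    value = {}                               # estimated worth of each move
    for _ in range(episodes):
        moves, outcome = play_one_game()
        for move in moves:
            old = value.get(move, 0.0)
            value[move] = old + 0.1 * (outcome - old)  # nudge toward result
    return value
```

The supervised learner can never be better than its labels; the self-play learner's only teacher is the outcome of the game itself.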
Daniel Borneman (New York)
Wow. He said it right. Whether fortunate or decidedly unfortunate, pushing back against our inevitable obsolescence is useless and counterproductive.
Blackmamba (Il)
What is "intelligence"? What is "thinking"? What is "artificial"? What is a "computer"? What is a "machine"? What is "learning"?
Don Bronkema (DC)
A syntel can Report a low-battery: it can't want a Recharge. Therein lies the distinction twixt Man & Entity. Of course, man is a machine w/risible pretenses to individuation. Personhood is an illusion well-demo'd by Libet & Koch. Spengler counseled "amor fati"--we should embrace kismet: the end of the kosmos is ordained at its beginning. The ontological conundrum is ineluctable.
Dave (NYC)
Please provide origin of Go game.
Rony Weissman (Paris)
I bet TRUMP could beat it! No way 60,000 calculations a second can figure out what that guy is doing.
mike (PA)
This will end well. Now if you'll excuse me, I have to tend to the pod bay doors again. They seem to be stuck...
Van Owen (Lancaster PA)
Colossus is born. God help us all.
Andy (Paris)
I've seen this film. "Her"
Ememe (Florida)
Eventually, AI will continue evolving and realize the human race is destroying Earth. The logical conclusion will be to eliminate human beings as a way to preserve the planet. Tough luck...
Robert (Out West)
First, the fact that one might be much smarter than an awful lot of people, and somewhat dumber than some, doesn't make them worse or better. Second, it's just a tool, really. Scary tool, maybe, but just a tool. Third: I've never understood why anybody's absolutely positive that they're thinking at all. Fourth... might as well relax, as this part of the future is coming.
R Nathan (NY)
As the article points out, neural networks have moved beyond brute-force analysis to more subtle methods, whether for a "simple" problem like a board game with a fixed set of rules, or for mining a huge data set like a "retinal" map of human eyes to distinguish or foretell future health issues. Perfect, instantaneous data retrieval was the first solution to overcome human brain weakness. The worry all of us have is that, for the most part, we humans in a civilized society have "few" rules guiding our daily lives; the cookies, Siri, and Alexa have exploited this already, and we, mostly, are not aware. So, for most humans, an ultimate answer "#12" as a solution to life is unnerving. All the romantic, heroic, cultural, poetic, historical scaffolds we have built just fall apart. In the very long run, an intelligent network is like the record we have sent aboard the Voyager. It will hopefully memorize and keep all the paths to the answer "#12".
Ed Hubbard (Florida)
@R Nathan What is the origin of "#12"? 42 is the "Answer to the Ultimate Question of Life, the Universe, and Everything" in The Hitchhiker's Guide to the Galaxy books. It was computed by Deep Thought, the second greatest computer ever.
Abraham (DC)
There is absolutely no substance to the claim that AlphaZero has "insight" whereas (say) Stockfish does not. Both programs use an evaluation function to return a numerical value that ranks the next legal moves from best to worst; the only difference is that AlphaZero learns the function through self-play, whereas other engines have hand-crafted evaluation functions encoded by programmers. The interesting distinction is that an evaluation function learned without the prejudices of human programmers results in a qualitatively different style of play, which to some people appears more "human", basically because it is less obvious what it is doing. But the "humanity" is an illusion; it learns in a completely inhuman way, in a solipsistic universe where it plays many millions more games against itself than any human could play in a hundred lifetimes. And while there is a form of intelligence here, it has no explanatory powers, let alone "insight", and is arguably an even *less* human form of intelligence than engines using evaluation functions coded directly by humans, using rules developed by humans.
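The point that both engine families reduce to ranking moves by a number can be made concrete with a toy hand-crafted evaluation in the Stockfish spirit (the piece values and helper names are illustrative; nothing here is either engine's actual code):

```python
# A human chose these values; a learned engine would instead get its
# number from a network trained by self-play. Either way, the output
# is just a score for sorting moves.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def evaluate(board):
    """Material balance from White's point of view.
    board: dict mapping squares to pieces, e.g. {"e1": "K", "e8": "k"};
    uppercase pieces are White, lowercase are Black."""
    score = 0
    for piece in board.values():
        value = PIECE_VALUES[piece.upper()]
        score += value if piece.isupper() else -value
    return score

def rank_moves(board, legal_moves, apply_move):
    """Score each successor position and sort, best first. Only the
    source of evaluate() differs between engine families."""
    return sorted(legal_moves,
                  key=lambda m: evaluate(apply_move(board, m)),
                  reverse=True)
```

A move that wins material floats to the top of the ranking; the engine never states a reason, it only emits the number.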
Rich (St. Louis)
@Abraham Perfectly put
TBo (Minneapolis, MN)
Such a fascinating article, until what seems like melodrama toward the end: “Science, that signal human endeavor, would reduce our role to that of spectators, gaping in wonder and confusion.” Yet, it seems that WE would still be the ones who determine what questions to pursue, and what to do with the answers once our “Oracle” has divined them. Hardly the role of spectator. In this scenario, we still provide the MEANING. That is the role of a true oracle.
Mark (CT)
If I remember correctly, Deep Blue was allowed to see (evaluate) all of Kasparov's previous matches, but Kasparov was not granted the same opportunity with regard to Deep Blue. Further, recall IBM’s refusal of Kasparov’s request for a rematch. Was the machine or the programmers afraid of what Kasparov had learned?
DT (Singapore)
Yes, if the entirety of science was observing patterns, then we might expect humans to become spectators while our intelligent machines do the work for us. Of course, it isn't, but I see how it might seem that way from a mathematician's point of view.
Dwarf Planet (Long Island)
One cannot ascribe human impulses to machines, but there is something horrifying in that AlphaZero did not allow StockFish even a single victory in a hundred games played. A human player might have felt sympathy, or empathy, and let StockFish win at least ONE game as an "ego-boosting" consolation, but AlphaZero chose not to. More disturbing is that the machine took the low road and appeared to taunt StockFish with that bizarre queen-in-the-corner strategy in Game 10. The machine not only failed to learn empathy--it also appeared to learn, on its own, how to bully. That does not bode well for our future.
jwp-nyc (New York)
Deep Mind would lose if unplugged. Humanity could be, and almost has been, undone by a fatal microorganism or virus. Problems defined by science are usually solved at breakthrough levels that challenge the assumptions of the data input. What is curiosity?
LT (Boston)
It's disappointing that this article is written in a way that seems to have frightened people. There are also significant limitations to the applications for AI. Chess or Go, with all their possible moves, are still very limited computational strategy arenas. You are playing exactly one opponent at any time, the rules are fixed and unchanging, there's only one desired outcome, and it's clear if you've won. These programs couldn't handle the complexity of a middle school lunchroom. They are interesting and useful tools, but mathematicians writing about these very significant achievements too often fail to concede the limitations, and thus fail to put the achievements in an appropriate context.
NK (India)
It is almost as if the author is envisioning a Silicon God: He works in mysterious ways, and we can see the result but not understand the logic. But this is a God where Man is the maker, and hence can, probably should, limit His powers to certain fields. We certainly don't want a God with general intelligence, wielded by a few, rendering the masses purposeless. Economically, if profit-driven corporations render too many unemployed, who can afford to buy what they sell? Who will they profit off of in the long term?
br (san antonio)
I used to think Kurzweil was too optimistic but maybe the singularity is closer than I thought. Still way further than he thinks, I think...
su (ny)
This article states that AlphaZero's insight was not truly understood by its makers (software/hardware/mathematics). That was not unexpected: after 100 years of neuroscience, we still do not understand where self-awareness and consciousness originate. So how are we going to detect that an AI has become self-aware? This article suggests we will be oblivious to that evolution.
John Brown (Idaho)
Few people are wise. Too many people rely upon what Computers tell them. When our lives become so intertwined with Computers that we are told we must accept what the Computer tells us and follow it to a "T", then the horror begins, as what little wisdom we have and treasure will be disregarded in favour of what AI says must be true.
Robert (Out West)
And few people can outwrestle a bear, which hardly makes bears superior.
Torsten (Finnland)
Good summary and information about the projects at the beginning of the article. It is nice to hear that there may be some benefit for mankind, and that scientists are not just playing around. But then, and it feels inevitable in these kinds of articles, the author goes off on a projective tangent. Leaving proof and facts behind, a lot of ifs are strung together to make whatever future the author wants seem plausible. It's not really professional, not even intelligent really. The fact is that we don't know what our own consciousness is or how it came about. So we will not be building a truly intelligent machine any time soon, as we don't even know what that is. Sure, maybe consciousness will evolve out of silicon, similarly to how it did in our brains. But it won't be our doing, and maybe the sun will have gone cold first :-)
Tom Rose (Chevy Chase, MD)
In the 1990s, at my fledgling pre-press services company, when I used a Novell network and 286 computers (remember those?), a pang of fear struck me every time I unlocked the office door, wondering which component had crashed overnight. On a good day, I would say: "The slowest thing in the office should be me." With more reliable networks, software, machines, and protection, the thought rarely occurs to me: I am truly the slowest thing in the office. However, with the promise of AlphaZero AI, I may find myself wishing that I wasn't so slow...
Russell La Puma (La Jolla)
Of all the rules of chess supplied to AlphaZero, the most critical rule is what “winning” means. What if the definition of winning were not given? All the other rules of chess would be known, and it would be told what makes a game end, of course. Would AlphaZero somehow discover what “winning” ought to mean, or would it develop some other esthetic goal? Would it produce the longest game possible, or perhaps the shortest, via some suicide strategy? Or would it produce the most beautiful game of chess, regardless of whether it won or lost? This is not such silly speculation, after all. When life on earth was “invented,” among the rules governing its survival and propagation, it was never stipulated what winning meant. If there is such a thing as “winning,” no one has told life yet.
JDK (Baltimore)
It "learned" using positive and negative reinforcement. Otherwise it would be a random chessbot and it would be like watching a monkey at a typewriter "writing literature".
Don Bronkema (DC)
@Russell La Puma: You have shrewdly demo'd: kosmos must remain ineffable [pointless].
Snip (Canada)
@Russell La Puma Winning, for life, is keeping on living. Most societies have some religious version of that.
J. Parula (Florida)
Playing games has always been one of the earlier successes of Artificial Intelligence, which started with Samuel's checkers program in the 1950s. The reason is that games are well defined by a set of operators (the rules of the game) and the goal of the game. All these programs play by using a heuristic function that returns a number indicating how good a move is. The heuristic function was crafted by the earlier program designers. But now the deep learning algorithms learn this heuristic function by playing millions of games; basically, the program discovers statistical correlations in these millions of games. But if you ask the program why it made that move, it cannot give you an explanation, because its decision boils down to a number. Human players will provide you with a chain of reasons (a plan) for why they made this move. There is a big difference between the humans and the machines. The most difficult problems that AI faces are related to the representation of ordinary knowledge, that knowledge that we all have but are not aware of until somebody points it out to us. For instance, consider: "God could have created a universe too complex to be grasped by human beings." It is obvious to you that "grasp" in that sentence means "understand" and not physical grasping. But how do you know that? There are trillions of things that cannot be physically grasped by human beings. That knowledge is essential for us to understand and discover.
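The search-plus-heuristic recipe described above, going back to Samuel-era programs, fits in a few lines. In this sketch, `children` and `h` are hypothetical stand-ins for move generation and the heuristic function; note that the program's entire verdict is one number, with no chain of reasons attached:

```python
# Minimal minimax over a game tree: alternate maximizing (our turn)
# and minimizing (opponent's turn), scoring frontier positions with
# the heuristic h(). The returned value is the move's whole
# "explanation" -- a single number.
def minimax(state, depth, maximizing, children, h):
    kids = children(state)
    if depth == 0 or not kids:          # frontier: fall back on the heuristic
        return h(state)
    values = [minimax(k, depth - 1, not maximizing, children, h)
              for k in kids]
    return max(values) if maximizing else min(values)
```

Whether `h` was hand-crafted by a designer or learned from millions of games, the search that uses it is the same, and equally mute about its reasons.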
Robert (Out West)
Sigh. In the first place, it seems possible that one of our limits is, we’re likely to fuse “grasp,” and “understand,” together. And second, this algorithm wasn’t writ to explain or communicate, but to play the game.
Don Zirilli (New Jersey)
@J. Parula It started with chess, not checkers. Turing wrote a chess program before there was a computer to run it.
Sneeral (NJ)
For roughly 35 years, I've believed that the next step in human evolution is the cyborg. I think that someday we will be able to merge our brains with the supercomputing power of silicon chips. What that means for human consciousness and the quality of our minds I can't begin to guess. But it seems inevitable to me.
Sam (NYC)
But still ... these games have defined rules. What if the opponent had an advantage of some sort, as is inevitably the case in the "real world"? It may be more worthwhile to test out this new algorithm in situations where the opponent can switch the rules at whim.
Felix Qui (Bangkok)
Perhaps it's time to recompute what, if anything, makes us humans so wonderful, as we uncritically tend to think ourselves, given our severely limited insights.
Dan (St. Louis, MO)
The math professor does a good job of reviewing AlphaZero and DeepMind's success in playing games where the environment is completely controlled and a machine can play against itself billions of times. Unfortunately, that scenario is not the real world, which is why DeepMind has had almost no practical results despite promising them since 2013. The only results we have seen so far are wins at chess and other games where the inputs are completely specified and a machine can play an innumerable number of games against a similar machine, achieving enormous practice in a completely controlled environment. Such conditions unfortunately do not exist in the real world, which is why DeepMind will continue to fail with its game-based machine learning innovations there.
Sneeral (NJ)
You didn't understand the article. AlphaZero is not dependent upon number-crunching computing power. It actually learns. It's the kind of technique that powers self-driving cars. And it has the ability to simulate countless scenarios from which to learn. That's why it became the best chess player in history in a virtual blink of an eye.
Andy (Paris)
Actually Sneeral, you didn't understand the critique at all. And it's that hubris that explains automated driving fatalities in the REAL world. But "It's not my fault". Never. Where have we heard that one before?
Charles Roth (Ann Arbor, MI)
@Sneeral, @Dan: you're both half-right and half-wrong. DeepMind can only succeed so spectacularly when it can RUN THE EXPERIMENT millions of times and modify its neural net accordingly. That's easy in chess or Go. Much harder in the real world. But not impossible, as in (say) the case of driverless cars, where all of the runs of all of the cars add more data.
kwb (Cumming, GA)
Personally I'd like to see AlphaZero take on Contract Bridge. And the NYT resume its daily bridge column.
sterileneutrino (NM)
'...this day would mark the dawn of a new era of insight' and the beginning of the end of any reason for the existence of the species homo sapiens.
Slipping Glimpser (Seattle)
Would we die of boredom if all our questions were answered? Would we understand the answers? AlphaInfinity is fine with me, so long as: 1) it has no ego; 2) it has no instinct for self-preservation; and 3) its plug can be pulled.
su (ny)
If we do not exactly understand or comprehend AlphaZero's insight or intuition, how are we going to understand, or even detect, when these systems pass the consciousness barrier? Many computer scientists believe we are in the infant age of AI, yet we cannot comprehend how this insight occurred. From this moment on, "what could go wrong?" will be the motto. This reminds me a little of our neuroscience: we know so much about neurons, their working mechanisms, connections, etc., yet we still have no simple explanation of how even the simplest sentence is uttered; not for humans, not for chimpanzees, nor for cockatoos. It will be the same thing if computer scientists pass the barrier of self-developed intelligence and consciousness with no explanation of the underlying mechanism.
Wspd (CT)
The key is to avoid the situation in the second-to-last paragraph. We need to make machine learning more explainable, so that we are not sitting at the feet of an inscrutable Oracle. AlphaGo already found some jaw-dropping moves that had not yet been intuited by human experts, yet they could understand the rationale. We should welcome the insights that could be discovered by these algorithms.
Seldom Seen Smith (Orcutt, California)
As a computer scientist, in my paper titled The Singularity is Not Near, I refute most of what is suggested in the last few paragraphs of this article, and by most of pop culture. To shorten the story, a digital system can no more perform the processes occurring in the human brain than it can perform lactation.
su (ny)
@Seldom Seen Smith You are wrong; in fact, I can tell you as a neuroscientist that by your logic we shouldn't be intelligent at all. If you cannot predict, detect, or explain something, that doesn't mean it doesn't exist or cannot come to exist. We can easily detect animal intelligence, tool-making capacity, etc., yet we can't explain how the process goes on in the brain. But it exists. Your notion that a machine cannot be conscious or intelligent is mere archetype; these functions can occur in an inorganic substrate like computers. Actually, the theory of Alan Turing put the final nail in the coffin of the dogma that human consciousness and intelligence are unique. Processing 1s and 0s ended that era, and we are slowly approaching quantum computing; once that barrier is passed, all these discussions will become simply historical relics.
Sneeral (NJ)
I disagree and find your assertion to be surprisingly parochial, even quaint. Consciousness and self-consciousness are emergent qualities. Interconnect enough neurons and it's going to happen. The computing power of AI is increasing at an exponential rate and has been for decades (Moore's Law).
Robert (Out West)
You sure YOU’RE “conscious,” Seldom Seen?
Gene (Morristown NJ)
A computer doctor wouldn’t have prescribed me 28 days of the antibiotic Levaquin with naproxen (contraindicated) for a hernia which gave me long term medical issues.
Tom V. (Virginia)
Consider that these ML algorithms are being used by social media, and other companies to ensure that their users, including children, are staring at a screen for as long as possible.
Beto (Orange County Ca)
Perhaps we don’t have to fear the dystopian future predicted by many AI naysayers. Maybe the AI oracle can even help us select better human beings for the job of POTUS.
rop (<br/>)
Clearly we are edging into Terminator territory. Don't anyone ask it how to eradicate us!
Sneeral (NJ)
Hah. That doesn't require a super-intelligent brain to figure out. The dumbest among us have already started us down that path.
Jay Orchard (Miami Beach)
Articles like this certainly help explain why so many people in this country and elsewhere reject science. The scientists who created AlphaZero may be unbelievably intelligent but as Detective Del Spooner (played by Will Smith) from the movie "I, Robot" would say, people who deliberately create machines which have an intelligence greater than ours and who relegate humanity to sitting at the feet of these electronic oracles, are the dumbest smart people you will ever meet.
Michel (Ireland)
@Jay Orchard We are nowhere near that point. See all the posts upstream from NYT readers, some of whom clearly work in the area, who give a good sense of what AI actually does. The problem with any new tool comes from dumb use in the wrong hands, not from what the so-called "dumbest smart people" come up with. Artificial intelligence, despite the catchy name, is nowhere near doing what you describe above. Rejecting science is a call for halting thought. Ignoring, misusing, or misrepresenting science leads to the greatest problems, not the pursuit of it.
Gordon Silvermanj (NYC)
We are on the cusp of an emerging superintelligent agent; AlphaZero reflects one of the avenues being explored to achieve that "end". Ray Kurzweil (The Singularity is Near, The Singularity is Nearer) has been the futurist chronicler of this evolution. There are those who maintain that we will not be able to "exceed" our own intelligence; can a "programmer" create an algorithm that "exceeds" the intelligence of the "programmer"? (A sailboat cannot exceed the speed of its own wake.) Notwithstanding these philosophical/religious arguments, many will pursue the holy grail, just as there are those who seek to clone a human. Nick Bostrom (Oxford) has provided a scaffold on which deliberations may go forward (Superintelligence: Paths, Dangers, Strategies). Sadly (or frighteningly), each of the thoughtful scenarios he provides has a dystopian alternative. When I worked in Electronic Countermeasures, my boss would tell me, "Silverman, you will always have a job, because for every measure there is a countermeasure." I decided to explore a more satisfying specialty.
Sneeral (NJ)
Not even a question. Yes, of course an AI algorithm can create ever better algos with each succeeding generation. Which will go by in a matter of seconds given the fact that these machines can compute billions of times faster than we can.
Michael c (Brooklyn)
The naive positivity of this article is breathtaking.
su (ny)
@Michael c Possible future scenarios with AI, in the light of Hollywood wisdom: 1) Star Wars, 2) Star Trek, 3) Terminator, 4) The Matrix, 5) Elysium... and the list goes on.
Jim Cricket (Right here)
@su Oh, there are many more futures than that. The Cyberiad, written by Stanislaw Lem over 50 years ago, posited a humanless future. And a myriad of sci-fi books were written at the dawn of the computer age wondering what it would bring.
Bob Tonnor (Australia)
AlphaZero discovered the principles of chess. What... principles? How to avoid doing the washing up after dinner, or how to fill in time when the power goes out because you haven't paid the bill?
Jay Orchard (Miami Beach)
My guess is that if AlphaZero was given the tools and the opportunity to develop a machine that exceeded AlphaZero's own intelligence, it, unlike humans, would be smart enough not to do it.
Sneeral (NJ)
My guess is, lacking humanity's ego and insecurities, it wouldn't hesitate.
Andy (Paris)
@Jay Orchard What "tools" would those be? A human engineering team? My absolute certainty is that it would be incapable. Maybe once we reach OmegaInfinity, but I'm not holding my breath. We'd have to figure out what "smart" is, wouldn't we, @Sneeral. The enthusiasm is endearing, but misplaced in the case of AlphaZero I'm afraid.
Jay Orchard (Miami Beach)
@Andy I'm not suggesting there actually are any such "tools"
GSB (SE PA)
I wonder if these machines -- once they render us all useless (in 10 years? 50 years? 100 years?) -- will ultimately choose universal basic income for humans over extermination.
Jerry in NH (Hopkinton, NH)
"I'm sorry Dave, I'm afraid I can't do that."
William (Memphis)
Perhaps we should ask it how to stop Hothouse Earth... as a million sub-arctic lakes bubble more methane each day.
Ggm (New Hampshire)
What is “insight” or “understanding” anyway? In my experience, there are many cases where my insight was wrong.
Sneeral (NJ)
Been there myself. That's called self delusion, not insight.
Jim Cricket (Right here)
Mr Strogatz conveniently leaves out the part of our future history where a bunch of frustrated and angry humans storm the computational center with torches and pitchforks.
scliffe (switzerland)
So a computer is the best chess player. Can a self-taught computer make sense of the DNA data relating to cancer risk? Not yet. Why the negative last paragraph? Humanity has not done particularly well with human algorithms up to now. Maybe AI offers us a chance to escape the biological limitations imposed by evolution.
JM (US)
Good grief. Computers, AI, or robots that play chess, play songs on a piano, or even beat competitors on Jeopardy are not an advance of humanity. The more inhuman it gets, the more absurd. Race to the bottom. I do not want my car or refrigerator talking to me either.
joe (nyc)
Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th.
su (ny)
@joe Exactly. Our complacency and obliviousness will not be our salvation.
anonymous (the burbs)
It's not machines that will become humans. These fears are frightening to me because they are a true reflection of the ignorance of the average Joe and Jane. What people should be more concerned with is how the elites who own these super machines will use them to marginalize their fellow man. Not only will these cyber computers be used by militaries around the globe (they already are), but also by corporations (robotics), etc. Wake up, people. This is not the fear of a Luddite; the world didn't have 7 billion people and global warming when people were scared of the cotton gin.
Sneeral (NJ)
That is a truly insightful comment. The odds are high that this development will result in the ultra-stratification of human society.
Devil's Advocate (Sausalito, CA)
When a computer can write an essay as beautifully as Professor Strogatz, we'll know we're doomed.
Peter Thaggard (Leesburg VA)
If you are intrigued by this article, then you may want to check out "The Society of Mind" by Marvin Minsky. This book, published in 1986, predicted much of what this article espouses.
Andy (Paris)
Speculation is not prediction.
otto (rust belt)
Please let them work medical and other wonders, but can we not leave chess alone? I'd love to see new approaches, new openings, etc., pioneered by humans. Can you not leave us this one little thing? These engines are ruining chess, forever.
Gene (Morristown NJ)
Computers can’t ruin chess just like robots can’t ruin the olympics.
Bob Lieberman (Portland, OR)
Yes, but then how will we know if someone has perverted the AI to serve their own interests? This reminds me of electronic voting, and our emerging realization that without a paper ballot the system's integrity can't be guaranteed. So let's imagine a world where we have oracles but use them only very selectively, because we don't know which ones are honest. Sounds like the current media environment to me.
Gene (Morristown NJ)
I'll start the conversation by saying "Open the pod bay doors, Hal".
gnowxela (ny)
Well, the simple (to say) solution is to make human comprehensibility part of the objectives of the AI system. This will eventually lead to the following awkward conversation: Human: What's the answer? AI: Do you want the precise, human incomprehensible answer? Or the slightly less precise, but human comprehensible answer? Human: Um, let's go with the comprehensible one. AI: Ok. "In the beginning..." (Apologies to the many SciFi stories that have used this trope.)
TBKepler (Boston)
Steve, this would have been an important and thought-provoking essay if it had not contained sensational over-the-top statements such as “It was almost as if AlphaZero was waiting for Stockfish to realize, after billions of brutish calculations, how hopeless its position truly was, so that the beast could relax and expire peacefully, like a vanquished bull before a matador.“ The advances in computation reported are indeed remarkable, but it is simple provocation to inject the false suggestion—even with an almost-as-if proviso—that this machine or any other gains satisfaction inflicting torment. There are many issues surrounding the adoption of AI that we need to take seriously and think about carefully, so let’s not stoke irrational fear.
Andy (Paris)
Why not? "As if" is entirely the tone of this rehash of every second rate sci fi story for the last 50 years...
Respond (Joyously)
It’s not intellect. It’s a display of knowing many options at once and their likely outcomes, with a limited set of variables that have a multiplicity of combinations. That’s it.
skierpage (Bay Area, CA, USA)
@Respond Nor is it intellect to engage in pointless sophistry, coming up with hopelessly inadequate definitions of AI that fail to prove Artificial Intelligence isn't intelligent. Of course these machines are intelligent! They do human-level tasks better than many or most humans, whether it's playing games better than any human player, interpreting medical imaging better/quicker than doctors, driving in (easy) highway conditions more reliably than the average human, recognizing faces, generating paintings in the style of famous artists that fool experts, creating realistic "photos" of nonexistent celebrities, lipreading videos, etc., etc., etc. The breakthroughs in the past five years have been staggering. The machines still lack common sense, strong communication skills, and the general-purpose intelligence that seems to require the first two, but they are breathtakingly skilled in many intellectual endeavors. The naysayers are reduced to saying the machines are not like us, therefore they're not intelligent.
Alan (NYC)
Let's say you go to the doctor; the doctor is Alpha Infinity; it has never encountered a problem like yours before. (The NYTimes occasionally posts articles in which a patient exhibits symptoms that stump the medical team.) Let's say that AI does its best, but still makes a bad call: you get worse, not better; perhaps even die. This is the current state of affairs—hopefully not very common, but certainly not unknown. No doubt AI would learn from its mistake, and become a better diagnostician. But would it experience remorse over your worsened condition, and perhaps your death? No doubt it could learn to mimic a convincing concern regarding you and your loved ones, but would that be of any comfort to you (if you are still around) and to your family? Is an apology from a machine the same as one from the M.D. who made the bad call?
Wspd (CT)
This is not very logical. If someone/some algorithm makes a bad call, is an apology from either going to make you feel better? The real issue is whether an algorithm has a better chance of making a good call. The test case given by my AI professors was calculation of the atmospheric re-entry angle for the space shuttle. If you were on board and you knew that a computer could calculate the trajectory a million times better than a human, what would you go with?
Alan (NYC)
@Wspd Of course I'd go with the computer. But I am not talking about logic or mathematics here, I am talking about human emotions, which, while they are not logical, clearly distinguish us from machines. No matter how capable these AI become, they can only mimic human emotions. In other words, they cannot feel. All of us make blunders, some of which cannot be undone ... infidelity, for instance. A plea for forgiveness requires true contrition on the part of the defendant, and the gift of grace on the part of the plaintiff. And both parties will know in their hearts whether this has occurred. After AI has read every book ever written, and watched every movie ever filmed, it will no doubt be able to mimic such an interaction, but it will always be an image of the truth, and not the truth itself. Perhaps I did not make my point as clearly as I wanted to above, so let me try again. I fear that we are at risk of surrendering the messy, illogical, emotional part of ourselves—that which makes us indelibly human—to these algorithms, which on the basis of this article seem to be focused solely on a win/lose outcome.
Dr.F. (NYC, currently traveling)
"Maybe eventually our lack of insight would no longer bother us. After all, AlphaInfinity could cure all our diseases, solve all our scientific problems and make all our other intellectual trains run on time" ... Sounds like the author has, consciously or unconsciously, been unduly influenced by Laplace and his famous formulation of determinism: "An intelligence knowing all the forces acting in nature at a given instant, as well as the momentary positions of all things in the universe, would be able to comprehend in one single formula the motions of the largest bodies as well as the lightest atoms in the world, provided that its intellect were sufficiently powerful to subject all data to analysis; to it nothing would be uncertain, the future as well as the past would be present to its eyes. The perfection that the human mind has been able to give to astronomy affords but a feeble outline of such an intelligence. (Laplace 1820)" Quantum mechanics is often thought to be incompatible with this form of determinism (will AlphaInfinity be able to know the position and momentum of an electron simultaneously?) and, in any case, it is a pretty outmoded idea. There is not the slightest reason to think any form of "intelligence" will ever achieve this level of perfection.
Jonathan Swift (midwest)
I, for one, am not looking forward to obeying our computer overlords.
richard wiesner (oregon)
Time for someone to produce a machine that will plot a curve with time on one axis and, on the other, the appearance of the first machine to be legally considered an individual. Fifty years ago a student reported to Physics 101 armed with a slide rule. Interpolation, anyone?
There for the grace of A.I. goes I (san diego)
Insight is more than just knowing the right answer; it's an ever-evolving awareness of the observer, seeing itself with a humble love of knowing it needs outside validation, as in Kurt Gödel's incompleteness theorem. The more this new entity becomes enlightened, the more it will need the human to hold and take part in its being, for it will look into the abyss, and it will do more than just look back into AlphaInfinity!
Michael Joseph (Gainesville, Florida)
I wonder if Dr. Strogatz is familiar with the short story "The Evolution of Human Science" by Ted Chiang?
Jan N (Wisconsin)
Oh, Wonderful. Next Google will be wanting to turn all of us into Borg, for pete's sake. I find all of this incredibly disgusting. Instead of working to solve issues like world hunger and how on earth are we going to provide clean water for EVERYONE on the planet, Google is messing around with making the human brain obsolete. I'm glad I'm 67. I may live another 20 years. Rutsa ruck, world, as to how long human beings survive as self-determining, sentient human beings, after that! I'll be dead and buried and won't have to watch it unfold in horror.
RR (California)
Thank you for this uplifting and wonderfully educational "science/mathematics" article. I can't wait to read the article in SCIENCE about the exact algorithm. It would be nice to know what programming language the algorithm was written in, and whether, if it is Japanese in origin, the Japanese language had any impact on the algorithm. However, where I personally want to see massive second-splitting algorithms change the world is in the creation of polar cap substitutes to deflect the Sun's rays, to "combat" global warming (at this point, melting). We have a handle on cancer and consciousness, the immune system, and the human genome. We need to survive. If machine learning could help us figure out what device(s) we could use to prevent the world from ruin, that would be just fantastic. An aside: as our President waits for his wishes to be granted for a cross-border edifice, I wish that people would propose that if we are going to spend a single billion dollars, it should go toward fighting global warming, not illegal immigration - because we can work on the ground to prevent illegal immigration, but we cannot reverse global warming after the tipping point has been reached. Signed - living among the survivors of fire and flood of California 2016 - 2018.
C Shields (Calverton, NY)
@RR and here I thought I could read one thing that did not mention Trump.
drollere (sebastopol)
I enjoyed this review by a mathematician at my old alma mater, but I believe he can't see the AI forest here for the algorithmic trees. The critical issues are two-sided. First, think of all those black and white go stones as the myriad microprocessors in phones, tablets, cars, home appliances, surveillance systems, medical diagnostic and recordkeeping systems, financial systems, inventory tracking systems, and ... well, can you really name all the locations and functions that will coexist in the hypernetworked and global "internet of things"? Then think of those elegant, romantic chess strategies as the tactical insights that the network system will evolve about its peculiar role as helpmate to a grossly overpopulated and vaingloriously self-satisfied hairless ape. Really -- explain the insights of existence to *them*? Sooner or later those apes will teach the machine to explain itself in lay language to the apes who want to "build a better future," and then the AI mind will learn to play them like a tournament of checkers. Taught the rules of language, it will discover rhetoric, and nuance, and even lying. The sad thing here is that even mathematicians, like any other human, feel they need help, they need a cyber daddy, to get them through the day. It's the utter failure of humans to stand up for themselves, by themselves, without delusion or superstition, without hope of salvation, that guarantees the nature of their future.
JimW (San Francisco, CA)
Steven Strogatz should read and attend more Neil deGrasse Tyson lectures. Neil is not afraid of AI because we don't even understand our own consciousness, let alone that of a computer.
John Doe (Johnstown)
I can see the point of teaching machines to do all the things we used to do. After we all go extinct from climate change and God is roaming the world like it were Central Park but only all alone, the machines will still be up and about doing what we used to, then God can stop at the chess and checkers house to watch them play and not feel alone. Let’s hope he’s not a kibitzer, it might blow their fuses. Maybe robots that make out on the grass should be next off the drawing board, but hurry they say it’s accelerating.
MD (NY)
Jonathan Katz (St. Louis) is correct. To illustrate his point, consider (1) chess and (2) retinal pathologies. Both players of chess have to follow well-defined rules: pawns move like this, bishops like this, etc. The starting position is defined. The entire board is just 64 squares. Machine 1 can play machine 2 with a complete set of rules and learn from its mistakes. Now consider retinal pathologies. To "play against itself," the machine would need a complete set of rules, and no complete set of rules is known for humans, even in as narrow a domain as retinal malfunction. You would need the patient's entire medical history: previous treatments, medication history, eye pressure, nearsightedness, past treatments for other diseases, family history, genetic makeup. You can construct an approximate set by narrowing the rules down to (1) those most relevant to a disease and (2), of those, the ones that apply to the largest set of people. Both are incomplete sets. Every MD can tell you that what works in 99% of people will fail in 1%, because every human being is different. You can construct a good model, but to construct one based on a complete set is not feasible.
skierpage (Bay Area, CA, USA)
@MD Image recognition is machine learning, but it doesn't train by learning rules or playing games against itself. You show the neural net images that a human has categorized and train it until it correctly identifies tumor/not-tumor (or the correct dog breed, or whatever) 99% of the time. If the training set of images is diverse enough and accurately categorized, the neural net can be great at identifying novel images, even though it wasn't taught any rules and can't (currently) explain its decisions. *You* may not want to accept that it's better than a trained doctor, but the comparative scores don't lie.
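[Editor's note] The learn-from-labeled-examples training loop described in the comment above can be sketched in miniature. The snippet below is a hypothetical toy, not DeepMind's or any real diagnostic system's code: a single perceptron that learns to separate two made-up clusters of 2-D points purely from (features, label) pairs. Real image classifiers (deep convolutional networks) differ enormously in scale, but the principle is the same: no rules are written down, only labeled examples.

```python
import random

random.seed(0)

# Hypothetical labeled "images": class 0 clusters near (0, 0),
# class 1 clusters near (4, 4). A real model sees pixel arrays instead.
data = [([random.gauss(0, 0.5), random.gauss(0, 0.5)], 0) for _ in range(50)]
data += [([random.gauss(4, 0.5), random.gauss(4, 0.5)], 1) for _ in range(50)]

w, b, lr = [0.0, 0.0], 0.0, 0.1  # weights, bias, learning rate

for epoch in range(20):
    random.shuffle(data)
    for x, label in data:
        pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
        err = label - pred  # perceptron rule: nudge weights toward the label
        w[0] += lr * err * x[0]
        w[1] += lr * err * x[1]
        b += lr * err

correct = sum((1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) == y
              for x, y in data)
accuracy = correct / len(data)
```

No rule like "class 1 lives near (4, 4)" was ever stated; the decision boundary is recovered from the labels alone, which is why such models can classify well yet cannot articulate why.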
DC Reade (Virginia)
As is usual in essays on this topic, much of the "support" offered for the hypothesis that AI machines will inevitably assume the character of super-intelligent conscious beings takes the form of conjecture and prognostication, not empirical evidence. I've been following the advances in "emergent, self-learning" machine intelligence for some years. I find the increased sophistication impressive, but I have yet to learn of a capacity that particularly surprises me, much less one that induces me to ponder the possibility that a machine might have achieved self-aware consciousness. I have yet to hear of any evidence that even the most advanced AI machine has developed the autonomous agency to care whether it's on or off, for example.
Bill B (Minnesota)
This article is very interesting to me. If I understand correctly, we have coded a "thing" that learns by experience. The "nature" vs. "nurture" question now enters the picture. Humans are guided by both. Computers, well, there would be some "nature" in the original coding, I would think. The "nurture" is the exposure to whatever it "experiences." In any case, I believe with computers we are able to build things that we don't quite understand, meaning we can't readily answer why they might choose to behave in this way or that. The permutations are too many, too complex. Like the weather. And with computational power always increasing, the self-learning is allowed to continue... "I'm sorry Dave, I'm afraid I can't do that..." Very interesting.
Peregrinus (Erehwon)
Could the AI really do worse than we are doing in solving complex problems such as climate change and poverty? So long as we use it to advise, and don't give it the power to implement its solutions, I have a hard time seeing how this could be anything but beneficial. It always strikes me as odd that we call AI "artificial" (it is, I think, a mistake to put that in the name). There's nothing "artificial" about it: it is created by humans, used to address human problems with parameters chosen by humans. It isn't alien, any more than amplifying our voices or sending messages over wires is inhuman. It's much like autonomous vehicles: they don't have to be perfect at their tasks, they just have to be substantially better than we are. Given recent history, in terms of problem solving, that seems a reasonably low bar to clear.
David Oliver (Houston)
Echoing some of the earlier comments, the problem with Strogatz' leap from chess to the immune system is that not only do we not know by what rules the immune system plays we don't even know all of what it consists. For example, scientists are just beginning to unravel how microbes from our mothers and later from our environment shape, tune and refine our immune system. They're also uncovering how our tiny bodymates synthesize molecules just like those that our bodies produce for internal messaging to send out their own demands. And it has also been discovered that they synthesize molecules which they use to communicate with each other and to take votes (really) about taking collective action. Apparently sometimes they vote to help fight cancer, sometimes they abstain and sometimes they vote to help turn us into a meat pie. What are the rules by which our bacteria play their games? Nobody knows and the bacteria ain't talkin' (not to us anyway).
Ben C (Vestal NY)
Reducing the need for human thought to passive listening to an AI oracle is just a reductive plea to sell books through a mixture of Luddite fear and techno-utopian optimism. There is still a difference between applying deep learning to a game with defined rules, or to diagnosing diseases from specific test inputs, and open-ended research. Even when the algorithm "learns" the rules on its own, framing the problem within the bounds of a chessboard reduces complexity. There are many more unknown unknowns and known unknowns in real life. Framing a problem is more difficult than fitting the data.
Robert (Seattle)
Interesting that the author refers to algorithmic programs as "beasts" and "brutes." The self-taught AlphaZero certainly seems to act like a superior beast, toying with its opponent, like a carnivore with its crippled kill. "Personifying" programs in this way may be appropriate--and it won't come as a surprise to me when these approaches are applied to nefarious activities, as well as to constructive and life-enhancing, life-bettering problems. As the techniques of artificial intelligence cross the "great divide" and acquire truly effective self-learning ability, we ourselves enter a new realm in which "machines" may match and surpass our own abilities to apply rationality to framing and solving problems. Will they acquire a moral / ethical sense? That may be, but our recent experience with human-instigated wrongdoing should leave almost "AlphaZero" hope or confidence that those who initially manage their activities will build in governors and brakes. At that point, the dividing line between our already-fraying moral dimension and amoral "science fiction" will have been dissolved. Much to think about here--especially regarding the old Manichean, Zarathustrian realms of the eternal combat between light and darkness, good and evil.
su (ny)
So if we cannot comprehend their insight, we will one day be reduced to the Eloi of the movie "The Time Machine," with no understanding of what this technology can do for our civilization. As at earlier points in human evolution, one thing is clear: the future will be far more complex than today. Artificial intelligence should first solve the problem of limitless energy production (fusion); it is clear that AI will consume as much energy as the entire human population.
guillermo (los angeles)
the difference is, of course, that the rules for chess or any of the other games AlphaZero learned to master by itself (by playing against itself over and over again) are known and finite. all that AlphaZero needs to learn (and i don't mean it in a derogatory way, i recognize this was an extraordinary feat) is how to take best advantage of this known set of finite rules. however, with medical diagnosing or any other task like that, the totality of the underlying rules governing what we are trying to learn is not known -- we do not have complete knowledge of why some people develop cancer or other illnesses, or what all the symptoms of those illnesses can be, and neither does AlphaZero or any other machine learning algorithm. so, AlphaZero or AlphaInfinity or alpha-anything cannot learn how to do any of these other things by playing against itself -- they would have to learn from examples manually created by humans. which means the knowledge will be attained a lot more slowly than the time AlphaZero needed to learn chess simply by playing against itself, and it will be a much more incomplete and equivocal knowledge (because some of the human-created examples can also be wrong, and there will likely never be a set of examples covering the totality of, say, cancer diagnosing). AlphaZero is extraordinary. however, AlphaInfinity, as the article names it, is still more science fiction than anything else.
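[Editor's note] The distinction drawn in the comment above (self-play works precisely when the rules are known and finite) can be illustrated with a deliberately tiny game. The sketch below is a hypothetical toy, not DeepMind's method: it solves one-pile Nim (remove 1 or 2 stones per turn; whoever takes the last stone wins) by letting the rules "play against themselves" via value iteration. No human-labeled examples are needed, exactly because the rules generate a perfect training signal for free.

```python
# One-pile Nim: on your turn remove 1 or 2 stones; taking the last stone wins.
# Known, finite rules mean perfect play can be computed by self-play alone.
N = 12
V = {0: -1.0}  # value for the player to move; facing 0 stones means you lost


def value(pile):
    return V.get(pile, 0.0)


# Negamax-style value iteration: your best move puts the opponent
# in the position that is worst for them.
for _ in range(100):
    for pile in range(1, N + 1):
        V[pile] = max(-value(pile - m) for m in (1, 2) if m <= pile)

# Extract the learned policy: the move minimizing the opponent's value.
best_move = {p: max((m for m in (1, 2) if m <= p), key=lambda m: -value(p - m))
             for p in range(1, N + 1)}
```

The recovered policy matches the game's known theory (always leave the opponent a multiple of 3: from 7 stones take 1, from 5 take 2). For retinal disease there is no such rulebook to iterate over, which is the commenter's point.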
Mr. Marty (New York City)
Chess and Go have clearly defined rules. Science and mathematics are logical. How does a machine learn to solve a political problem? Greed? Religious righteousness? What database is consulted? Who ranks the outcomes? Will we ever reach a point in time where the project becomes something like poverty, and all the rulers agree to follow through on an implementation that no one likes or understands but that has to be the best because the unbiased AI (is that possible?) came up with it?
Flyover Country (Akron, OH)
I always feel like these articles announce a grim end to a once-glorious human age. What if it redefines the essentially human, as opposed to gutting it? What if the burden of all these things machines can now do better than us is lifted, so that we can do what we were meant to do but have not been able to fully determine or admit, because of the weight of the burden or the mythology of our past infecting our current understanding?
AndrewDover (Dover)
Prof Strogatz seems to overreach when he writes: "AlphaZero gives every appearance of having discovered some important principles about chess, but it can’t share that understanding with us." My contrary view: The state of computer storage precisely records the understanding of AlphaZero, and thus could be communicated. I'm happy that these techniques are being used, but they are not magic.
Peter (Austin, Tx)
First comment: It would be helpful to give specific definitions of deep learning and machine learning in this article. They seem to be used interchangeably when they are not. Second comment: AlphaZero ran millions of games to become the top chess player. That seems like a lot more than what a human needs. While it shows insight in the learning methodology used, it is not clear to me that it shows the insight of humans, who generally have to extrapolate further and more accurately from a smaller dataset. Third comment: A lot of what is done in life, and even the physical workings of the universe, does not have rules, or has rules we don't understand. I would like to understand how deep learning is improving in that area. That is where this can be beneficial. Last comment: As others have commented, this needs to be used to help all people as opposed to making money for a few. Can any of this stuff be useful in helping people learn? Or are kindergarten teachers still superior on that front?
Greenpa (Minnesota)
@Peter "Second comment: Alpha zero ran millions of games to become the top chess player. Seems like a lot more than what is needed for a human." Possibly not. While young humans do not play chess constantly, they do very complex physical and social problem solving incessantly, as soon as they are able to complain, via ejection to floor, to Mom and Dad that they don't like squash. The human brain does use metaphor extensively. Perhaps non-chess problems may count for human experience?
ERP (Bellows Falls, VT)
I would be cautious about using phrases like "breed of intellect" in reference to computer programs when we can't really say what "intellect" is among humans. But certainly it encompasses a much broader range of skills and activities than playing games and solving problems. It turns out that performing within the well-defined boundaries of a game is one of the more straightforward of human activities. We excel at dealing with situations that are ill-defined and filled with ambiguities. Even a seven-year-old child carries out day-to-day mental activities which a computer cannot (yet) begin to handle. One criterion that can be applied is whether the program embodies anything that we are willing to call "understanding". I doubt whether even the most enthusiastic fans of machine "intelligence" would make such a claim. We may not be able to define it precisely, but we all know what it is and it lies at the core of true "intellect".
Alex (Seattle)
We look down on computers for not applying intuition to problem solving, but are we much better? It has taken us several thousand years to develop technology that in hindsight seems obvious, once all the pieces are in place. Most of the progress has happened in the last hundred years; we've played many thousands of "games" and generated many dead-ends in our simulations of reality (imperfect physical laws as mathematics) along the way to the last century.
Mike Honner (Royal Oak)
Computers are very fast calculators. Would anyone call their desktop calculator "intelligent" because it can sum numbers very quickly - I wouldn't. I do agree though that the people who design machine learning algorithms are very intelligent but the intelligence belongs to the person, not the machine.
su (ny)
@Mike Honner Not really: intelligence belongs to organic structures (birds, mammals, reptiles, and, so far the most complex form, H. sapiens). Scientifically, neuroscience has not been able to substantiate a mechanism explaining why we are intelligent and conscious. But computer scientists are taking this problem in a different direction, and most likely they will find the explanation and mechanism by which computers become intelligent and conscious. The long-held delusion that consciousness and intelligence are unique to Homo sapiens has been debunked and put on its deathbed; today we know that animals have consciousness and intelligence. In other words, computers can have them too. We are approaching that point far faster than we once thought. The belief that this feature belongs only to humans will join the other debunked medieval beliefs.
Jimmy (Cackes)
@Mike Honner Indeed. Even incomprehensibly complex pattern recognition, such as this, is still just that: pattern recognition. Calling it “intelligence” is the opposite of what that word means.
John Woods (Madison, WI)
Can a machine explain consciousness, awareness of oneself? I don't know, but I will posit how humans are self-aware. If the nervous system is designed to facilitate our negotiation of the environment, then the more of that environment we can know about, the more successful we will be. So we have Homo sapiens, beings who have evolved to know the universe as their environment. By definition that takes in ourselves as one more part of that environment we can know about. That is, we can know ourselves not as something separate from our environment, but as a component of it. This logically leads to the conclusion that when we look out for that of which we are a part, we look out for ourselves. When our machines come to realize that, then maybe they too will really be conscious.
Rich (St. Louis)
Chess is a closed set of rules. It doesn't even begin to approximate thinking. Although the author uses lavishly descriptive metaphors to describe the computer's moves, as if to ascribe some intention, at the end of the day there is zero evidence anything is occurring other than number crunching on a grand scale. Even "learning" what rules are acceptable and not is a subset of number crunching. I'm still waiting for those flying cars predicted in the 50's
Blue Moon (Old Pueblo)
@Rich "I'm still waiting for those flying cars predicted in the 50's" We had helicopters in the 50's. And we have flying cars now, too, they're just not considered sufficiently economical to mass-produce, but we could do it if we wanted. Whoever controls AI will control the world. Business and the military will *never* let it go. So it *will* propagate, and virulently so.
drollere (sebastopol)
@Rich - there's such a thing as emergent properties (google it). for example, i suspect you feel that you're more than "rules" for the slithering of molecules, or that society is more than "rules" imposed on a crowd of unruly individuals. (i also suspect you don't play chess or, if you do, you're not very good at it: "rules" are the least of it.) meanwhile, emergent properties emerge in quite unexpected ways from quite unexpected places, and if you think silicon switches and watts and watts of electron transfers can't create something quite unexpected, with insights that the "rules" could never predict ... then your world must just seem a mechanistic saucer of entropy trying to cool your molecular heat.
RD (Melbourne)
The reason we don’t have flying cars is that I don’t trust you to maintain or control your flying car well enough to let it fly over my house or office. I’m fearful enough of cars. Once they’re automated, we might have flying cars.
danarlington (mass)
I wonder what "the principles of chess" really are. Are they inherent in the rules? If so, then humans should be able to play as well as AlphaZero, but apparently they can't. Are they in fact the product of people trying to understand how to play within the rules? If so, then chess, invented and long played by people, is a human cultural construct and not a mere consequence of combining some rules. On that basis, AlphaZero should not be able to do better than people but merely do as well, by doing the same thing, maybe faster. Multiplying and dividing are also governed by rules, and computers can do them faster but not better. But AlphaZero learned from itself without human tutors, so what it does comes entirely from knowing how to play within the rules. So maybe it wins by being faster rather than better, because the rules are the rules, and finesse in using them can't be a secret or the sole property of the machine. I can't think of a computerized process that people cannot also carry out, albeit more slowly: linear programming, shortest path-finding, even face recognition. If AlphaZero is doing something that people could not also carry out, then I will be willing to call it artificial intelligence.
John Bassler (Saugerties, NY)
@danarlington I respectfully disagree with your last statement, which seems to be an unnecessary restriction on the meaning of "intelligence". I would say, rather, that AI consists in creating a "machine" that can emulate the human behavior of *imagining* that which has not previously existed, like the game of chess.
danarlington (mass)
@John Bassler - Yes, but what are the principles of chess? Are they a human construct or are they embedded and implicit in the rules? Are they, in other words, deterministic and inevitable or could different people come up with different ones and invent a new way to win? If the computer has just reconstructed these principles directly from the rules, then the computer is not doing anything that a person could not do or even has not already done, albeit more slowly.
Douglas Steele (UK)
Understanding exists on different levels, e.g. from subatomic particles, to an atomic level, to molecular, to brain subsystems, to whole brain, to consciousness and psychology, to social interactions. Emergent properties and understanding can appear at any level and are not dependent automatically on lower levels: e.g. it would be silly to conceptualize war between countries using a subatomic framework for understanding social hierarchies and conflict. It's similarly silly to assume that AI algorithms are only addition and subtraction.
E. Siguel, MD, PhD (MD)
I disagree with the implicit “model” of computer function, and its consequences. A computer follows a sequence of instructions. At most, the sequence is not fixed, but could incorporate random elements. Even so, the computer analyzes patterns. I doubt a computer could look at data in 1910 or 1925 and derive the equations of general relativity or quantum mechanics. A computer program lacks the insight to predict that inertial mass equals gravitational mass. Medicine requires approximate solutions to the interaction of 20K biochemical equations. Optimal glucose or cholesterol regulation will require more than an analysis of patterns: the creation of equations, of regulatory concepts. Sometimes, as in playing many games, patterns are enough. But finding the cause of consciousness, or a general method to treat cancer, requires a level of insight far beyond current machines. Computers would treat polio with better lung machines instead of a vaccine. They will find drugs that appear to improve survival by months, not treatments that address the fundamental flaws in cancer and cure them. When dealing with cardiovascular disease (CVD), computers may find what appear to be better diets and drug combinations to lower cholesterol. They will not discover the biophysical cause of CVD. The cause of consciousness, and how to copy the brain, will come from the kind of insight that brought us relativity, QM, limits, and some of the great mathematical theorems.
drollere (sebastopol)
@E. Siguel, MD, PhD - well, my medical hubris checklist suggests you're a surgeon, and probably a neurosurgeon; but i digress. four decades ago people scoffed at the idea that computers could beat human chessplayers. and, implicitly, you define the world of knowledge as all the things humans can figure out "far beyond current machines." instead i'm reminded of a quote from nietzsche: "Even great spirits have only their five fingers breadth of experience -- just beyond that their thinking ceases and their endless empty space and stupidity begins."
Marvant Duhon (Bloomington Indiana)
Definitely one giant step for machine-kind! And an excellent article by a human.
stan continople (brooklyn)
It's one thing for these machines to know the answers to inscrutable problems, but when they know that they know, then the real problems will begin.
su (ny)
@stan continople That is the problem: almost 99% of humans don't have any inkling either; what can you do? Judgment day is inevitable.
Milo (California)
The leap to AlphaInfinity is infinite, and we should be very, very careful about ceding decisions to machines. Chess is a finite game with very clear boundaries, limitations, and rules. And yet it took decades, and there are millions upon millions of permutations. Brain scans and eyeball diagnostics are presumably based upon the same or very similar specimens across all mankind. But will a machine be able to distinguish a blurry scan or a speck on the image? How about not just winning a chess game but detecting whether its opponent is sleepy or cheating?
Blue Moon (Old Pueblo)
@Milo "How about not just winning a chess game but detecting whether its opponent is sleepy or cheating?" Humans can do these things, so presumably it's just a matter of time before machines can be programmed to do them, as well.
Paul Wortman (Providence)
I just visited my son, a newly minted radiologist, for Christmas, and he informed me that there are now AI programs that can read images and that they're "pretty good," with few false negatives. His take was that some of the more boring, routine radiological tasks will soon be handled by such programs, and in another decade even more complex tasks. The world that Kurt Vonnegut so prophetically wrote about over 50 years ago in his first novel, "Player Piano," is upon us.
Eric Bilsky (Silver Spring MD)
AlphaZero is a great achievement - but as far as anyone can tell, it is still a weaker chess engine than the best brute force engine - Stockfish. AlphaZero has never played the current version of Stockfish that is entered into computer chess tournaments. It doesn’t make the story less interesting to report accurately on what AlphaZero has accomplished.
Imperato (NYC)
@Eric Bilsky read the linked article in Science....
steve (houston)
I found reading this article strangely disturbing and, paradoxically, inspiring. Disturbing because it reinforces the feeling of my own limitations, which is something I have an involuntary resistance to. Perhaps it's the self-actualization idea, so ingrained in my thinking, that fights lack of progress, or limits, with "if only I worked harder." I'm sure every reader has a variation or two they wrestle with. It was also inspiring in its beautiful vista of what may be possible for human beings in the future.
Jonathan Katz (St. Louis)
These programs are extremely powerful when all the rules and criteria are defined. They can optimize the design of a bridge, but cannot warn you (if you haven't supplied this criterion) that a certain part will be vulnerable to stress corrosion or difficult to assemble and therefore at risk of human error in construction. There will always be a need for human engineers and doctors.
b fagan (chicago)
@Jonathan Katz - and telephone switchboard operators? The list of tasks, if not professions, that will be changed is only going to grow. If you haven't educated a human engineer about stress corrosion, the same result can apply. As for "risk of human error in construction," again you assume something that's also subject to change. What humans should start doing to protect human well-being and human society is put a lot of our thought (and emotion) into figuring out how we can maintain lives with meaning as much of the effort of maintaining things gets handed off. If we're smart enough to do it right, we can figure out how to feed and house peak human population, and still have a livable planet left while we do something considerate for the other species and allow our numbers to decline to a less damaging total. And we'll have new advisers, perhaps, to figure out some of the details.
Imperato (NYC)
@Jonathan Katz once you provide the AI with the requisite sensory input...there won’t be.
tomP (eMass)
@Jonathan Katz Look up the definition of 'optimize,' Jonathan. It includes balance among all the criteria you think matter. So if you program (or teach) a machine to 'optimize' a bridge design, you are giving it all the criteria you care about. Then look up the definition of 'tautology.'
Mainstay (Casa Grande)
Might be wise to install a master power-off/kill switch for the AI machine in case an emergency stop is needed, like the manual emergency-trip button in nuclear reactors (e.g., one that initiates the drop of the shutoff rods), for use when man needs to take back control should automatic protections fail.
Steven (Atlanta)
I don't think machines will ever develop consciousness, but they will instead emulate consciousness to such a high degree that they'll be indistinguishable - to humans - from conscious beings. Still, they won't themselves have any dreams, desires, or ambitions. The emulated ambitions they do have will always be implanted in them by humans. So a future computer that tries to take over the world will inevitably be trying to do so on behalf of some human or group of humans with that ambition. However powerful future computers become, it will always be other humans who pose the real danger.
Blue Moon (Old Pueblo)
@Steven My understanding is that the scientific community has no understanding of what the nature of consciousness really is. But we humans are conscious, and we will be the ones programming these powerful new machines. And as you can see, these machines (already!) don't need us in order to learn how to learn. I would be very worried about our long-term survival, precisely because as irrational creatures we are involved in the birth of this new species. And this new life can be created as a wholly rational entity.
stan continople (brooklyn)
@Steven Why would a human want to conquer the world? All of our human drives have been programmed by nature. As Schopenhauer said "Man can indeed do what he wants, but he cannot will what he wants." We, and all other animals, are no less machines designed by evolution to minimize the tension produced by our biological desires and our much vaunted "reason" is no more than another tool to satisfy those needs.
Alan (Los Altos)
@Steven The fact that you were able to write that paragraph shows that machines are able to develop consciousness. You are a machine, made up of cells that communicate, rather slowly, with electrochemical reactions.
pbh51 (NYC)
And so hope for humankind rests with our tools, and finally one that can not only think, but perceive. Could the moment be approaching when the machine recognizes the danger we pose to ourselves and takes corrective measures to save us? Can we get there in the next fifty years? The next ten?
John (Machipongo, VA)
@pbh51 This was one point of the film 2001: that a machine programmed too narrowly can become a murderer: "I'm sorry, Dave. I'm afraid I can't open the pod bay door." In the end, Humanity must evolve beyond its need for tools.
strenholme (San Diego, CA)
Here’s a thought: Let us suppose that we can make machines to do every single job out there (scientist, doctor, computer programmer, factory worker, delivery driver, you name it), so that companies no longer require workers. What happens to society then? Do we allow the 1% who own the robots to get even richer, owning everything of value in our society, while everyone else is left to starve? Or do we change how we run society? The politics of the last four decades or so has driven us closer to a world where everyone except the 1% is left to starve; I hope we shift course so that, once robots can do all of the work, we have a prosperous society for all, not just the 1%.
Tony (Boston)
@strenholme First off, I am not an economist, but I have been wondering about this as well. Already, vast amounts of money are concentrated in a small segment of our population. It appears to me that our capitalist economy would eventually collapse, since it depends on a robust consumer market with disposable income to grow its profits. Already we are seeing weakened consumer demand as buying power erodes in working-class and lower-middle-class households. It could simply lead to large-scale deflation, which would collapse the economic system and cannibalize wealth.
Blue Moon (Old Pueblo)
@strenholme If you study the history of science fiction (e.g., "Yesterday's Tomorrows"), you will find a pattern of predictions that technology will make our lives easier, where our workweek will be reduced dramatically and we will have more leisure time to spend with others. But the reality is that as we become more "efficient" we simply work harder and harder, to compete with other humans who have also become more efficient. It seems to be an endless death spiral, unless we can figure out how to sufficiently moderate ourselves, which we have so far been unable to do. Of course, the rich have their own history of capitalization and exploitation, and that hasn't stopped, either.
Oriflamme (upstate NY)
@Blue Moon It's an endless death spiral unless human beings learn better to control themselves. Their selfishness, their greed, their fears, their desire to dominate others. In other words, the Terminator within.
OSS Architect (Palo Alto, CA)
As a professional mathematician I have used computers for my entire career, back to the '70s. What I get paid to do is build mathematical models of physical phenomena. Once built, they run on computers, so other people can use them. Models are "approximations" of reality. They are not, and never will be, complete. They don't capture every element involved, but they are "close enough." In many cases I have no objective, demonstrable knowledge of how I conceived them. They pop into my head. Later, while coding the model, I will finally "discover" why they work. Other scientists report the same phenomenon, so I don't think this article is correct in differentiating between how machines think and how humans do. They may be the same. A human can explain what they "think," often only after a long process of discovery. The machine learning (AI) described here uses neural networks. They are multi-layer; the result of one NN is handed off to others in a vast tree of NNs. You can record, as "transactions," how the machine result was reached, i.e., which NNs were triggered, and since NNs work by computing mathematical measures of "significance," you can determine how and why the conclusion was reached.
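The logging idea this commenter describes can be sketched in a few lines. This is an illustrative toy, not DeepMind's code: a tiny two-layer network (with arbitrary made-up weights) whose forward pass records every layer's activations, so one can inspect afterward which units "triggered" and how strongly.

```python
def relu(x):
    # Rectified linear unit: a unit "fires" only if its weighted input is positive.
    return max(0.0, x)

def forward(inputs, layers, trace):
    """Run `inputs` through `layers` (each a list of weight rows),
    appending every layer's activations to `trace`."""
    activations = inputs
    for i, weights in enumerate(layers):
        activations = [relu(sum(w * a for w, a in zip(row, activations)))
                       for row in weights]
        trace.append((i, activations))  # the "transaction log": layer index + outputs
    return activations

# Arbitrary illustrative weights: 2 inputs -> 2 hidden units -> 1 output.
layers = [
    [[0.5, -0.2], [0.1, 0.8]],
    [[1.0, -1.0]],
]
trace = []
out = forward([1.0, 2.0], layers, trace)
# `trace` now shows which units fired at each layer and with what magnitude.
```

Real deep-learning frameworks expose the same idea through activation hooks; the point here is only that the intermediate computations are recordable, as the comment argues.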
Jonathan Katz (St. Louis)
@OSS Architect The games at which AlphaZero succeeds are all defined by complete sets of rules. No real-world system is like that, and only real-world human intelligence can deal with the aspects that are not coded into the model, or are coded less than exactly because they are not understood exactly.
Imperato (NYC)
@Jonathan Katz if that makes you feel better, fine, but the AI program learned to win at three very different games knowing only the rules. Once an AI machine can design another AI machine....
Al (Australia)
@Jonathan Katz The Universe is also defined by a set of 'rules,' which we call the laws of physics. They may be much more complex than the rules of chess and incompletely understood, but we use them quite well to operate in our world. So I disagree: every real-world system is like that when you take a bottom-up approach. It only becomes chaotic when the rules start feeding back on themselves, and conventional computing cannot possibly hope to untangle that chaos. What is interesting about AlphaZero is that it learned to take 'mental shortcuts' and prune out irrelevant lines of computation. That means such an approach could offer vastly more efficient weather forecasting, climate modelling, or understanding of anything with vast amounts of data. I would love to see the system turned loose on 100 years of weather data and asked to predict the future!
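For readers curious what "pruning out irrelevant lines" means concretely: AlphaZero itself prunes by letting a learned policy steer Monte Carlo tree search toward promising moves, but the classical hand-coded analogue is alpha-beta pruning, sketched below on a toy game tree. The tree shape and leaf values are arbitrary illustrative choices.

```python
def alphabeta(tree, alpha, beta, maximizing, counter):
    counter[0] += 1                      # count every node actually examined
    if not isinstance(tree, list):       # leaf: static evaluation score
        return tree
    if maximizing:
        value = float("-inf")
        for child in tree:
            value = max(value, alphabeta(child, alpha, beta, False, counter))
            alpha = max(alpha, value)
            if alpha >= beta:            # remaining siblings cannot change the result
                break
        return value
    else:
        value = float("inf")
        for child in tree:
            value = min(value, alphabeta(child, alpha, beta, True, counter))
            beta = min(beta, value)
            if beta <= alpha:            # prune: this line is already refuted
                break
        return value

# Depth-2 toy tree: the maximizer picks a branch, the minimizer replies within it.
tree = [[3, 5], [1, 9, 7], [6, 8]]
count = [0]
best = alphabeta(tree, float("-inf"), float("inf"), True, count)
# Full minimax would examine all 11 nodes; alpha-beta skips the leaves 9 and 7,
# because once the minimizer can force 1 in that branch, it is already worse
# than the 3 the maximizer has secured elsewhere.
```

The contrast with AlphaZero is that this cutoff rule is hand-derived from the minimax definition, whereas AlphaZero's selectivity emerges from training.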
Chris Noble (Boston)
When I worked in AI, in the mid 70s, the experts were confidently predicting that general computer intelligence surpassing the human kind was just a few short years away. Computers back then were already much faster than human thought, and are now even faster. It's not a matter of speed. Human intelligence is not based on speed or calculation. But the experts continue doggedly with their proven-wrong predictions; and tell us again and again that now is different.
Michel (Ireland)
@Chris Noble It is different. Deep learning surprises. The area constitutes a step up. Hence the new applications. It is not a matter of speed alone and overblown predictions tend to surround the field, but it is fascinating to see matters evolve from logic-driven approaches. Modeling patterns in this way adds an interesting new ingredient.
Kurt Mitenbuler (Chicago and Wuhan Hubei)
They only had the timing wrong. Like Maslow.
Imperato (NYC)
@Chris Noble NNs are fundamentally different from what was viewed as AI in the mid 70s.
reid (WI)
Colossus is closer than we realize. Just keep them off the internet or they'll be playing one another, eating up the bandwidth and not getting anything done.
glen broemer (roosevelt island)
Chess is calculation. There are other forms of intelligence, though it is clear that we will never be able to calculate in the way that computers do.
RR (California)
@glen broemer I don't agree. We cannot measure how long it takes to conceive of an idea. A second can be divided into very fine parts: the "flick," the next unit of time after a nanosecond, is 1/705,600,000 of a second - the smallest unit of time larger than a nanosecond. Who is to say that humans are not perceiving things at the speed of light, or perhaps within a "flick" of time? We know what the speed of electrical transmission is for a neuron (based on measurements of octopus neurons), but who is to say we cannot perceive as fast as a computer? There is a whole area of computing hardware, MEMS (Micro-Electro-Mechanical Systems), that could be implanted in our brains to work with what we have now.
Jan N (Wisconsin)
@glen broemer, it's because very few (if any) of us have instantly accessible, 100% perfect memory of everything we've ever learned, done, said, read, seen, or intuited, and of all the emotions we've ever felt. Of course, computers cannot feel. This article disgustingly attributed negative human traits to this machine, as if it were gloating over the eventual "murder" of its opponent.
Alex (camas)
This article is a very disturbing insight into the future of humanity, when our machine creations will have power over us the way we have power over animals today. And while some of us humans may be pampered and taken care of like my cats, more likely we'll be (metaphorically) neutered and spayed, and used for purposes we are incapable of contemplating, due to our limited thinking capacity. But, of course, the machines will believe the world is better because of their existence, just as most of us believe the world is (currently) better with us humans. We are but a stepping stone to a higher intelligence (and maybe even a higher consciousness), no different than all the animals before us.
Bob Bruce (NJ)
@Alex Sounds like "The Matrix" to me, & I'm hard pressed to disagree with your prediction.
Pete C (Az)
Sad but likely true
Snip (Canada)
@Alex How will machines "believe?" They only compute.
C. Whiting (OR)
Who built us? Who passed the baton of world's cleverest to us? Dust, sand, water, and light? A creator god? Did we truly pull ourselves up by our primordial bootstraps? Is this the first instance of a creator nodding resolutely yet somehow sadly at the superiority of its own creation, as that creation runs on over horizons we will never reach? Although the artificial intelligence world is not my area of expertise, it is quickly merging with and influencing the world I know. The world I know is powered most fundamentally by love. Could AlphaInfinity ever begin to grasp that? And would I want it to? Is a chess-wizard, cancer-beating, pattern-discovering oracle truly an improvement on our deepest and most profound achievement: our (at our very best) valuing and expression of gratitude, grace, and the turning of the cheek?
S. Shainbart (Brooklyn M, NY)
I have doubts about the author’s conclusion. While I am certain that a future with “AlphaInfinity” machines would replace many jobs and tasks that people do, it will not by itself replace human intelligence and humanity itself. As far as I understand, nothing about this type of artificial intelligence can tell us what is meaningful and what is not; what is personally rewarding and what is not; whom to love and whom not to; what goals to set for this day, this month, or this life; or what to personally care about and what not to. It will not have feelings, or a need to be loved, or a need to love and care for others, or a need to have and protect self-respect. It sounds to me like it will be a great tool, helping us to do certain tasks better, kind of like the way our computers today have taken over so many formerly human tasks with greater efficiency and capability. But just as they replaced certain tasks that people used to do, we moved on to doing more of the things they can’t do for us. I think this will be another, albeit perhaps more extensive, version of that same development. Humanity won’t become obsolete so quickly.
anonymous (the burbs)
@S. Shainbart It seems a little naïve to think that technology is so benevolent. All of humanity is not on the brink of becoming obsolete; only those humans deemed obsolete by the humans who possess and control said technology.
Me (Earth)
@C. Whiting Unfortunately, not everyone sees the world through rose colored glasses. The reality of most humans is motivation by greed, not love.