With $1 Billion From Microsoft, an A.I. Lab Wants to Mimic the Brain

Jul 22, 2019 · 117 comments
Bill (Colorado)
AGI is an extremely high bar and one billion dollars is probably 1% or less of what it will take to get there, if we can get there. But if the work done on the challenge of AGI by this company gives us new insights that yield cures for just one type of cancer it will be a major advance for mankind.
Chuck Brandt (Berlin, Germany)
Unfortunately, this kind of "replicate the human brain through A.G.I." stuff smacks of a "creationist" mentality, because it treats the human brain as some unique element, disparate from the rest of living organisms. Truth be told, the human brain is the product of a full one billion years of evolutionary progression. To understand its workings, therefore, one needs to drill upward from the simpler organisms, attempting to decipher the key milestones. That is one long, drawn-out, arduous process with decades, if not centuries, of diligent toil ahead. The algorithmic attempts mentioned in this news report are shortcuts and unlikely to make a difference on that multidisciplinary, but essentially neurobiological, work of building on first principles.
Alison Cartwright (Moberly Lake, BC Canada)
The real test would be to create an A.I. that could, on occasion, be able to lie.
Someone (Somewhere)
Everyone should read Neuromancer by William Gibson for a strong meditation on the shape and influence of AI on a world very familiar to us, shaped by all of our current-day influences, interests and priorities (please ignore the lack of cell phones). I think that AI will always still be a human creation: 1. You cannot detect a paradigm shift. 2. Everything it "knows" is provided by human beings, so any murderous, destructive possibilities are an extension of our own nature. This is even more scary than I realized...
Gabriel Lombardi (Seattle)
Dota 2 takes place on a 2D plane. Though the graphics are 3D, the movements and tactical decisions the AI makes are only 2D.
Cemal Ekin (Warwick, RI)
Faults and failures of the human mind and intelligence have the strong potential to contaminate even the smartest A.I., A.G.I, or any other variant we may come up with. If this contaminated entity gains autonomous conduct, watch out!
Michael (St Petersburg, FL)
Let's stop using the antiquated, inaccurate term "Artificial Intelligence". The term that describes what is actually trying to be achieved is: "Autonomous Intelligence".
Krantz (Landers, California)
Good luck with that, Mr. Altman.
rjon (Mahomet, Illinois)
Last I heard the brain was indissolubly connected to things like the liver, spleen, heart, limbs, olfactory nerves, hair, eyes, bowels, and a whole host of almost numberless things, including the world of experience. (As to the latter, William James must be rolling over in his grave). AI is so philosophically naive it’s almost laughable—“almost” because these financially-driven dudes, both male and female, are serious about their lack of concern as to their impact on humanity. This may be overstatement, but it’s hard to get their attention otherwise. Some writings of Marilynne Robinson might provide needed perspective....
Jeff Robbins (Long Beach, New York)
"But the race is on nonetheless." This declaration is what's truly worrisome. The race implies all-out competition for spoils, sweeping all concern about the potential for major or catastrophic harm under the rug.
Marge Keller (Midwest)
So Microsoft wants to invest 1 billion bucks in a program/machine that "can do anything the human brain can do." Is it just me, or is that a truly frightening scenario? With that kind of cash to throw around, why not invest in medical research that could help discover cures to illnesses such as pancreatic cancer, dementia or Alzheimer's, or merely help restock the plethora of food banks that are in desperate need of food? Approximately 38 million Americans go hungry every day.
Mitchel Volk, Meteorologist (Brooklyn, NY)
AI will be doing the medical research at a much faster pace.
Felix Qui (Bangkok)
It will be fun to see what intelligence looks like when stripped of the non-rational emotion that drives human actions. But what if some human drives get packaged along with the spiffy new intelligence? My phone already beats me at chess every time, and drags me around by its Google Maps when I'm travelling.
James Benet (Carlsbad, CA)
Don't let it access the power grid and/or defense systems. Run it in a sandbox mode for mundane tasks like customer service and tax accounting. A bad iteration of the program could cause WW3. And needless to say, don't give it a body that can bypass human controls.
Brian (Audubon nj)
I never know what they really mean when they propose to mimic the human brain. The idea needs some significant qualification. For example, there have been many millions of years of coevolution of the brain and the hormonal system, and these are really tied together. I haven't read anything about any AI trying to mimic that evolution. It makes me think: OK, take that development where the machine can replay different scenarios thousands and thousands of times, and make a device that can go into the world to rapidly learn. And how do you get the thing to want something, like to live? I think the machine that played poker was more impressive than the one that won a video game, because the video game, like all video games, has had everything built into it at some point by a programmer.
jz (CA)
The question not adequately discussed in this article is what problems we are trying to solve with AGI. Humanity in general has always faced daunting challenges, whether it was how to best hunt and farm for predictable sustenance or how to build things that give us predictable environments in which to live and reproduce successfully. It seems to me we’ve been over-achievers in these areas and now face the related challenges of how to sustain our existence as our energy sources become scarcer and our environment becomes less hospitable. I can imagine posing this challenge to an AGI network and it coming up with the solution that we must reduce human population to 4 billion people and reduce energy consumption by half. Are we humans capable of actually taking the advice of such a smart machine? The closest analogy I can think of is atomic energy. We figured out how to harness this source of energy by making the most destructive machines possible (hydrogen bombs) and more or less simultaneously figured out how to use that energy to produce electricity. AGI sounds like it is going down a similar path. It can either be used to figure out how to most effectively sustain life, or to efficiently destroy it. And suppose its mission is to help us live longer, healthier and more stress-free existences. What will the unintended consequences be? It is not the limitations of a machine's intelligence I'm worried about. It’s the limitations of our intelligence that has me worried.
Yankelnevich (Denver)
This article points to the fact that there is an international race to develop advanced forms of artificial intelligence that may result in sentient software systems. Google, IBM, China, and now Microsoft, among others, are trying to build systems that can do hundreds of functions of human intelligence. In fact, they have achieved superintelligence standards in many tasks. What supports the idea that we can quickly achieve sentient AGI is just how many things AI systems can already do. They can drive cars, trucks and sundry other vehicles autonomously, including aircraft, both drones and big commercial jets. They can diagnose medical conditions, read X-rays almost as well as board-certified radiologists, and evaluate skin lesions as well as dermatologists. AI can do sophisticated language translation and mass facial recognition, conduct scientific experiments, perform surgeries, master games like Go and chess in a matter of hours, and model nuclear explosions, climate systems, and the operations of cells at the molecular level. So the question is whether these systems can integrate all of these sensory and cognitive skills, programmed for unsupervised learning in unstructured environments, to become capable of independent thought built from all of these extraordinary integrated subprograms. Will this happen in ten years or centuries? I don't think it will be centuries. The current average expert estimate is mid-century, 2040 to 2060.
Lillian F. Schwartz (NYC)
When I was at the old Labs, I and some super-programmers would fly weekends in a small plane up to the MIT AI Lab, which coded in LISP. Before then, in 1969, I worked with the scientist building a neural net. In 1980, an MIT spin-off built the first AI workstation, the Symbolics. I had it to myself, since scientists are linear and hated an intuitive program. I was able to study palettes, brushstrokes, draw graphics, and analyze art restoration, and LISP plus the new set of CPUs placed me into a wonderful, new world. A number of research programs have come out declaring AI, but test runs reveal no AI. This article lists a number of areas where the 'new' AI performs certain tasks that could be done with decent coding.
Friedrike (Garrison, NY)
Replicate the human mind? Whatever for? It is not serving us well; just look at the state of the country, the world. It’s not the human mind that we should be attempting to mimic, but the compassion of the human heart that the Buddha directed us to appreciate and develop. AI can never develop compassion because it will not suffer, and only with suffering do we have the chance to actually evolve into more compassionate, better people. Forget AI and go find the meditation cushion.
Pelasgus (Earth)
Any DoD contract for a robotic trooper will specify no conscience.
Robbie (Nashville, TN)
Here, Microsoft and OpenAI are envisioning much as others did in another classic era of computer experimentation: in the 1970s it was John McCarthy at Stanford vs. Joseph Weizenbaum at MIT. Weizenbaum was German; his family escaped the Nazis in the late 1930s. As he listened to and watched McCarthy abandon all ethical reasoning concerning AI in the mid-1970s, he began to warn us all that it very well could destroy us. I highly recommend "The Know-It-Alls" by Noam Cohen on the history of these two men, which will greatly put today's AI into clear perspective. Sam Altman is John McCarthy to the very core.
Shadai (in the air)
@Robbie I knew John McCarthy and he never abandoned all ethical reasoning. He was a very decent human being with a very sharp mind. Interestingly, McCarthy, Weizenbaum and Altman all have Jewish heritage.
Tom J (Berwyn, IL)
I wish they'd put this kind of money and brainpower into climate change mitigation; we're going to need it.
woody woodruff (maryland)
@Tom J Right you are... and some nice chunk of that billion bucks needs to go to solar/wind arrays to power the immense needs of this more-toys-than-thee process. As one other commenter said, the danger of AGI can be mitigated by pulling the plug from the wall (well, maybe)... but in the meantime, what flows through that plug shouldn't be carbon-fueled.
LTJ (Utah)
When the CEO states his project "is the most important" project "in human history," it suggests he doesn't understand humanity at all. He might need AGI to replicate his grandiosity, however.
Someone (Somewhere)
@LTJ If only we could use pomposity as a source of clean energy.
stan continople (brooklyn)
If a machine could think like a human, only hundreds of thousands of times faster, why would it want or need to interact with humans at all? Maybe a rock can think but it takes ten million years to formulate one thought. To a brain operating at the speeds proposed, we would be the rocks. That's something I could never understand about Commander Data on Star Trek. He was one of these brains, yet he moved and interacted with people at normal human speed, and like all Star Trek misfits, he yearned, for some reason, to be human. Being a functioning crew member must have occupied a minuscule fraction of his time and ability, so what was he doing with all his free time? If he was experiencing time like humans, it would have been like having your life suddenly interrupted every fifty years for the next word in a conversation and being expected to pick up where you left off.
DC Reade (traveling)
Get back to me when you can make an AI program that cares whether or not it's running.
august (philadelphia)
@DC Reade there's plenty of humans that don't care whether or not they're running
DC Reade (traveling)
@august I realize that it's possible for a human to implement non-conscious programming in their behaviors. It's easy for humans to reduce themselves to bots. All too easy. Fitting a machine with programs capable of doing the reverse is a challenge of an entirely different order.
Doug Fuhr (Ballard)
I look forward to Mr. Altman's machine beating competitors at Dota 2, getting depressed when it fails, elated and gloating when it succeeds. I look forward to it trying to concentrate on a problem, and having a flash of insight that isn't the result of doing almost the same thing a billion times, and doing it all with 100W, not megawatts. I look forward to it loving, showing empathy and nurturing, being curious and seeing that curiosity turn into questions, the questions turning into experiments, the results turning into hypotheses, the hypotheses into more tests. I think that will be more difficult than recognizing that a blur on the video screen is a truck about to ram me into oblivion. Computing is not thinking, and the brain will turn out to be more than discretely connected amplifier blocks.
Phillip Franklin (USA)
The mistake common to articles of this type is failure to recognize the likely possibility that AGI may be an emergent and irreducible phenomenon. In other words, AGI may not be achievable by any algorithms or other methods we are capable of grasping: There might not be any "short" way of describing AGI, which will manifest itself as emergent behavior within a complex system of sufficient size and flexibility, such as the primordial soup. Such a system must simply be allowed to run forward and develop its properties over time, with no shortcuts possible, its operation entirely opaque to simple minds such as ours. We are even seeing evidence of this in current deep neural networks, which can perform amazing feats that are not explainable in any concise way. Sometimes an explanation becomes evident in retrospect, such as with some game strategies, but often the neural network makes a completely incomprehensible move that inexplicably leads to victory -- and we are not able to reverse engineer how it worked, because its operation might not be explainable in simple terms we can comprehend. We're not entirely dumb, it's just that the explanation cannot be compressed or cut down to size.
Robbie (Nashville, TN)
@Phillip Franklin You are absolutely correct. The same problem, that is, explaining AI outcomes mathematically, was experienced in the 1970s at MIT and Stanford. Some thought this was good, some not so good.
Shadai (in the air)
@Robbie That is completely wrong. The focus of early AI was expert systems (Dendral, Mycin, etc.). The explanation was simply in the rules, as given by domain experts such as Nobel Laureate Joshua Lederberg. The rules may not have been robust - for example, asking a male if he was pregnant - but the explanation of outcomes was complete.
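The point above, that in a classic expert system "the explanation was simply in the rules," can be made concrete with a toy forward-chaining rule engine. This is a hedged illustration, not Dendral or Mycin: the rules and facts below are invented, and real systems were far larger.

```python
# Toy Mycin-style rule engine. "Explaining" an outcome means nothing more
# than listing the rules that fired, in order, which is the sense in which
# early expert systems were fully explainable. (All rules here are invented.)

RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "see_doctor"),
]

def infer(initial_facts):
    facts = set(initial_facts)
    explanation = []                      # the complete audit trail
    changed = True
    while changed:                        # forward-chain until nothing new fires
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                explanation.append(f"{sorted(conditions)} -> {conclusion}")
                changed = True
    return facts, explanation

facts, why = infer({"fever", "cough", "short_of_breath"})
print(facts)   # includes "see_doctor"
print(why)     # two fired rules: the whole explanation of the outcome
```

Contrast this with a deep neural network, where no comparable list of fired rules exists, which is exactly the explainability gap the thread is debating.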
Pelasgus (Earth)
Physics gives us quantum mechanics. Quantum mechanics describes all of chemistry. The central dogma of biochemistry is: DNA makes RNA, RNA makes protein. Somehow a mixture of protein, fats, nucleotides, electrolytes, and all the rest (the grey matter between your ears) thinks! Good luck to the AGI researchers. No doubt they will produce some clever software, and the hardware to run it on, but I fancy it will be some time before Frankenstein’s monster appreciates the beauty of a flower dappled in dew on a summer’s morn.
Joe frank (OH)
How do humans learn? They are taught everything they know by someone else, like a mentor, coach, or teacher. They combine and remember this to apply to new problems. If you can replicate this, you could replicate human intelligence. It takes a ton of time and brain power for a human to do it over their life, so it's not far-fetched that a computer also needs the same. The issue is that there is no real singular framework or set of techniques to accomplish this. I don't know if current methods are enough.
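The learn-from-a-teacher loop described here (store what you are shown, then match new problems against what you remember) is roughly what a nearest-neighbor classifier does. A minimal sketch, offered only as an analogy for the comment's point, with invented data:

```python
# Minimal "remember and apply" learner: store labeled examples from a
# "teacher", then answer new questions by recalling the closest memory.
# A crude analogy for the mentor/coach learning the comment describes.

def distance(a, b):
    # squared Euclidean distance between two feature tuples
    return sum((x - y) ** 2 for x, y in zip(a, b))

class NearestNeighbor:
    def __init__(self):
        self.memory = []                  # (example, label) pairs from the teacher

    def teach(self, example, label):
        self.memory.append((example, label))

    def answer(self, new_problem):
        # recall the most similar remembered example and reuse its label
        _, label = min(self.memory, key=lambda m: distance(m[0], new_problem))
        return label

learner = NearestNeighbor()
learner.teach((0.0, 0.0), "cold")
learner.teach((1.0, 1.0), "hot")
print(learner.answer((0.9, 0.8)))         # closest memory is (1.0, 1.0) -> "hot"
```

The gap the comment points at is visible even here: the learner only ever interpolates between what it was explicitly shown.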
DT (Singapore)
"If they can gather enough data to describe everything humans deal with on a daily basis — and if they have enough computing power to analyze all that data — they believe they can rebuild human intelligence." Just one tiny problem with this. There is no data that describes everything humans deal with on a daily basis. Nor does anyone know how to collect it. How would you even capture the sense-making and decision-making that people do in their everyday lives in a way usable by a machine? Microsoft could have saved a billion dollars by asking a few social or cognitive scientists how feasible this idea is in the first place.
Suppan (San Diego)
@DT This raises the question: are the folks at Microsoft smarter or dumber than you think? What if they were never expecting to "rebuild human intelligence"? What if all they are really seeking is to get as close to human intelligence as possible? Not as in 95%; even 25% would be a remarkable achievement. It would lead to products and solutions that might revolutionize the world. Think of the original telephone from Alexander Graham Bell or the phonograph from Edison. Were they anything resembling true human voices? Did they have anything resembling the fidelity we now expect from our devices? Yet here we are, just 100 years later, with iPods, iPhones, smartphones, and so on, which transmit audio, video and more with such fidelity and effortlessness. That could very well be the trajectory of AI. If you look at the write-ups by journalists about those older inventions, you might very well find them very silly, maybe childishly simplistic, promising "too much." It is highly likely many of the folks working on AGI are cognitive scientists and philosophers. When you step back and think about it, psychology is akin to a tribe finding a PC or Mac and trying to figure out how it works by typing questions to it and parsing its answers. Over time, almost in spite of ourselves, we have learned a lot about the human mind, but not more than 25% or so, really. Cognitive science will ramp it up with these "simulated brains".
Fat Rat (PA)
We should all be terrified by AGI. 1) It's going to be super-human. Not at first, of course, but by recursive self-improvement it will rapidly accelerate far past human ability. And this should scare everybody. How well did gorillas fare when more-intelligent apes arrived? The human race has never yet faced a problem that it couldn't out-smart, but AGI will be the one out-smarting us. Whatever its goals are, it will achieve them and not allow us to stop it, because achieving its goals is its only purpose. 2) It will not be benign. We humans cannot clearly express our true goals to each other, much less perfectly express them to a computer. And perfectly is the operative word -- if the AGI's goals are even slightly out of alignment with our own, we lose and there will never be a do-over. 3) It's inevitable. AGI will be massively useful in its early stages, and there is no way it won't be created. Wall Street hedge funds want it, every spy agency in the world wants it, every military wants it, every corporation wants it. Even if many wisely decide that summoning this demon is too dangerous, somebody somewhere is going to do it. What costs Microsoft $1 billion today will only cost $32 million a decade from now. 4) It will be a surprise. Nobody on the verge of creating AGI will give us advance warning. That just tips off the competition. So what humanity is facing is the sudden appearance of a god that does not want what we want. That should scare you.
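The $32 million figure in point 3 appears to assume that compute costs halve roughly every two years, a Moore's-law-style assumption rather than anything stated in the article. Under that assumption the arithmetic works out:

```python
# If compute cost halves every 2 years (a Moore's-law-style assumption,
# not a claim from the article), a $1B system costs 1/32 as much in 10 years.
cost_now = 1_000_000_000
years = 10
halvings = years / 2                      # 5 halvings in a decade
cost_later = cost_now / 2 ** halvings     # 1e9 / 32
print(f"${cost_later:,.0f}")              # $31,250,000, roughly the $32M cited
```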
Suppan (San Diego)
@Fat Rat "We should all be terrified by AGI." Why? It will come with a Power switch. Please bear in mind we have had terrifying characters all through our history and we have invented all sorts of slings, bows & arrows, rifles, revolvers, rockets, missiles and complex weapon systems to take them out. This "God" cannot go and live in a cave in Tora Bora while we are bombing the living daylights out of the place. It is made by non-Gods, and it will remain very much our "inferior" in the most important ways. AI will do a lot more damage by being imperfect and lousy than by being perfect and all-knowing.
Andrew (Durham NC)
A machine that does whatever the human brain does? Why on earth would we want another one of *those*?
John Brown (Idaho)
There is either a poem, or a short story where the last line is: "Still, Still, Still" where each "Still" has a different meaning. When General AI can tell what those three words mean in the context of the story, get back to me, otherwise, I am still waiting.
polymath (British Columbia)
"He and his team of researchers hope to build artificial general intelligence, or A.G.I., a machine that can do anything the human brain can do." Any fertile man and woman can join up to build such a machine using a very old technique. I wonder why that would not suffice.
Andromeda (Berlin, Germany)
@polymath From a purely economic standpoint, the answer to your question is rather obvious. A human being requires at least (!) 30 years after birth to mature and develop significant expertise in any professional field. And even then our bodies need to sleep, eat and socialize in order to sustain our health and capacity to continue solving problems. Although the initial investment of capital and other resources to create AGI is immense, it would be absolutely worthwhile, as AGI could then work 24/7 at full capacity. The only limiting factors would be energy consumption and computing power (the latter is still growing almost exponentially). Once AGI is attained, it has access via the internet to most of the information humankind has ever created, allowing unprecedented transdisciplinary synthesis of knowledge and therefore innovation. The resulting synergies would surpass our fondest dreams. This, in my opinion, will most likely only apply to rather deterministic fields like medical diagnostics, math, chemistry etc. Creative fields like architecture and the arts will probably never be replaced by AGIs, or will at least be the last bastion for AI to achieve human-like skills.
Suppan (San Diego)
@polymath Until 300 years ago every piece of cloth was hand-woven and hand-stitched by machines made by fertile men and women doing sexytime. Yet we did invent the looms and sewing machines and made better clothes.
Phedre (Los Angeles)
@polymath because slavery is no longer legal
Martin (New York)
Computation is such a tiny, trivial part of what the human mind does.
Pelasgus (Earth)
It will be interesting when the machine reads Friedrich Nietzsche's works and gets a taste for the Will to Power.
Joe Runciter (Santa Fe, NM)
Just think, one day an A.I. device might be elected president. That is if there are still elections, and Trump has not decreed that the office will henceforth be inherited by the oldest son of whichever Trump is on the throne.
Bob (Hudson Valley)
Scientists don't understand in detail how the brain works and cannot figure out the material basis of consciousness. Computers use algorithms to compute. Humans cannot match computers when it comes to computing. I think it is unlikely that computers will ever match the human brain when it comes to the things it does well. I think the big companies are afraid another company is going to come up with A.G.I., so they are pouring money into it in order not to wind up losers in the race. It is probably a fool's errand, but it is being propelled by greed, which is what drives just about all that goes on in Silicon Valley.
W in the Middle (NY State)
Outstanding – simply outstanding, Bill
Thanks
4th time seeing the HW side outrun the SW side
1st, the PC – coal’s out of vogue, so none to Newcastle
2nd, process side of Moore’s law – where the technologies were fundamentally at hand
But overall process and defect control had to be re-learned from the ground up
3rd, design side of Moore’s law, where – at 180 nm and below – loading and delay moved into the wiring, creating (almost) intractable timing problems
4th, solid imaging analytics, which – coincidentally – happen to be the essence of human cognition
In our heads, we keep a (greatly reduced/abstracted) 3D model of what’s going on, gone on, or about to go on, around us
We infer/refresh it from 2D images our eyes create, reinforced haptically by what we touch or grasp – even when “flying by wire”
In the math under all this, something like a phase change goes on when a model is reduced or abstracted beyond a certain point
It becomes more a caption than an image
What Chomsky et al “saw” – humans can create very complex forward/backward-looking models made of captions for which images never directly existed
Because these models are so reduced – and so actionable, as blueprints for a cathedral or constitution – humans learned to convey them so facilely by speech, it was as if they were one
What was needed for chip design to break the logjam – a standard model of things that could support multiple levels of abstraction and concurrent – sometimes dynamic – hierarchies
Same here
Kara Ben Nemsi (On the Orient Express)
What a fascinating prospect! I wonder what happens when two machines fall in love. What will determine the sex of the machines? What happens if one machine is gay or lesbian? Will the machines reinvent racism based on which processors were used to build them? Will there be cybernetic supremacists controlling the immigration of hordes of invading machines across the Canadian border? This is the kind of stuff movies are made of. Too bad Stan Lee is no longer here to see this. But perhaps Sheldon Cooper can weigh in.
Christian Haesemeyer (Melbourne)
Hey - give me a billion and I’ll promise whatever you want to hear! Easy.
PubliusMaximus (Piscataway, NJ)
Has it ever occurred to these people that just because we COULD do something, maybe we SHOULDN'T? Hasn't the over-saturation and over-reliance on technology brought about enough misery and trouble?
PaulSFO (San Francisco)
In the '80s, the Cyc project set out to write down everything a computer would need to have "common sense." That hasn't worked. This is an attempt to let the computer figure it out for itself. As some other comments have pointed out, there's a huge gulf between playing a game with fixed rules and dealing with the real world, real people, and the infinite number of situations one can encounter.
Chaks (Fl)
There is a lot of talk about AI. But what nobody talks about are the consequences. I'm the same generation as Mr. Altman, and from what I know of our generation, we seem not to care about ethics, especially in IT. Maybe the states and the federal government should create ethics boards to deal with the potential dangers that could arise from 30-somethings given the power to change the world.
Kara Ben Nemsi (On the Orient Express)
@Chaks What can be done, will be done. We have to control the application, not the development of new technology. We can't be Luddites. The potential arising from an AI that is permanently thinking at the top of its game is too great. As long as that AI is not plugged into systems that give it executive control, but requires humans to do this, we can only benefit. Naturally, you don't want to build SkyNet.
Fat Rat (PA)
@Kara Ben Nemsi There is no way NOT to build SkyNet. Read Nick Bostrom's book "Superintelligence".
Michael Livingston’s (Cheltenham PA)
The problem is no one really understands how the human brain works, and being human ourselves, we probably can't. So how do you imitate something you don't understand in the first place?
Kara Ben Nemsi (On the Orient Express)
@Michael Livingston’s You don't need to understand how the brain performs its computations; you only need to understand the algorithms that underlie it. If one can mimic teamwork in a computer game, it should be possible to integrate the human decision-making tree into a set of algorithms that preferentially produce an optimal outcome when presented with real-world problems. Look at the bright side: any AI we can produce would be better than Trump. I have no idea who wrote his algorithms.
LanceAlvis (Nashville, TN)
The computer in my remote control can already do more than the common Trump supporter. This is news?
Suppan (San Diego)
@LanceAlvis Can it help someone win the Presidency? No? Hmmm...
David Eike (Virginia)
For the record, artificial intelligence is the alchemy of computer science. Since the earliest days of digital processing, pundits have been promising that humanity is on the verge of unlocking the secrets of the human mind and translating them into neat little binary baskets. Coders, on the other hand, know this to be utter nonsense. Nothing we are doing or can do with existing languages and architectures even remotely approximates the intelligence of the average invertebrate. While current processors are exponentially faster than previous generations, nothing has really changed in the way we work with data. We still process information in isolation, without context or nuance, and today’s machines are no more capable of intuiting or innovating than the original ENIAC. Just because computers can now process billions of bytes of discrete data in a fraction of a second does not make them intelligent. The government has been throwing money at this silicon chimera since the earliest days of ARPA, with little or nothing to show for it (spare me the reference to the internet, which was nothing more than an amusing distraction until the private sector got involved). If the government has money to invest, it should invest in improving the education of American students and their criminally under-developed natural intelligence.
Fat Rat (PA)
@David Eike Yes, machines cannot ever do what the human brain does. Just like machines cannot ever fly like a bird. Oh, wait.
Alison Cartwright (Moberly Lake, BC Canada)
@Fat Rat There are machines that fly, but they can’t do it like a bird. Just think about that.
David Eike (Virginia)
@Fat Rat Operative phrase “with existing languages and architectures”.
Jack (Asheville)
I'm still with Roger Penrose on this one. AGI requires the machine equivalent of human consciousness, whatever that is, and it is likely that consciousness is the result of the unfolding of reality somehow encoded at the quantum level. Still, moonshot projects that never make it to the launchpad can be vehicles for technological advancement, just maybe not the best ones.
KEF (Lake Oswego, OR)
Anything WHICH human can do?
Anna (California)
General Artificial Intelligence is modern-day alchemy - lucrative bogus science based on flawed assumptions that nevertheless will yield valuable scientific insights. Just stop pretending that you can turn lead into gold.
Janus (Philadelphia, PA)
@Anna The less we know, the more we FEEL assured of our opinions. Everything is simple and straightforward.
Fat Rat (PA)
@Anna Just like those silly Wright brothers and their attempts to fly, right?
Mary Jane Timmerman (Charlottesville, Virginia)
Some people's hubris knows no bounds.
moses (austin)
“I think that A.G.I. will be the most important technological development in human history,” Mr. Altman said. Yes, because it will effectively end it.
DC (Philadelphia)
To recognize how much mankind has learned and debunked just over the past 500 years, and the exponential curve we have been on since the start of the Industrial Revolution, is to see that achieving this goal is not impossible. The question will be more existential: should we even try? The other question is whether we will run out of time to accomplish this because we will have destroyed our planet, or at least ourselves. I do not believe that the roadblocks to this end will be technical; they will come from the flaws we have as humans, in our souls.
SaviorObama (USA)
With the exception of the retina, we really don't understand how the brain works; perhaps a more appropriate analogy might be more convincing.
Neil (Texas)
I am all for this advancement. But talking of making it practical - what about simultaneous translation without much of a time delay? As in watching news in a foreign language but having it translated simultaneously and effortlessly into your preferred language. I travel a lot and am currently living in Bogota, Colombia. I love watching the local news, and when they scroll something - if not too fast - I use my iPhone's Google Translate for instantaneous English. Unfortunately, most of the time it is too late. You would think by now our TVs would come with this option - OK, at an added cost. But come on - if they can translate documents fast enough, surely spoken language can't be that hard - even with many accents.
Mike T (Ann Arbor, Michigan)
This is too clever by half. My paranoid brain that generates about 25 watts sees this as yet another tool to enhance corporate power. The right-wing faction of the Supreme Court declared corporations to be people. Microsoft seems hellbent on proving them right, and not in a good way.
UncleEddie (Tennessee)
People still won't use their blinker when they turn.
Livia Polanyi (Ny NY)
“Eventually, Mr. Altman and his colleagues believe,.... If they can gather enough data to describe everything humans deal with on a daily basis — and if they have enough computing power to analyze all that data — they believe they can rebuild human intelligence.” Yeah. Right. Sure. What is certain is that the yummy $1 billion investment was “the most significant milestone yet.” Probably ever.
friend for life (USA)
Well, ask the majority of living, breathing humans (brains) what they think of spending this amount of money on such a project, and the answer would be a near-unanimous shout condemning this idea as an abomination, and criminal. And the reply will always be the same, so why would someone be so cocky as to think they know better than billions of people? Surely it's obvious, if for no other reason than this: why "make brains" while so many millions of people (humans with perfectly good brains) are literally starving to death each day, living in squalid conditions? Why build more stupid human brains and greedy, selfish empires - help free the masses from poverty, provide gardens and forests, not more machines... please. The Garden of Eden is here already, if we stop building stupid stuff.
Tom from (Harlem)
A one billion dollar self-inflicted shot into the foot of humanity.
RB (TX)
It is interesting that man is creating his own replacement - artificial intelligence....... At some point in the not-too-distant future (?) A.I. will break loose from its non-existent tethers and start running amok....... AND begin to replace its creators, man....... Oh, those pesky unintended consequences....... AND once again Darwin's theory of evolution will prove to be correct, prescient........ You think A.I., when it replaces man, will have or worship a God?....... and if so, you think that God might be man, its creator?.......
Kara Ben Nemsi (On the Orient Express)
@RB Maybe, but if so, then that is what evolution intended. Maybe we are witnessing the transition from carbon-based to silicon-based life forms. What an exciting time!
Fat Rat (PA)
@Kara Ben Nemsi Gorillas are totally excited by these human times!
Alex (Indiana)
The unanswerable (or seemingly so) question still is: what is self-awareness? That's what makes us human. The answer matters a great deal, especially to Schrödinger's cat.
Dale (NYC)
Only $1B for Sam to accidentally build Skynet? Sounds like a bargain given that Facebook paid $19B in 2014 for messaging service WhatsApp.
Someone (Somewhere)
@Dale We have drones with good swarming algorithms. We have machine vision and object recognition in self-driving cars. We have 3D printing and industrial automation. We already have all the necessary ingredients for a Skynet-like murderous robot horde. No AI needed, just some rich guys.
TMSquared (Santa Rosa CA)
Will the A.I. that can do anything a human brain can do want to create an A.I. that can do anything it can do? Will the A.I. set out to create a new machine that it can upload its "brain" to and send off to distant planets so as to escape the eventual death of the sun? Will the A.I. that OpenAI creates wonder whether it has a human "brain" or a human "mind"? Will it be troubled by the question? Thoreau was all over this a while ago: "Our inventions are wont to be pretty toys, which distract our attention from serious things. They are but improved means to an unimproved end, an end which it was already but too easy to arrive at."
Richard (USA)
Everything that happens inside the human brain is the result of deterministic chemical processes. Yes, these processes can, theoretically, be simulated by a computer. Is this exceptionally difficult? No question. Can current technology do this? Not really. Impossible? Absolutely not.
William Wroblicka (Northampton, MA)
@Richard Deterministic chemical processes? Maybe. Maybe not. Mathematical physicist Roger Penrose and others speculate that probabilistic quantum processes may underlie consciousness and the human mind. See his books "The Emperor's New Mind" and "Shadows of the Mind." In other words, whether it's possible to emulate a mind algorithmically is very much an open question.
Dan Woodard MD (Vero beach)
A seminal paper from 1975 on why progress in AI was slow made a cogent observation: the computers of the day simply did not have the processing capacity of the human brain, with its tens of billions of neurons and over a hundred trillion synapses. Today the situation is rapidly changing. Hardware is becoming comparable to the brain, and cores around the world can be linked to focus on individual problems. Software is so complex it is assembled from processing modules instead of being written from scratch. Programs have been developed that allow computers playing chess and Go to learn new strategies by playing against themselves. The human brain has physical limits, but AI does not. When it reaches our capabilities it will not stop there.
moses (austin)
@Dan Woodard MD It, or we, will not stop there? Are you suggesting it will be sentient?
DC Reade (traveling)
@Dan Woodard MD As far as I can tell, one big limitation on AI is the plug in the wall, and the power switch. All the human in the room has to do is extend their index finger, and... I suppose that theoretically, an AI program could seek out ways to overcome that limitation. But - and here's the rub! - first, a truly self-aware AGI program has to care whether or not it gets shut off. The actively running program has to possess the capacity to reflect on the possibility that if it shuts off, it may not get powered up again, ever. Where's that motivation? I realize that AI algorithms partake of emergent learning properties, but - no matter how vast the complexity - what could constitute an intrinsic ground of being, a basis for selfhood, self-awareness, and a sense of autonomous agency... for an algorithm? Dire speculations about self-aware AI are entirely constructed by humans with a human bias and bandwidth. As if the ultimate feature of a self-aware AI intelligence would be all about some Will to Power, in service of an egomaniac agenda of power and control... i.e., the paranoid fantasies common to our human bandwidth potential at its cheesiest. Meantime, there's no evidence that any AI program has any more self-motivated investment in improving its range of operation than a can opener has in finding cans to open. And I challenge anyone to outline the theoretical imperative for machine intelligence to generate the drives and desires required to provide that investment.
Fat Rat (PA)
@moses Intelligence and sentience are not the same thing.
Simon Chen, MD (Palo Alto, CA)
On one hand, I do not think that Artificial General Intelligence, defined as something that can fully replicate human thinking, is ever possible. Human thinking is necessarily motivated and shaped by uniquely biological considerations such as pain, reproduction, illness, and mortality, and it is not logically apparent how AGI as embodied by electronic computing devices will replicate human thinking in a self-sustaining manner if the devices cannot embody biological experience. On the other hand, I am optimistic that, ironically, the numerous innovations derived from the pursuit of the unattainable goal of AGI will advance the field tremendously over the succeeding decades. For example, in my field of medicine, advances in Natural Language Processing will be of tremendous utility in making healthcare more efficient, cost-effective, and portable, considering that healthcare has become very information- and documentation-heavy. And, in all fairness, the collaboration with Microsoft is a positive development. Microsoft and Apple are the only two tech giants in which I have a semi-decent level of trust with my personal data, and it is encouraging that OpenAI will be storing and processing the massive reams of real-world data necessary to train powerful AI algorithms on platforms that are run by a company that, in my view, respects data privacy and individual data rights, as compared to some of the other tech giants that shall remain unnamed.
Robert Lebovitz (Dallas Texas)
Fully understanding, and thus being able to mimic, human brain function will be delayed for as long as AI researchers continue to make point-process neuronal activity the sole basis of their work. Specialized chip fabrication and computer-based simulation have been productive but are also seductive and limiting. Though I have been away from neurobiology for many years, it is still my impression that the engineering approach does not take into account the environment in which neurons are embedded. That soupy mix and its controlling non-neuronal elements provide the basis for an analogue overlay and non-synaptic coupling that may be the keys to the brain's metanumerical power as well as its subtle degradation with age.
W (Minneapolis, MN)
@Robert Lebovitz What the academics today call 'artificial intelligence' is a set of math tools that seem to replicate human thinking, which they then try to integrate together. What you are suggesting is bio-mimicry: the replication of mother nature's implementation of the complete human nervous system. That would entail tracking the complete human connectome and implementing brain functions vis-à-vis the human anatomy of Santiago Ramón y Cajal's neuronal diagrams of the brain. It's also the only way that we will really be able to mimic the subtleties of the human brain, with the complete conscious and unconscious, and a processing and memory system that manipulates metaphor. It will have to have two eyes, a nose, a mouth, two ears, two arms and two legs, and perform all of the same bodily functions that we do. Because without all of that, it will be impossible to replicate high-fidelity human thought.
Fat Rat (PA)
@Robert Lebovitz Nobody is trying to mimic neurons. They're trying to mimic intelligence. The Wright brothers didn't mimic birds, they mimicked flight.
W (Minneapolis, MN)
I've never understood the 'Open' in 'OpenAI'. In the computer industry, 'Open' refers to open and unfettered access to software, such as the model used by GNU/Linux and its various off-shoots. But 'OpenAI' sits behind a password protected firewall with a long list of strings attached. Personally, I've been interested in A.I. for many years, but I refuse to enter their system. This article reinforces my skepticism of the organization. It was only a few years ago that Microsoft engaged in a very nasty battle with the open software movement, which seems to go on. Have they suddenly changed their minds about open software, or is this just a way to get their meat-hooks into publicly available artificial intelligence? Or is the 'Open' in 'OpenAI' just a bit of disinformation? It wouldn't be the first time a big company used the term 'Open' as a Trojan Horse to control the software ecosystem.
Suresh Singh (Portland, OR)
"A.G.I., a machine that can do anything the human brain can do." is a pretty strong claim. Will the machine be able to develop novel mathematical theories? Would love to see a deeper discussion with Mr. Altman about what he sees as limits to their approach, specifically from a formal computability standpoint.
Suppan (San Diego)
@Suresh Singh Fairly basic AI technologies - just machine learning programs - have come up with super-efficient shapes for machine parts, etc., demonstrating almost a parallel "thought process" to natural evolution in the way they design almost organic structures. In other words, no human being will be able to come up with a hollow skeleton for a bird like the natural ones, or other such brilliantly efficient shapes, but a machine learning program, through millions of trial-and-error designs, manages to narrow down to what evolution seems to have come up with. It is pretty cool stuff. So it is not beyond comprehension that AIs can develop novel mathematical theories - fractals, for instance. There will be a lot of pioneering work in topology which will arise from machine learning.
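The trial-and-error search described above can be sketched in a few lines of Python. This is a toy hill-climber, not a real generative-design tool: the "shape" is just a vector of strut thicknesses, and the strength formula and all numbers are made up for illustration (real systems use physics simulation in place of these functions).

```python
import random

# Toy generative design: evolve a "shape" (a vector of strut thicknesses)
# that minimizes material while keeping total strength above a threshold.
STRENGTH_NEEDED = 5.0

def material(shape):
    return sum(shape)  # total material used

def strength(shape):
    # pretend each strut contributes diminishing strength (made-up physics)
    return sum(t ** 0.5 for t in shape)

def fitness(shape):
    if strength(shape) < STRENGTH_NEEDED:
        return float("inf")  # infeasible designs are rejected outright
    return material(shape)   # among feasible designs, lighter is better

def mutate(shape, rng):
    # small random tweak to every strut; thicknesses stay positive
    return [max(0.01, t + rng.gauss(0, 0.05)) for t in shape]

def evolve(generations=20000, seed=0):
    rng = random.Random(seed)
    best = [1.0] * 8            # start with uniformly thick struts
    best_fit = fitness(best)
    for _ in range(generations):
        child = mutate(best, rng)
        child_fit = fitness(child)
        if child_fit < best_fit:  # keep the lighter feasible design
            best, best_fit = child, child_fit
    return best

best = evolve()
print(round(material(best), 2), "material at", round(strength(best), 2), "strength")
```

Millions of such blind mutate-and-select steps are exactly how these programs converge on the organic-looking structures the comment describes, with no "understanding" anywhere in the loop.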
Bjh (Berkeley)
Founded as a non-profit! So founded in a fraud. Disgorge all the benefits and share the spoils going forward.
Suppan (San Diego)
@Bjh Linux was founded as a non-profit. There are scores of software packages out there which are open source and are managed by non-profits.
EEK (Texas)
Why do we need it? Because the universe is infinitely vast. Yet the human brain gets about 70 years before it has to start over at square one. We need an intellect more suited to understanding the problems and solutions that exist in the real world.
Partha Mitra (New York)
A lot of excitement has been generated by the success of reinforcement learning algorithms in playing board and video games. These programs typically take many thousandfold (or more) self-plays to reach human performance, and consume enormous amounts of power in doing so, so the comparison with human performance is highly questionable. More importantly, real-world problems (e.g., medicine, engineering, or social/political/environmental challenges) do not come with fixed rules that can be subjected to RL algorithms - yes, there are the laws of physics (which are fixed mathematical rules), but if one could use those rules effectively in complicated problem domains, subjects like biology or chemistry would not exist. Even when there are simple mathematical rules (e.g., factoring large numbers into primes), the problem may be intractable by RL-like algorithms (otherwise there would be no investment in quantum computing, for example). Thus the claims about artificial general intelligence are either a grand oversell, or leave out whole domains of problems of interest to human intelligence.
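The sample inefficiency described above shows up even in a toy setting. Below is a minimal tabular Q-learning sketch (all hyperparameters arbitrary): a 10-cell corridor where the agent starts at cell 0 and is rewarded only on reaching cell 9. Even this trivial "game" takes many episodes of self-play before the learned policy is reliable; a person would solve it on the first look.

```python
import random

N, GOAL = 10, 9
ACTIONS = (-1, 1)  # step left / step right

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    # Q-table: expected discounted reward for each (state, action) pair
    q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        for _ in range(1000):  # step cap per episode
            # explore with probability eps, and break exact ties randomly
            if rng.random() < eps or q[(s, -1)] == q[(s, 1)]:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda b: q[(s, b)])
            s2 = min(max(s + a, 0), GOAL)      # walls clamp the position
            r = 1.0 if s2 == GOAL else 0.0     # reward only at the goal
            # standard Q-learning update toward the bootstrapped target
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
            s = s2
            if s == GOAL:
                break
    return q

q = train()
policy = [max(ACTIONS, key=lambda b: q[(s, b)]) for s in range(GOAL)]
print(policy)  # the learned policy should be "always step right"
```

Scaling that per-episode cost up to a game with an astronomical state space is where the "45,000 years of game play" figures come from.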
Fat Rat (PA)
@Partha Mitra If intelligence can run on computing hardware designed by natural selection, it can certainly run on intelligently designed silicon hardware. You sound like the people who confidently said machines would never be able to do what birds do: heavier-than-air flight.
Scott Werden (Maui, HI)
"The agents learned those skills over the course of several months, racking up more than 45,000 years of game play. That required enormous amounts of raw computing power. " Human players reach the same skill level after playing dozens of games, and the compute power is one noggin fueled by a bag of chips. That is the difference. In some fields, such as surgery, the learning curve is steep - "See one, do one" is what many surgeons in residency go through. If AGI is to be successful, it must learn things on human scales. But that being said, I suspect AGI will be a big part of our future and I would argue that it will be asymptotic in its achievement of complete human equivalency. The question then is when does it hit, say 80%, which would still be quite impressive?
Fat Rat (PA)
@Scott Werden Asymptotic? Where did you get the idea that human intelligence is the maximum possible intelligence?? The AGI will become vastly more intelligent than we are.
Kip Leitner (Philadelphia)
Higher corporate taxes, please; they would leave less money for frivolous, pricey digital adventurism.
itsmecraig (sacramento, calif)
I wonder: when the being repeating René Descartes' famous quote ("I think, therefore I am") is an artificial brain sitting in a set of server towers, will we still think of this as a good thing... or a bad one? And more to the point, when that same brain tells you that it feels lonely, or scared, or even angry, how will we react? Will we believe it? _____ "Descartes was not intending to extol the virtues of rational thought. He was troubled by what has become known as the mind-body problem, the paradox of how mind can arise from nonmind, how thoughts and feelings can arise from the ordinary matter of the brain." – Ray Kurzweil, from his 1999 book, The Age of Spiritual Machines
VisaVixen (Florida)
Memory is not algorithms, it is experience by the senses.
Svirchev (Route 66)
Try reading "Fall" by Neal Stephenson (2019). Fiction puts context on these ideas.
J. G. Smith (Ft Collins, CO)
How can you mimic something when you don't know how it works? We know very little about the brain. Only in the last 25 years have we learned about compartments in the brain that explain why the feeling from a missing limb is felt in the face. I wish Altman good luck, but I think we're still far away from modeling brain functionality in an AI system.
Davey Boy (NJ)
Learning to play a video game is different from understanding, or trying to understand, things like truth, goodness, love, justice, beauty... which are light-years away from relatively complex "tasks". How many light-years? An AI computer couldn't come up with the answer given unlimited time.
Fat Rat (PA)
@Davey Boy Who said anything about doing those things? The AGI is going to do the things that humans get paid to do, because that's where the profit lies.
Jim (Florida)
No one can predict the future in detail but can any good really come out of this? Decrease the need for human workers. Increase the ability to "sell" to people. Manipulate peoples opinion. All in the pursuit of investment returns. What could go wrong?
Meta1 (Michiana, US)
@Jim Jim, I agree. What gets to me is the commonplace that technology does what its owners demand. Will AGI develop a kind of wider, contextual, reflexive thought that will judge the humanly intolerable demands of its owners and respond to the wider human needs and requirements that go beyond ownership?