How Do You Govern Machines That Can Learn? Policymakers Are Trying to Figure That Out

Jan 20, 2019 · 54 comments
joshbarnes (Honolulu, HI)
Good to see Hal Abelson featured so prominently in this article. I had the pleasure of meeting and talking with him back in the ‘80s. His class is in good hands.
Mike L (NY)
You can’t govern machines that can teach themselves. And they will learn at an exponential rate, without morals or ethics. Logically, the machines can find many reasons to terminate the human race; we’re destroying earth’s resources and climate, for example. It’s a very dangerous and scary world to live in already. I can’t imagine the added threat AI poses to the human race.
Wasted (In A Hole)
There is nothing that says machines cannot be trained in morals and ethics. It may be more difficult than learning how to answer a phone, but it will come in time. And if you are fearful of a world inhabited by smarter machines, think about what our lives will mean when machines are more ethical and kinder than we are.
Dan (St. Louis, MO)
With all respect, there is no evidence to support the claim that AI is only inaccurate in face recognition and crime prediction because "..data is the problem. The results were biased because the data that went into them was biased". These bias problems have been known for years and there is still no fix. More data will not fix the problem. This is because the problem is in the widely used deep learning and machine learning technologies - these widely used AI technologies are not that intelligent. If you do not believe me, try to get your Google Android-based phone to answer a "how to" question the next time that you really need help.
Norm Weaver (Buffalo NY)
Whatever rules or limitations Western countries might try to impose on AI research or implementation will not be respected by hostile powers like China and Russia. Their AI research will not be bounded by any rules. So we must consider that if we limit ourselves too much, we would likely be committing unilateral AI disarmament. We need to allow research to the fullest depth of AI capability in order not to put ourselves at a potentially serious military disadvantage to those countries.
Denver7756 (Denver)
Whoever owns or is responsible for the intellectual property must be held responsible, no different than with a medical device or ordinary (non-intelligent) software.
Fat Rat (PA)
@Denver7756 Except software makers are NEVER held responsible.
OSS Architect (Palo Alto, CA)
There are a large number of current university research grants for "the hardening of machine learning", i.e. making it more difficult to "hack" ML. One term of art is adversarial machine learning; its purpose is to allow rapid crafting and analysis of attacks and defense methods for machine learning models. IBM has contributed some freeware tools here: https://github.com/IBM/adversarial-robustness-toolbox. A more theoretical paper on the general problem is here: https://arxiv.org/abs/1712.04248. In all my systems and network projects since 1986, cyber security has always been the #1 or #2 design goal, but it has always sunk to the bottom of my clients' priorities because of high cost and the lack of perceived benefit. We are much further ahead on securing AI and ML technology than any US business is willing to pay for.
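For anyone curious what an attack on a model actually looks like, here is a minimal sketch of the fast gradient sign method against a toy logistic regression classifier. The model, weights, and numbers are all invented for illustration; this is not the IBM toolbox's API, which wraps real models and many more attack types.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "trained" classifier: p(y=1 | x) = sigmoid(w.x + b). Weights are invented.
w = np.array([2.0, -1.0, 0.5])
b = 0.1

def predict(x):
    return sigmoid(w @ x + b)

def fgsm(x, y_true, epsilon):
    """Fast gradient sign method: push each feature by epsilon in the
    direction that most increases the model's loss."""
    # For logistic regression, the cross-entropy gradient w.r.t. x is (p - y) * w.
    grad_x = (predict(x) - y_true) * w
    return x + epsilon * np.sign(grad_x)

x = np.array([0.8, 0.2, 0.4])                 # clean input, true label 1
print("clean score:", predict(x))             # ~0.85 -> classified as 1
x_adv = fgsm(x, y_true=1.0, epsilon=0.5)
print("adversarial score:", predict(x_adv))   # ~0.49 -> the predicted label flips

The toy model is beside the point; the same sign-of-the-gradient trick fools far larger networks, which is why the hardening work above matters.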
Fat Rat (PA)
@OSS Architect Exactly. The funder will NEVER pay for security, for safety. They don't want it. Everybody else wants it, but has no power to make it happen. The only solution is to tightly regulate the development of AIs -- worldwide.
SWillard (Los Angeles)
Need to read Nick Bostrom's SUPERINTELLIGENCE. Just the thought process of programming 'morality', or even 'preserving humanity', into an AI is quite daunting. Fortunately we are nowhere near the 'singularity' (Kurzweil predicted 2025 -- not going to happen). If the general-AI singularity ever does occur (replication of singular human consciousness in a machine), humanity will be eaten for lunch...
Fat Rat (PA)
@SWillard Yes! Anybody who has not read Bostrom does not understand the dangers of AI.
Jan Sand (Helsinki)
I make no claims to a profound understanding of AI's potential, but my information from various sources, as a layman, seems to indicate that the processes produce results in a somewhat mysterious way, and that those results are frequently better than humans can produce. Since they are mysterious, why is there an assumption that they can be kept safe and controllable by humans, who have a consistent history of creating terrible disasters: nuclear weapons, global warming, industrial destruction of the planet, and an internet rife with a large variety of innovative criminality? Why is it assumed that human control is effective in making it useful rather than disastrous?
Gordon Silverman (NYC)
The impact of AI on our socioeconomic system is one of the immediate tsunamis that face our species - climate change being the other. AI itself is in a state of transition whose outcomes may ultimately prove to be threatening. The futurist Ray Kurzweil provides a scenario wherein “evolution” leads to a “Singularity” in which “technology” is indistinguishable from human intelligence and in fact exceeds that of its organic masters. It is defined as a “Superintelligence”. We have just begun to address the implications of such developments. While a number of experts have addressed these issues, Nick Bostrom has provided a framework in which the discussion should proceed (“Superintelligence: Paths, Dangers, Strategies”). His strategies depend broadly on three mechanisms: limit data acquisition; imbue the Superintelligence with “worthwhile” objectives; maintain oversight. However, for each mechanism he also provides potential countermeasures by the Superintelligence, with dystopian results for the species. The possible emergence of such a “machine” should also be a guide for our approaches to AI which, as it stands, is principally an “imitative” technology at the moment. Gordon Silverman, PhD, Professor Emeritus of Electrical & Computer Engineering, Manhattan College
Wilson1ny (New York)
@Gordon Silverman Perhaps it merely boils down to: Just because you can doesn't mean you should.
EHR (Md)
@Wilson1ny And also: "superintelligence" to what end? One person's "worthwhile" objectives are another's curse. Witness, for example, all the evil committed in the world by people and governments who were sure they were doing God's will. Will a machine want knowledge for knowledge's sake? Or will it use its superintelligence in human ways, to dominate, create, inflate, isolate, attack... Can machines develop empathy? Perhaps they will commit suicide.
AutumnLeaf (Manhattan)
How? The simplest way: pull the plug! Simple as that; turn the machine off and that should stop it from doing things you do not want it to. It is not a person; it has no 'rights'. So kill it and be done.
Jan Sand (Helsinki)
@AutumnLeaf Since, in many ways, machines can already think faster than humans, and with AI perhaps better, it becomes a race over who pulls the plug on whom first.
John Brews ..✅✅ (Reno NV)
The underlying issue with AI is that it is a black box. Data in, recommendations out. How one led to the other is opaque. The learning of the machine is based upon a “reward” system, but whether the rewards are wisely weighed is uncertain. Nothing much in the way of regulation can inform how such black boxes should be constructed. However, much can be said about how carefully the recommendations are implemented. One item: humans should be in the loop.
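A toy illustration of the reward point (invented numbers, not any real system): the learner simply maximizes whatever reward function it is handed, so the weights the designer picks are where the "wisdom" has to live.

import numpy as np

# Hypothetical per-action outcomes for a content-recommendation "agent":
# columns are (clicks, user_wellbeing) -- numbers invented for illustration.
actions = {
    "calm_article": np.array([1.0, 2.0]),
    "outrage_bait": np.array([5.0, -3.0]),
}

def best_action(weights):
    """Pick the action with the highest weighted reward."""
    score = lambda outcome: float(weights @ outcome)
    return max(actions, key=lambda a: score(actions[a]))

print(best_action(np.array([1.0, 0.0])))  # reward = clicks only    -> 'outrage_bait'
print(best_action(np.array([1.0, 1.0])))  # clicks plus wellbeing   -> 'calm_article'

Nothing in the maximization step ever questions the weighting; that judgment sits entirely with whoever wrote it down, which is why humans in the loop matter.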
John Brews ..✅✅ (Reno NV)
BTW, Facebook’s use of algorithms to make rapid decisions about the wisdom of content may prove to be a cautionary tale for AI.
Wasted (In A Hole)
The same can be said about humans. We are all black boxes, even to ourselves.
Chris (San Francisco)
1. To be clear, at this point, policy or regulation would not apply to A.I. per se. It would apply to the people developing it. One part of policy should focus on how to hold those people responsible for whatever happens.
2. If A.I. has as much potential as the experts say, for good or ill, we should be exposing it to "data" from the entire breadth and depth of human experience, and possibly other forms of experience. It should not be shackled to the narrow pragmatics of mere "business and innovation." Those are just the buzzwords of the moment, from the perspective of some groups of people, in some countries. Instead, A.I. should be exposed to all the forces that make life worth living, to the tides of history, to the depth of compassion, to the sublime, to profound suffering, to the ethics and sentiments of death. If all you feed the A.I. are the competitive tools of capitalism, you will predictably get an entity that can outcompete anything on earth without remorse.
3. Is it too early to start talking about rights for A.I. entities? That could be a way to establish the precedent of compassion for consciousness that we might want such entities to embrace.
Shesh Mathur (Seattle)
Here's a simple recipe:
1. Take a ton of data, structured and unstructured.
2. Clean it, making sure obvious biases are removed.
3. Keep it aside as your initial training data set.
4. Review the algo and modify logic that might introduce biases.
5. Feed the training data from step 3 into the model and review the results.
6. If the results show unintended bias, modify the algo and incorporate more data to make the training set more balanced.
7. Repeat.
The most important underlying need is to have a set of standards that define various types and degrees of bias. If I don't know why, what, or how much I have to fix, I'll use varying degrees of subjectivity, which is a problem. The next most important thing is to introduce human intervention at key stages to review and correct biases. A rough sketch of what steps 5-7 might look like in code follows below.
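The sketch below is deliberately oversimplified: random scores stand in for a model's output, a made-up "group" label stands in for the sensitive attribute, a demographic-parity gap stands in for whatever standard eventually gets agreed on, and a crude per-group score offset stands in for actually fixing the data or the algo.

import numpy as np

rng = np.random.default_rng(0)

def selection_rates(predictions, groups):
    """Fraction of positive predictions per group."""
    return {g: predictions[groups == g].mean() for g in np.unique(groups)}

def parity_gap(predictions, groups):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Synthetic "model output": scores plus a synthetic group label (illustration only).
n = 10_000
groups = rng.choice(["A", "B"], size=n)
scores = rng.normal(loc=np.where(groups == "A", 0.1, -0.1), scale=1.0)

threshold = 0.0
for step in range(10):                       # steps 5-7: review, adjust, repeat
    preds = (scores > threshold).astype(float)
    gap = parity_gap(preds, groups)
    print(f"step {step}: parity gap = {gap:.3f}")
    if gap < 0.02:                           # the agreed-upon standard
        break
    # Crude "adjustment": nudge each group's scores toward the average selection rate.
    rates = selection_rates(preds, groups)
    target = np.mean(list(rates.values()))
    for g, r in rates.items():
        scores[groups == g] += 0.05 * np.sign(target - r)

The loop converges in a few steps on this toy data; the hard part, as noted above, is agreeing on the standard that replaces the arbitrary 0.02.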
Fat Rat (PA)
Why is nobody discussing the near certainty that AI will either wipe out the human race or change it beyond all recognition? Gorillas encountered a more intelligent species; how well did that go for them?
DMZ (NJ)
"The results were biased because the data that went into them was biased-" The most important comment in the article. Why not an algo that checks for, and identifies, bias, of any type. Once identified, then the humans take over and discuss, adjust and then run through the bias algo again until the bias has been removed. Of course, there will be times when bias is wanted, e.g., a bias towards good weather over bad. All very complicated, like life.
Shesh Mathur (Seattle)
@DMZ While I agree that 'bias removal' is key and can be minimized by human intervention in an iterative manner, at the policy-formulation level it is super important to agree on a body of standards that determine what constitutes 'bias' and bake that into a set of policies that orgs like the OECD can accept and implement.
BlueWaterSong (California)
@DMZ "Why not an algo that checks for, and identifies, bias, of any type?" Simple answer that is very important to understand: Because once human behavior and/or outcomes are involved, bias is in the eye of the beholder. We are not talking about interpreting spectral data or material properties here. There is simply no such thing as "unbiased" in arenas involving human behavior or outcomes. The best we can do is to define a set of principles and then try as best we can to measure how close we come to achieving them. And if we should commit to that, well then it is just as important, actually moreso, to hold humans to that standard as it is to hold AI to it - and the metrics will be the same.
DH (Oregon)
@BlueWaterSong: The reason is that all the algorithms on which AI inextricably depends function within the constraints specified by their original design. Searching out "bias", by contrast, requires unconstrained (inspired) searching within fluidly changing contexts. That is, unlike living systems, AI has no means of, or motivation for, arbitrarily focusing its attention, or, conversely, disengaging from the contexts of such foci. (Don't believe the marketing hype of the AI tribe.)
BobMeinetz (Los Angeles)
The first step to governing machines that can learn is simple: deny they exist. What's dangerous isn't so-called "artificial intelligence" but the simpleminded conceit of programmers that biological intelligence - including the set of values, of compassion, of responsibility (yes, that's part of it), of the infinitely variable, ever-changing chemical interactions in the human brain - can be condensed into a simple binary format. Non-biological learning is impossible - by definition - and as digital convenience aids (DCAs) become more sophisticated, drawing the distinction will become critical. That DCAs are already proving dangerous has never been more evident than after the crash of Lion Air Flight JT610, when undertrained human pilots put their faith in an "artificially intelligent" anti-stall system that doomed both them and 187 other passengers who had entrusted their safety to them.
BlueWaterSong (California)
@BobMeinetz Many more airliners and passengers have fallen victim to "biologically intelligent" systems that doomed those who had entrusted their safety to them. And if "undertrained pilots" are put in the cockpit, then it's only a matter of time until they fail - that's not on the AI. AI will continue to improve, and will continue to be used and misused, like every tool.
BobMeinetz (Los Angeles)
@BlueWaterSong, on commercial flights digital convenience aids have only recently crossed the line into making live-or-die "decisions" based on hypothetical assumptions about hypothetical situations, imagined by programmers sitting in Boeing offices with no ultimate responsibility whatsoever. Responsibility is everything, and like intelligence, there will never be a cheap artificial substitute. The pilots on Lion Air JT610 were undertrained as a result of cost-cutting reliance on AI - just like the pilots of Asiana Flight 214, who were clueless about how to make adjustments on their final approach to San Francisco International Airport before their flight crashed in 2013. So no, the record of DCAs has been abysmal when it comes to making decisions when lives are on the line - and it will only get worse. Some take the position that it's all about statistics - that fewer people will die with DCAs in the cockpit or behind the wheel. I suggest they offer that as consolation to the family of Elaine Herzberg, who was mowed down by a self-driving, non-thinking, non-swerving, non-braking Uber vehicle in Phoenix last year, and see how far they get.
Jenny (Madison, WI)
@BobMeinetz Be sure to offer your condolences to the families of drunk driving victims. I'm sure they'd be happy to know that we could've invented cars that would make drunk driving an issue of the past but that we refused to do so.
Brad (San Diego County, California)
"Garbage in, garbage out" was a phrase I learned over 50 years ago when I first started to work with computer software. Neural networks (which is what most of AI is currently) "learn" the embedded racism and misogyny of our society when given data that is a reflection of hatred. There are some rues that should be implemented curb the worst aspects of evolving AI. 1. The input to and goals of a neural network should be reviewed by a independent organization, similar to the Institutional Review Boards in place at universities. 2. Neural networks should not be able to search the internet for data. Neural networks should be isolated in a variant of Sensitive Compartmented Information Facility (SCIF) with no outside connections. 3. The output of a neural network should not be transmitted electronically through the internet. It should only be readable in a SCIF. Will this reduce the utility of neural networks? Yes. It also prevents harm. Unfortunately, there will be some billionaire or corporation or nation who buys a ship, loads it up with computer and communication technology and will break all of those rules. If climate change does not get us, the Singularity will.
Forrest (Spain)
@Brad or maybe, just maybe the singularity will save us from climate change. Because it is quite obvious by now that we are not going to do it ourselves.
Pete (CA)
@Brad Capital investments (AI) with the potential to generate huge profits, what could go wrong? “It is difficult to get a man to understand something, when his salary depends on his not understanding it.”
TB (New York)
The extent to which we are able to harness the extraordinary potential of AI for the benefit of humanity will determine whether the decade of the 2020s will be the best in the history of humanity or the worst. Right now we are on a trajectory for the latter, where the massive job destruction of AI and exponential growth in inequality it will fuel will be the tipping point to chaos on a global scale that will amplify current geopolitical tensions by orders of magnitude. This scenario makes comparisons to the 1930s rather quaint. Silicon Valley has failed us, rather spectacularly. It has already been "iterating" in re-architecting civilization, capitalism, and democracy, without the knowledge or consent of the rest of humanity. All to sell advertisements. In doing so it made a mockery of "listening to the voice of the customer". Either Silicon Valley and places like MIT execute the most consequential "pivot" in history, or history will be vicious in the judgement it renders.
John Brews ..✅✅ (Reno NV)
Yes, making the AI goal the improvement of the bottom line doesn’t incorporate much empathy for humans.
Pamela Michaels (Washington, D.C.)
It seems that the expansion of technology constantly outpaces people's ability to see and deal with its dangerous ramifications. As the AI machines 'learn' - and I don't care if it's how humans learn or not - perhaps they need to learn a system of ethics that will prevent them from making disastrous decisions.
b fagan (chicago)
@Pamela Michaels - it gets more complicated than "prevent them from making disastrous decisions", and we humans are going to have to face some uncomfortable decisions as we hand some control off to our created systems. Big example: in a world with cars and trucks, there are situations where death and injury become inevitable. Snap decisions are how people handle a lot of it, and we don't codify what the person is supposed to do. Soon, car control systems will have to make algorithmic, either/or decisions in situations like "steer to avoid hitting a) a mother with a stroller OR b) five adults standing nearby". Somewhere in society, in industry, in law, there's going to have to be decision making to support rules that come down to who will be hurt or killed when disaster of some sort is unavoidable.
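To make that concrete, here is a deliberately crude sketch of what an "algorithmic either/or" looks like once someone has to write it down. Every number below is invented for illustration; the uncomfortable part is that choosing those numbers is itself the ethical decision described above.

# Each candidate maneuver maps to an estimated outcome; the figures are
# invented stand-ins -- picking them is where the ethics gets encoded.
maneuvers = {
    "brake_straight": {"p_collision": 0.9, "expected_injuries": 1.0},
    "swerve_left":    {"p_collision": 0.4, "expected_injuries": 2.5},
    "swerve_right":   {"p_collision": 0.2, "expected_injuries": 4.0},
}

def expected_harm(outcome):
    # Collapse everything into one scalar: collision probability times injuries.
    return outcome["p_collision"] * outcome["expected_injuries"]

best = min(maneuvers, key=lambda m: expected_harm(maneuvers[m]))
print(best)  # whichever option minimizes the scalar -- here 'swerve_right'

Whoever defines expected_harm, and whoever estimates the numbers it consumes, is making exactly the who-gets-hurt call that society, industry, and law will have to stand behind.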
Pete (CA)
Artificial intelligence is a form of capital investment. Nothing more, nothing less. Who owns it? They're responsible. "Too big to fail" doesn't apply. But it's not "too smart to fail" either. And failures will happen.
Mike R (Kentucky)
@Pete thanks for making the most important point. AI is not a technical issue; it is an ownership issue. Twenty-six people own more than the next 3.5 billion people, and it is primarily technology that makes this absurdity possible. Are we to sacrifice all human progress over history so a handful of morons can take everything? AI will accelerate this concentration of wealth, as the 26 or so people will primarily own the AI. The issue is a social one, not a technical one. AI is a great benefit to all, except when it is made not to be beneficial for all. There is nothing automatic about the benefits of AI. The future society has a big problem in getting something simple understood: at the end of all the wires are just people.
Joe Ryan (Bloomington IN)
Surely, there should be a distinction here between teaching and doing. A.I. recommending a conclusion and a course of action is a lot different from A.I. taking that course of action on its own.
Fat Rat (PA)
@Joe Ryan Not really. As soon as people become comfortable with the 'wisdom' of AI, they will be all too willing to hand over the work and responsibilities of decision-making.
Chris (Michigan)
These machines aren't "learning" in the sense that you and I think. They're collecting vast amounts of data, running statistical analysis on it, and spitting out "answers" that are just the most statistically probable outcomes. What they learn, how they collect the data and where, and the list of possible outcomes is all controlled by human beings. In other words, AI (even in the foreseeable future) that has been programmed to analyze traffic patterns will not suddenly learn about the varying levels of heat involved in the consumption of spicy foods unless we tell it to do so. So when it's asked "How do we govern machines," you don't...nor do you have to. You use regular old laws that outline how society is to act, then you hold scientists to those laws.
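That description is roughly right for today's systems, and it is easy to see in miniature. A stripped-down sketch (toy counts, nothing like a production model, but the same basic idea of "most statistically probable outcome"):

from collections import Counter

# Toy "training data": (weather, time of day) -> observed congestion level.
observations = [
    ("rain",  "rush_hour", "heavy"),
    ("rain",  "rush_hour", "heavy"),
    ("clear", "rush_hour", "moderate"),
    ("rain",  "midday",    "light"),
    ("clear", "midday",    "light"),
]

def train(rows):
    """'Learning' here is just counting how often each outcome follows each condition."""
    counts = {}
    for weather, time_of_day, outcome in rows:
        counts.setdefault((weather, time_of_day), Counter())[outcome] += 1
    return counts

def predict(model, weather, time_of_day):
    """'Answering' is returning the most statistically probable outcome seen so far."""
    seen = model.get((weather, time_of_day))
    return seen.most_common(1)[0][0] if seen else "no idea"

model = train(observations)
print(predict(model, "rain", "rush_hour"))   # 'heavy' -- the majority vote
print(predict(model, "snow", "midnight"))    # 'no idea' -- it only knows what it was shown

Real systems replace the counting with statistics over millions of parameters, but everything such a model can ever say is bounded by the data and the outcomes humans chose to give it, which is the point about governing the people rather than the machines.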
Robert Stadler (Redmond, WA)
@Chris This is "learning," in the sense that it's pretty similar to how humans learn. It's just narrow in scope. The issue is mostly that some of our current policies and practices implicitly depend on things which are true for computers but not for people. A human can't look at a crowd walking through a subway station and recognize each person there to identify whether any of them have outstanding arrest warrants. If a computer can, should we do this? It's worth discussing.
Pete (CA)
@Chris “These machines do what they do because they are trained,” Mr. Abelson said. That is a dangerous cop-out. Who reviews the source data used to "train" them? How was it generated? What bias or manipulation was used in the process?
Vinney (Brooklyn)
@Chris How do you think your human version of "learning" differs? Your machine just happens to be wet. > [...] is all controlled by human beings There are plenty of examples of machine learning where the human controllers simply don't have visibility into how the AI arrived at its answer - and these are very early stages. It's not much of a leap to imagine a sufficiently-powerful AI adjusting some of the dials on its goals (even in an effort to satisfy the "given" goal, if you want to keep the human "controllers" in the picture.)
VJR (North America)
In the aviation industry, we say "FAA rules are written in blood" because many people died or suffered greatly in tragic aviation events that led to changes in aviation rules. I am deeply concerned that the artificial intelligence industry is going down the same route. Like drones and social media, it is far too much of an unregulated Wild West, and many people will suffer as society learns to deal with it.
Scooby Dude (Washington DC)
The other challenge is how to collect data, either inert data like traffic data or active data that humans generate through their activities. Will I have the option to not allow others - i.e., Google, Amazon, or Facebook - to use my data for their own gain? Can I allow some of my data to be used, or all of it? Can I receive compensation for my data? What if a company illegally uses my data - can I sue them? What's the "value" of my data compared to someone's in Kansas City, Kansas? How do companies or organizations validate the data they receive? How much data does an A.I. need in order to sustain itself and be effective? Just some thoughts.
Bob Krantz (SW Colorado)
@Scooby Dude To begin with, we have to define what is "your" data. Is that truly private personal information, or information about you? We would like to think what we write in a letter is private, but we should know that walking to the mailbox is not. When others wish to collect data on us, we should have a clear understanding of that--and may have to avoid some services if we do not want them to have that data.
Scooby Dude (Washington DC)
@Bob Krantz Bob, I agree with that. My data is PII. As for collecting on me, there isn't anything that prohibits someone from collecting on me. I concur that public information is just that, public, such as marriage, divorce, home ownership etc. Beyond that, what is my own data or information that I need to know about and control? I can't control my data if I don't know exactly what I generate, when I generate it and where it is within the "system".
Pete (CA)
@Scooby Dude Scooby, I would caution you to think of everything as data to someone. My friend scoffed at this idea as he played his Words with Friends on his phone. "My word choices are random". I said your latency in microseconds between letters tells someone more than you care to recognize. Don't hail a taxi with your phone in hand, gestures can be misinterpreted. Outcomes will depend on factors beyond your control.
OSS Architect (Palo Alto, CA)
There is a very profound legal problem to be solved here: liability. When something goes wrong, who do you sue? Who "pays"? That's the fundamental mechanism by which we try to compel ethical, moral, and inventive technical behavior in our society. The few accidents involving self-driving vehicles have resulted in complex legal cases. Tesla and others claim "the driver is ultimately responsible", but there is "shared liability" (man and machine) in most cases. Accidents happen because "systems" fail. They are "multi-factor", i.e. designers and engineers can't anticipate every mode of use. Now come machines that learn, adapt, and change their behavior.
YQ (Virginia)
@OSS Architect True, very difficult. It may be necessary to invent novel mechanisms to hold people responsible - we already see issues with the current system, so I hope we can find a just alternative that can adapt to current and coming technology. I certainly am not wise enough to predict a good solution, but perhaps something along the lines of a monetary pool funded by VAT-style taxes that would be dispensed to cover medical and direct damages only, not profit-making fines to discourage repeat behavior. Companies that produce technology with consistent failures could have higher VATs applied. Thinking of that makes me cringe at the bureaucratic inefficiency, which is why I shouldn't be making policy.
Vinney (Brooklyn)
@OSS Architect It's tough to sue software-embedded moral philosophy - especially if it's writing itself. I don't know yet whether I think this is a good or a bad thing. As has been said before me: AI is moral philosophy on a deadline.