If Algorithms Know All, How Much Should Humans Help?

Apr 07, 2015 · 57 comments
Ilya Geller (New York)
I taught a computer to understand text; I discovered and patented how to structure any data: language has its own internal parsing, indexing and statistics. For instance, consider two sentences:

a) ‘Fire!’
b) ‘Dismay and anguish were depicted on every countenance; the males turned pale, and the females fainted; Mr. Snodgrass and Mr. Winkle grasped each other by the hand, and gazed at the spot where their leader had gone down, with frenzied eagerness; while Mr. Tupman, by way of rendering the promptest assistance, and at the same time conveying to any persons who might be within hearing, the clearest possible notion of the catastrophe, ran off across the country at his utmost speed, screaming ‘Fire!’ with all his might.’

Evidently, the phrase ‘Fire!’ has a different importance in each sentence, because of the extra information surrounding it. This distinction is reflected in the phrase weights: the first is 1, the second 0.02; the greater weight signifies stronger emotional ‘acuteness’.
First you parse, obtaining phrases from clauses, sentences and paragraphs. Next, you calculate internal statistics, the weights, where a weight reflects how frequently a phrase occurs in relation to the other phrases.
After that the data is indexed against a common dictionary and annotated with subtexts.
This is a small sample of the structured data:
this - signify - <> : 333333
both - are - once : 333333
To see the validity of the technology, pick up any sentence.
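A minimal sketch of the weighting idea described above, assuming a phrase's weight is simply 1 divided by the number of phrases in its sentence and that phrases split on commas and semicolons (both assumptions are for illustration only, not the patented method):

```python
import re

def phrase_weights(sentence):
    """Split a sentence into rough phrases and weight each one as
    1 / (number of phrases), so a lone phrase carries full weight."""
    phrases = [p.strip() for p in re.split(r"[,;]", sentence) if p.strip()]
    n = len(phrases) or 1
    return {p: round(1.0 / n, 3) for p in phrases}

short_sentence = "Fire!"
long_sentence = ("Dismay and anguish were depicted on every countenance; "
                 "the males turned pale, and the females fainted; "
                 "ran off across the country at his utmost speed, "
                 "screaming 'Fire!' with all his might.")

print(phrase_weights(short_sentence))      # {'Fire!': 1.0}
print(len(phrase_weights(long_sentence)))  # many phrases, so each one weighs far less
```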
Eric (Colson)
I have found that machines and humans can be quite complementary in making decisions that require both empiricism and cognition. For example, both are required in order to recommend apparel items to customers. Stitch Fix (disclosure: I work there) is a styling service that combines expert human judgment with machine learning. The "algorithm" has to do many things - some of which are better done by machines, while others are clearly in the human purview. For example, finding patterns in data, estimating distances and similarity measures, counting the co-occurrence of purchases between items, etc. -- these things require billions of rote (and sometimes complex) calculations. Humans are capable of doing this, but they are far too slow and prone to error.

Yet there are other tasks that require more cognitive skills. For example, evaluating aesthetics and themes from a set of images continues to challenge our best machines. And machines have not demonstrated aptitude for empathizing with customers' needs expressed through free-form text or other forms of unstructured data. This is where humans are differentiated and add value beyond their higher cost of processing (and even here machines can help, but currently only by assisting the human processors).

For problems that require both rote calculation and cognition, the abilities of machines and humans can be additive.
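On the machine side, counting how often items are purchased together is the kind of rote calculation Eric mentions. A minimal sketch with made-up orders and item names:

```python
from collections import Counter
from itertools import combinations

# Each order is the set of items one customer bought together (illustrative data only).
orders = [
    {"striped tee", "skinny jeans", "ankle boots"},
    {"striped tee", "ankle boots"},
    {"wrap dress", "ankle boots"},
    {"striped tee", "skinny jeans"},
]

co_occurrence = Counter()
for order in orders:
    for a, b in combinations(sorted(order), 2):
        co_occurrence[(a, b)] += 1

# The most frequently co-purchased pairs: a crude similarity signal
# that a human stylist can then interpret and act on.
for pair, count in co_occurrence.most_common(3):
    print(pair, count)
```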
OSS Architect (San Francisco)
"Big Data" works well on models of "low complexity", e.g creating credit scores.

As you try to build more complex models, it starts to fail, because there is simply "too much data". You run out of computational resources to use the algorithms on the size of the data sets, or the program has to run so long that the system being modeled changes, temporally, before the analysis is done.

As Gary King of Harvard suggests, there is value in humans "tweaking" the "output" of data analysis, but that is not mathematically the correct approach. You can't change (editorialize) what the machine comes out with, but you can "tweak" the assumptions the machine starts out with.

You can submit Bayesian inferences to Hamiltonian Monte Carlo simulations. The Bayesian bit is a valid mathematical construct of your human guidance. The Hamiltonian bit is big data operating over huge dimensions of data. The Monte Carlo bit is a mathematical technique for establishing a "reality" between the human assumption and the data being crunched.

Humans have to communicate with big data programs in a precise and mathematically valid way. You can't skip this part.
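A toy version of this pipeline: the human assumption enters as a Bayesian prior, and a sampler reconciles it with the data. Here a plain random-walk Metropolis sampler and a tiny simulated data set stand in for Hamiltonian Monte Carlo over huge dimensions; all the numbers are illustrative.

```python
import math
import random

random.seed(0)
data = [random.gauss(2.0, 1.0) for _ in range(500)]   # the data being crunched

# Human guidance encoded as a prior: we believe the quantity is near 0, give or take 3.
prior_mean, prior_sd = 0.0, 3.0
noise_sd = 1.0                                         # assumed known measurement noise

def log_posterior(mu):
    log_prior = -0.5 * ((mu - prior_mean) / prior_sd) ** 2
    log_likelihood = sum(-0.5 * ((x - mu) / noise_sd) ** 2 for x in data)
    return log_prior + log_likelihood

# Random-walk Metropolis: a simple stand-in for the Hamiltonian Monte Carlo step.
mu, samples = 0.0, []
for _ in range(5000):
    proposal = mu + random.gauss(0.0, 0.1)
    if math.log(random.random()) < log_posterior(proposal) - log_posterior(mu):
        mu = proposal
    samples.append(mu)

kept = samples[1000:]                 # drop burn-in
print(sum(kept) / len(kept))          # pulled toward the data, tempered by the prior
```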
steve (asheville)
I didn't see any examples of real-life successes of "Big Data" procedures, other than the artificial situations of Jeopardy and chess contests.

I enjoy working/playing with computers, but they are not smart.

Even the best programming hasn't produced a "wise" computer.

So, just as with any comment from a human "expert", I believe it is necessary to independently check it with my internal analytical engine.
johndburger (Boston, MA)
The author mentions at least one successful application right up front - the ads being served to you are typically statistically targeted based on lots of data from lots of people, with an aim toward predicting what people like you will respond to. Lohr also mentions the consumer lending markets.

There are plenty of other examples - energy companies use models of demand to make predictions about how much reserve power to buy on the various markets, etc. For decades, airlines have used sophisticated models to try to predict demand, so that they can price seats appropriately. The list goes on and on.
alexander hamilton (new york)
Please do not assume your readers are children. Banks do not gather data on you to save you "billions of dollars." They do it to make billions of dollars, for themselves. Hey- let's have the bank "friend" me on Facebook and see who my colleagues are and what my children look like. This will improve the underwriting process for a loan application? If so, then the loan officer and bank president should give me access to THEIR Facebook profiles, so I can better know them and decide if this is the kind of bank I want to do business with.

The simple fact is, large institutions want more data; they can never have enough. At the same time, they make up the rules to suit themselves, without regard to data. When banks want your money, caution is thrown to the wind and everyone qualifies! But when the chickens come home to roost (2008 and after), now "we need to get to know our customers better" again. Right.

Maybe there's an algorithm to tell us which (if any) of the people who believe, like Harvard's Gary King, that "the goal is not necessarily to have a human look at the outcome...." should be allowed to reproduce.
Martin (New York)
This is the new civil rights issue. Every person has an inalienable right to be judged by what she actually says or does, not by what an algorithm predicts she will say or do.
Patricia Riveroll (Ottawa, Canada)
Are we using algorithms to make life easier?
We have lost human touch, communication, empathy, family time, but most of all we have lost understanding and respect towards one another.
Cathy (New Jersey)
I keep thinking that if hackers really wanted to have some fun they would focus on disrupting "Big Data". Wouldn't it be fun if there was a harmless (to the individual user) program that could run lots of random searches on your computer just to confuse those who want to collect "Big Data" about the user?
Mr. Robin P Little (Conway, SC)

Here is my distilled wisdom, filtered through the messages of others: the science-fiction writer Philip K. Dick, the recent warning of the British physicist Stephen Hawking, and the agreement of Bill Gates, the former CEO of Microsoft, with what Mr. Hawking said about the dangers of computerized machine technology taking control of our world and our lives in the decades ahead:

Me: we are becoming enslaved by "intelligent" machine technology. Here is the rule of thumb: the "smarter" the machines we build for ourselves, the greater and the more insidious the eventual enslavement of mankind by this technology.

There is no escape from this future possible for mankind except by these methods: stop building "smart" technology, stop using "smart" technology, and start feeding incorrect, wrong-headed and self-contradictory information into such machines in order for them to become "confused", disordered, and in order for them to stop working at all.

The financial meltdown of 2008-9 was the direct result of computer 'quants' devising such fiendishly complex financial derivative rules in the prior decades that something as simple as people ceasing to pay their ill-gained mortgages cascaded into a world-wide economic depression. Don't say you weren't warned.
Eugene Gorrin (Union, NJ)
Of course we need humans - who's going to remove the plug, reinsert it and re-boot if the system freezes?
James J. Cook (Ann Arbor, MI)
Here as just about everywhere else in our culture, the fundamental question is ignored. When it comes to knowledge of any sort, the basic issue does not concern the logic of the system but the assumptions underlying that logic. Start with the Euclidean geometry of Newton, for example, in which space is assumed to be rectilinear and absolute, and you will never get to the Riemannian geometry of relativity theory. Start with the medical establishment's allopathic assumption that nature is the enemy, the human organism a machine and medicine a matter of engineering and you will never find your way to health and well-being. Garbage in, garbage out.
Val S (SF Bay Area)
Watson cheated at Jeopardy, having an unfair advantage due to the lag between the time a human knows the answer and can hit the buzzer. To make it a fair contest Watson should have had a similar lag programmed in. Watson might still have won, but it would have been a better contest.
johndburger (Boston, MA)
You're making some unwarranted assumptions about how the buzzers work on Jeopardy. They're not operative until after the host finishes reading the clue. At that point, the producers manually arm the buzzers, and simultaneously a light comes on, which the players can see. But human players are very good at anticipating all of this based on when the host stops speaking. They consistently hit the buzzer faster than is humanly possible if they were actually relying on the light. Watson, on the other hand, has to wait until it sees the light. Human players out-buzzed Watson quite frequently.
matt polsky (cranford, nj)
Are data and analysis really "more science," or is that itself an illusion, however comforting?

Which data and what interpretation of the analysis, and who decides?

Here's something I co-wrote on the overall subject: http://www.greenbiz.com/blog/2013/08/19/what-moneyball-can-teach-us-abou...
Nanj (washington)
IBM's Deep Blue beat Chess Grandmasters - Game Pretty Much Over;

IBM's Watson beat Jeopardy Champions - Game over!
Michael Levine (Topanga, CA)
"Siri, I said I wanted to know if Texas was a red state, not where to have sex with a dead snake."
MikeM (Fort Collins,CO)
Showing the story behind the algorithm's logic process is very helpful for training and for tweaking the logic. Humans understand stories far better than numbers and percentages and multi-syllabic words.
Kenneth J. Dillon (Washington, D.C.)
Search engine algorithms seem to be utterly incapable of spotting an original, correct qualitative analysis of a complex problem. Of course, many humans are similarly incapable!
James Igoe (NY, NY)
As a software developer, I see how often errors in coding cause problems for people, so although a perfect algorithm might make better decisions, those base algorithms were initially coded by a human being. IBM might be able to reduce these types of errors, but there are many smaller, less competent firms out there.

Also, if a human makes a bad decision, it impacts one person, and a generally incompetent human might harm a string of people. If an algorithm makes bad decisions, the scale is much larger, since it will make the bad decision many times over, applied to many more people.

As a field, software development is becoming larger, with many small players. In the abstract, algorithms and big data can bring great insight, but they will be implemented by human beings, and that implementation will be flawed, to potentially disastrous effect.
Steve P (Southlake TX)
In my dissertation research, 30 something years ago, I had a data set of n=15. Today you might have 15 million or 15 billion observations, but successfully analyzing data still boils down to asking good questions and developing good theories. Bigger is not always better.
Barbara Duck - The Medical Quack (Huntington Beach, California)
One limitation of algorithms is that they don't do ethics, and that's purely human and a big part of our society...

http://ducknetweb.blogspot.com/2014/04/limitations-and-risks-of-machine-...

Correlations, too, are distinctively human.
Charles Packer (Washington, D.C.)
Actually, the algorithmic mind-set is intruding into some areas where humans are still doing the work. Take hiring, for example. In my occupation of computer programming, pre-interview tests of technical proficiency are now the rule. In the old days the manager for whom I would be working would conduct the interview himself. He'd know if I had the right stuff by my responses to a few leading questions. Nowadays, some young twerp, often new to English, administers a pencil-and-paper test with questions that amount to the software equivalent of declining "hic, haec, hoc."
The Scold (Oregon)
There are so many ands, ors, ifs, and buts in any set of circumstances complicated enough to warrant the use of computer algorithms that the idea is ludicrous. The instances where programmed algorithms produced garbage, or conclusions so simple that their use was of no value, are legion.

To me the pursuit of artificial intelligence is also ludicrous. Call me a Luddite, but I fail to see the appeal of us all becoming robot wranglers.

Consider the production of this newspaper: algorithms?
John H Noble Jr (Georgetown, Texas)
Maybe this is a better way to reduce the burden of excess administrators in health care, education, civil and criminal justice, human research protection, scholarly peer review, college admissions, and--most especially--the selection of politicians. As Watson gobbles existing data and points to optimized decisions, there is no reason why it cannot generate questions as it interacts with the data to increase the reliability of its predictions. In the interrogation mode, enhancing interaction between machine and the people whose data are being used may well reduce the uncontrolled bias of human decision makers, leaving for explicit scrutiny whatever bias is built into the algorithms themselves. Instead of accepting the authority of faceless bureaucrats, we consumers of society's goods and services can live our lives knowing not only the rules of the game but also that they will be applied even-handedly.
frazerbear (New York City)
The algorithm process appears to be premised on the assumption that our personal computers and cell phones will forever be free billboards for whatever spam they decide to spew. Hopefully at some point Congress will make changes in the law. If they want to invade my private space, they should either get my permission or pay me for the access.
Tom (Midwest)
The issue is not new. Those of us who have written and reviewed algorithms and computer models for a living understand this. The human at the front end has to make decisions (and assumptions) in many cases; bias can be introduced at this point. The second question is testing and revising the algorithm against the real world. The last question is reviewing the decisions of the algorithm, with a human making the final judgment. Analysis of big data often depends on pattern recognition, probability and statistics. As I learned from those classes and actual practice, recognizing the limitations of those analyses is something an algorithm cannot do.
Arnie Tracey (Ottawa, Ontario, Canada)
One clear benefit of well-constructed algorithms might be a reduction, or even the elimination, of racial bias of the sort that has historically had such devastatingly negative consequences for minorities, society, and the economy as a whole.
Gary (Oslo)
Ads based on algorithms are more often than not useless. For example, after I buy a birthday or Christmas present for someone on Amazon, I'm inundated for weeks with suggestions for similar things that I would never buy for myself.
Stephen Beard (Troy, OH)
Or identical things. I wonder sometimes if a computer can make a connection between the purchase of one object and the need or desire for another. Instead, I am left wondering what kind of bozo would believe that buying a pair of specialized gloves means I am interested in ten pairs of identical gloves.
andyreid1 (Portland, OR)
I can remember taking a class on "Logic" in college; the first thing you needed to do was throw out anything that appealed to emotion, as it had no place in a logical argument. And so algorithms do the same thing.

As humans, emotion tends to be involved in how we make our decisions. Algorithms don't have emotion, at least not yet. On eBay they knew I was a record collector, and their best picks were the Beatles and Michael Jackson; FYI, I never bought a record by either on eBay. Almost all the picks for me on the NYTimes tend to be articles I've already read.

Emotion is the trickiest part of the equation. Trying to decide when to put one of my dying cats to sleep I wouldn't trust to a machine. Unfortunately algorithms were created in a capitalist world where cost is often a hidden factor. Did car companies avoid recalls because the algorithms said it wouldn't matter? Would you rather have a human doctor talk to you about end of life decisions or let the HMO's algorithms make the decision?

I say no to algorithms, but hey I'm only human.
Dr jb blanc (france)
Do not forget that when a decision is made by "artificial intelligence," it opens up a whole new opportunity to manipulate the decision. The main concern for health is that Big Pharma will manage to get hold of the algorithms.
jimjaf (dc)
If algorithms know everything, then there isn't any need for human intervention. But that's a mighty big if, and so far they simply don't. Denying someone a loan solely because they live in the wrong zip code may make some business sense for the lender, but it doesn't serve the public interest and leads to broader bad outcomes, like encouraging people worthy of credit to move out of such areas, making those areas even more depressed.
Laer Carroll (Los Angeles, California)
Machines are good at routine. Humans at creativity. How to marry the two usefully is the challenge of the century.
Grossness54 (West Palm Beach, FL)
Can 'algorithms know all'? They can 'know' only what information they're given, and then take that along a logical tree; 'false' or '0' goes one way, 'true' or '1' goes another. They might be designed to handle a lot of information, leading to a huge number of possible choices, but in the end it all comes down to a variation on the job that pays 1 cent for the first day, 2 for the second, 4 for the third, 8 for the fourth, etc.; it doesn't take long to realise that if you can stick it out for a month you're set for life. That's why people stand back in awe of The Great Algorithm. It's that sheer number of possibilities that's overwhelming.
Of course, it's easy to forget that other eternal truth of computing, that acronym 'GIGO': Garbage In, Garbage Out. One bit of 'bad' (false) information and you're off in one of many wrong directions in that logic tree that just leave you out on a limb. Since ensuring that every last bit of entered information is absolutely accurate is realistically impossible, just what can you expect to get from a complete reliance on algorithms? In the immortal words of Marx (Groucho, that is): "Three guesses, and the first two don't count."
REB (Maine)
As I recall from Gamow's "One, Two, Three... Infinity," that progression would yield about $1 x 10^17 on the 64th day alone, much less the accumulated income from the previous days. As for the last quip, in junior high we always used to say, "Three guesses and the first three don't count."
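The arithmetic behind both versions of the doubling parable, for anyone checking REB's figure:

```python
# Pay starts at 1 cent and doubles each day.
def pay_on_day(d):
    return 2 ** (d - 1)            # cents earned on day d

def total_through_day(d):
    return 2 ** d - 1              # cents accumulated through day d

print(pay_on_day(30) / 100)        # ~$5.4 million on day 30 alone
print(total_through_day(30) / 100) # ~$10.7 million for the whole month
print(pay_on_day(64) / 100)        # ~9.2e16 dollars on day 64 -- roughly the 10^17 REB recalls
```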
EarthMom (Washington, DC)
Data is only as good as the people who decide how it's going to be used.
April Dunleavy (New Jersey)
And algorithms are only as good as the data used in the analysis. The data point that my mother-in-law died over a year ago doesn't seem to be considered by the algorithms banks use that still send her credit card applications.
ERP (Bellows Fals, VT)
IBM's program Watson is searching for medical "insights". But how is an insight even defined outside the context and background knowledge of humans?

And the article fails to mention the most ominous of presently touted developments: the "driverless car". It seems that only when a computer glitch (and all programs have them) causes a fatal accident will people become aware of the implications of this. The legal consequences alone will be a spectacle.
andydoc (NYC)
It’s high-tech profiling, and the potential for hidden agendas in a “scientific” black box is concerning. That is the real issue for me: not the question of the need for human overseers of ongoing individual outcomes, but how the algorithms are designed in the first place by people with corporate or political agendas.

“In banking, for example, an algorithm might be tuned to reduce the probability of misclassifying a loan applicant as a deadbeat, even if the trade-off is a few more delinquent loans for the lender.”

I will bank on the likelihood that the algorithm ultimately chosen will be weighted to actually increase the probability of “misclassifying” loan applicants. It’s the bank’s fiduciary responsibility to deny more deadbeats at the expense of a few unfortunate false positive loan seekers.
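The "tuning" in the quoted passage usually comes down to a decision threshold derived from asymmetric costs, and andydoc's prediction is simply that the bank's cost estimates will push that threshold low. A sketch with invented numbers:

```python
# The model outputs a probability that an applicant will default;
# the bank takes whichever action has the lower expected cost.
COST_OF_DEFAULT = 5000        # dollars lost lending to someone who defaults (illustrative)
COST_OF_LOST_CUSTOMER = 300   # profit forgone by denying a good applicant (illustrative)

def approve(p_default):
    expected_cost_if_approved = p_default * COST_OF_DEFAULT
    expected_cost_if_denied = (1 - p_default) * COST_OF_LOST_CUSTOMER
    return expected_cost_if_approved < expected_cost_if_denied

# With these costs the break-even point is roughly p = 0.057, so anyone scored
# above a ~6% default risk is denied -- the bias toward "misclassifying" applicants.
for p in (0.02, 0.05, 0.06, 0.20):
    print(p, "approve" if approve(p) else "deny")
```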
Jeff D. (Omaha)
Really? Algorithms are computer heuristics designed by humans for use by computer software to assist humans in making technical decisions; they are only as good as their implementation. The 'opacity' of data science exists only from the outside; those of us on the inside understand it quite well. There are limits to the level of complexity that can be calculated by a computer in a short enough span to make the decision useful in real time and under all circumstances.
Bob (SE PA)
The tone of this article suggests the writer assumes the existence of a universal right and a wrong answer in the debate, and gee whiz shucks it sure feels better when human decision-making is at least part of the process.

The good news is that the various scenarios are quantitatively testable: whether the goal is to maximize a financial, a marketing or a medical outcome, we can pit Algorithm A vs. Algorithm B vs. Algorithm A + Human Judgment vs. Algorithm B + Human Judgment vs. Human Judgment alone, and if the experiment is well designed and randomized with sufficient N (sample size), hypotheses can be disproven at the 99% confidence level. The good news for fans of human beings: human experts must design the systems, and the experiments proving the efficacy of those systems. And once a superior methodology is found, humans must sell other humans on the idea of adopting the proven, improved technique. And when this is done, those same humans can begin the process of identifying new candidates for still greater improvement, and the cycle goes on!

The writer seems to lament this brave new world, and by implication Science. I do not lament the advent and use of the scientific method in business and medicine. After all, science has taken us out of the slime and allowed us to advance as a species. Ignorance of science threatens to destroy us.
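A minimal sketch of the head-to-head test Bob describes, comparing an "algorithm alone" arm against an "algorithm plus human judgment" arm on randomized groups. The outcome rates are simulated purely for illustration, and a two-proportion z-test stands in for whatever analysis the real experiment would call for:

```python
import math
import random

random.seed(1)

# Simulated success/failure outcomes for two randomized arms (illustrative rates).
algo_alone      = [1 if random.random() < 0.110 else 0 for _ in range(20000)]
algo_plus_human = [1 if random.random() < 0.118 else 0 for _ in range(20000)]

def two_proportion_z(a, b):
    """z statistic for the difference between two success rates."""
    p1, p2 = sum(a) / len(a), sum(b) / len(b)
    pooled = (sum(a) + sum(b)) / (len(a) + len(b))
    se = math.sqrt(pooled * (1 - pooled) * (1 / len(a) + 1 / len(b)))
    return (p2 - p1) / se

z = two_proportion_z(algo_alone, algo_plus_human)
# 2.576 is the two-sided critical value for the 99% confidence level Bob mentions.
print(round(z, 2), "significant at 99%" if abs(z) > 2.576 else "not significant at 99%")
```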
Stephen J (New Haven)
Once you have enough data bearing on a particular decision (such as: "will this person buy a new car this year?" or "would surgical intervention improve the odds for a full recovery?"), a simple algorithm will generally outperform human experts. This is a "sad but true" fact, first put forward as a serious hypothesis by Paul Meehl in the 1950s and supported by lots of research ever since. Human judges can make contributions! We can perceive things in one another that cannot (yet) be perceived by mechanical systems. These observations can be entered into the algorithms. We can also work with unusual situations where we just don't have enough data to make a statistical prediction. And cumulatively speaking, those are very common! But when human judges second-guess the algorithm when it is being applied correctly, we are more likely to make changes in the wrong direction.
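One way to read that last point: the human's observations go into the formula as inputs rather than as overrides of its output. A toy unit-weighted rule in the spirit of that literature (the feature names, scaling and cutoff are invented for illustration):

```python
# Each case has a few mechanical features plus one human-rated observation, all scaled 0-1.
# The rule adds them with equal weights and applies a fixed cutoff; no second-guessing afterward.
def predict_positive(case, cutoff=2.0):
    score = case["test_score"] + case["base_rate_factor"] + case["human_rating"]
    return score >= cutoff

case = {"test_score": 0.8, "base_rate_factor": 0.7, "human_rating": 0.4}
print(predict_positive(case))   # the human's judgment counts, but as one input among several
```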
Nick (Oregon)
There are two factors that you aren't considering: (1) the "garbage in, garbage out" problem, where people who learn to game the algorithms gain an advantage over the general public, and (2) disruptive trends that occur outside the data history.

An example of the first factor is when people gamed the credit-card system by paying credit cards off with larger cards until their credit scores were bulletproof (a problem identified in hindsight). A new version of the same tactic might see someone join chess clubs on Facebook so the algorithm will qualify them for a loan. Or people who aren't on Facebook (like me) will get rejected based on an insufficient data history. (join us... join us...)

The second factor can occur in a field like the housing market, which grew for 30 years before having a correction. I can easily imagine an algorithm trending prices upward forever. How would an algorithm look at the California water drought, without human assistance? Its effects are going to hit like a sledgehammer when the water finally goes (next year? I hope not). Was it a computer or a human that made the "aquifer levels"/"almond prices next year" connection?
Gert (New York)
I don't quite get Mr. King's recommendation. In the case of diagnosing a patient in an oncology setting, would he suggest that the algorithm be biased toward diagnosing cancer, even if that means that a few people might get chemo unnecessarily? Or toward a negative diagnosis, which might mean a few people dying of cancer because they were left untreated?

Even though computers can analyze evidence much more quickly than humans, they will never have perfect information. Therefore, ideally, when Watson makes a "diagnosis," it should include confidence levels. For example, it should say "there is a lot of evidence in this case, so I believe that this patient has cancer with 95% confidence" or "there is very little evidence, so I believe that this patient has cancer but with only 55% confidence." In the former case, human intervention would likely be minimal, but in the latter, a doctor would need to pay close attention. Isn't that obvious?
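A sketch of the triage Gert suggests, with low-confidence calls routed to a doctor; the threshold and probabilities are placeholders:

```python
def route(case_id, p_cancer, review_threshold=0.90):
    """Act on high-confidence calls automatically; flag uncertain ones for a human."""
    confidence = max(p_cancer, 1 - p_cancer)
    if confidence >= review_threshold:
        return f"case {case_id}: report finding ({p_cancer:.0%} cancer likelihood)"
    return f"case {case_id}: refer to oncologist ({p_cancer:.0%} is too uncertain)"

print(route(1, 0.95))   # lots of evidence: minimal human intervention
print(route(2, 0.55))   # little evidence: a doctor needs to pay close attention
```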
DeHypnotist (West Linn, Oregon)
This gets back to an old, ongoing thread within philosophy and the social sciences -- empiricism vs. the humanist tradition, in which context and meaning are held to be critical to understanding human action. For one thing, the data on which algorithms tend to operate is usually based on what occurred - e.g., website hits, scanner data from sales - and does not take into account how the action was constructed. Without getting into the philosophy and sociology of knowledge and science, what I am trying to say is that the premises and assumptions underlying faith in big data cannot be taken for granted as "givens." Just as in quantum physics, human behavior gets harder and harder to pin down, like an insect collection....
John Graubard (New York)
I recall that in the early days of NORAD, a computer almost started World War III by mistaking the rising moon for a massive missile launch by the Soviet Union. In any event, reducing everything to a numeric computation also reduces people to numbers, and from there it is an easy step to simply "disregard" the inconvenient numbers.

Finally, remember that any computer program would have rated Bernie Madoff as a financial genius in the summer of 2008!
Gert (New York)
The computer never almost started a war--decision-making power was always in the hands of humans. It wasn't WOPR.

Also, keep in mind that human eyesight is hardly infallible. There are plenty of examples throughout military history of inflatable rubber tanks and other visual decoys, for example. Humans' inability to see what was in front of their eyes has led to several actual or near national catastrophes, from the legendary fall of Troy to the very real Cuban Missile Crisis. As long as we recognize computers' limitations, they are not necessarily "worse" than humans at what they are designed for.
SteveRR (CA)
I think you have watched too many bad Matthew Broderick movies.
Andy Hain (Carmel, CA)
I find it hard to believe that a computer could not be programmed to recognize results that are "too good to be true."
Tony Longo (Brooklyn)
The reason for human involvement in the algorithms of decision making is that decisions affect other people, often to their disadvantage. Framing the discussion in terms of "sales strategies", as if there were no harm done anywhere, is misleading; but introducing the subject of misclassifying loan applicants as deadbeats touches on more real issues, and is just the tip of the iceberg. If algorithms are permitted to create individual cases of human disaster, without the personal involvement of any decision-maker, there is no responsibility taken by anyone. "Tuning" systems to let in more loan-seeking deadbeats is not the answer - responsibility must be taken for individual cases, not for "policy." In the scenario advocated by this columnist, the only point at which a decision affecting human responsibility is taken is when human involvement is removed from the process - at that point, the humans designing the process abrogate their personal responsibility to maintain accountability to other humans.
jzu (Cincinnati, OH)
Data has its advantage in correlation. Humans may or may not do better at causation; I write "may." An example would be climate science: climate science is essentially a big-data analysis project. Scientists and politicians alike derive the underlying mechanisms, or causation. It is interesting to note that the causation arguments, despite the data, diverge widely across the political spectrum. This shows that there is a limit to people adding value to the data.
Another example is in healthcare. Any pharmacy advertisement states that people should consult their doctor before taking a pill. The most likely reason is not that doctors judge better, but that legal practices have figured out that falsification of a claim based on sparse evidence can be lucrative, even though the data correlation is perhaps clear. The recent vaccine debate is a great example.
Perhaps this is what perturbs humans most when relying on a data algorithm: they cannot look at the algorithm's body language, or look it in the eye and say, "I trust this man or woman." The algorithm lies beyond our intuitive framework of trust, although it may be superior.
minderbender (Brooklyn)
I will quote from Nate Silver's "The Signal and the Noise":

"The [National Weather Service] keeps two different sets of books: one that shows how well the computers are doing by themselves and another that accounts for how much value the humans are contributing. According to the agency's statistics, humans improve the accuracy of precipitation forecasts by about 25 percent over the computer guidance alone, and temperature forecasts by about 10 percent. Moreover, according to [Jim] Hoke, these ratios have been relatively constant over time: as much progress as the computers have made, his forecasters continue to add value on top of it. Vision accounts for a lot."

Experiences like this make it EXTREMELY hard to understand the sentiment that "more science, less gut feel and rule of thumb" will improve forecasting. To argue that humans introduce only bias, and not predictive power, when they interact with computer models is to ignore the data in a fairly shocking way - one might even say, in a way that reflects gut feel and bias, rather than respect for the data.
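The "two sets of books" are straightforward to keep: score the raw model forecast and the human-adjusted forecast against what actually happened, then report the difference. A sketch with invented rainfall numbers:

```python
# Observed rainfall vs. the raw model forecast vs. the forecaster-adjusted forecast (illustrative).
observed    = [0.0, 0.3, 1.1, 0.0, 0.6]
model_only  = [0.2, 0.1, 0.6, 0.3, 0.9]
with_human  = [0.1, 0.2, 0.9, 0.1, 0.7]

def mean_abs_error(forecast, actual):
    return sum(abs(f - a) for f, a in zip(forecast, actual)) / len(actual)

e_model = mean_abs_error(model_only, observed)
e_human = mean_abs_error(with_human, observed)
print(f"human value added: {100 * (e_model - e_human) / e_model:.0f}% lower error")
```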
NA Fortis (Los ALtos CA)
This analogy has percentages buried in it somewhere, and it is admittedly a stretch. But consider long-distance air travel: the senior pilots of the major carriers will probably admit that they get an awful lot of money for (help with) taking off, (help with) landing, and hours of mostly sitting.

But now and again an event occurs that trumps all the automatic controls, and the possibly overpaid pilot earns three times his or her salary, and more, in possibly under five minutes.

A nice "override" to have (no pun intended)

Naf
Stan Continople (Brooklyn)
If the humans and the machines disagree on a conclusion, it can go to arbitration - by an algorithm, of course.
5barris (NY)
My thirty-five years of experience with computers informs me that they require constant human monitoring for glitches.
Arif (Toronto, Canada)
Recommended on the unquestionable basis 'Fact or Funny' of course!