How Artificial Intelligence Can Save Your Life

Jun 24, 2019 · 193 comments
Steve Fankuchen (Oakland, CA)
David Brooks is off base in two ways. First, any technology, any tool, always has been and always will be used for ill as well as for good. The world's bad guys are just as smart as the world's good guys (as well as often more motivated), and to pretend otherwise is a self-deluding fantasy. Second, though he is usually adept at language, Brooks has forgotten that "artificial intelligence" is essentially an oxymoron, promoted by corporate marketing departments to make their products seem, very profitably, to be more than they are: merely a current version of snake oil. Brooks, who is usually at his best when looking at the big picture and underlying human nature, has somehow let himself get sucked into the black hole of the Tech Savior Cult this time. Perhaps there are personal issues in his life that impel him to write, "But if it’s a matter of life and death, I suspect we’re going to [choose A.I.]." If that is the case, I truly wish him the best and want to thank him for often providing me, and many others, substantial things to think about seriously, even a new perspective at times. I would suggest to Brooks that if he is, for whatever reason, newly concerned with life's meaning, he take a look at "The Fate of Man," edited by Crane Brinton, an excellent compendium of thought on the subject through the ages, with an excellent introduction by Brinton.
Ama Nesciri (Camden, Maine)
If A.I. points out the G.A.U. (God-awful unintelligent) reality of our woebegone president and company, I'd be willing to concede there is hope for its place among us.
Lillies (WA)
The term "Artificial Intelligence" has always made me giggle: How about some real intelligence? You know, like something that's not in this editorial?
Craig (Portland)
The ability to verbalize thoughts? I bet prosecutors would love that device. Awesome, indeed.
Robert Henry Eller (Portland, Oregon)
Natural intelligence is the only thing that will save us from wiping ourselves (and much else) off the face of the Earth. Instead, natural stupidity and willful ignorance appear to have the upper hand. The only way we're going to survive Trump and the Republicans is if AI can learn to flatter Trump better than Tucker Carlson, Sean Hannity, Kim Jong Un, Xi or Vlad can. And it better happen soon.
UWSder (UWS)
You mean David Brooks is a robot?
SystemsThinker (Badgerland)
This is bigly...... David, did the AI folks do a run on the Stable Genius? Use your influence to do something for your country. Imagine what a complete AI workup would reveal about our AI President!
JL (Los Angeles)
Avoiding the tough topics of the day engendered by your beloved Republican Party and sacred "conservative values"? Columnist, or coward by any other name.
Galencortina (Hollywood)
All the impact of Brooks' political discourse.
Steve (Seattle)
Can you imagine hooking up one of these devices to trump? The AI would experience fatigue, confusion, lunacy.
Socrates (Downtown Verona. NJ)
Republicans have been successfully using artificial God-Guns-Greed 'intelligence' for decades to try to save the American 'way of life'. Unfortunately all it did was cause massive national brain damage.
John (NYC)
Curious. A.I. may be the very thing the scientific community...okay...maybe its fringe community, has been seeking for many years. A.I., too. Alien Intelligence. Because isn't this exactly what we may be creating? An intelligence foreign to the human intellect, one informed by our own but with its own sensibilities? It may already be happening. Ask any A.I. coder. Odds are they will acknowledge that they are not sure how, or why, their A.I. "child" acts as it does. And it will be one which will view and analyze us as the animals we actually are. So that intelligence is being birthed. Like a visitor to a zoo it can look into our reality cage from between the bars of its 'Net space and, in seeing us as the hairless chimpanzees we are, animal after all, can tease out those things about us that we cannot see in ourselves. It can study us as Homo sapiens sapiens; or Homo Quirkicus. Alien Intelligence on the cusp of existing as....Artificial Intelligence. Would this be viewed as ironic by that fringe scientific community? Or a fearsome horror? In any case did someone say Brave New World? Yes, indeed. John~ American Net'Zen
Lisa (Maryland)
It is no surprise that someone as tone deaf to human suffering as Brooks would celebrate the replacement of empathic human listeners with machines.
Chris Manjaro (Ny Ny)
"You tell me if that’s good or bad." It's both.
stonezen (Erie pa)
Dear DAVID BROOKS, You should watch the NETFLIX movie "I Am Mother": https://www.youtube.com/watch?v=wm5F4Wj_fUE I'm concerned that AI will lead to self-aware machine(s). In that case they may save our lives now, but later it is another story. Please go watch.
Walking Man (Glenmont, NY)
Imagine what the Russians could do with this in terms of election interference. But, hey, it could be some 400-pound guy on his bed in Jersey.
Kindle Gainso (New York)
The last time AI (computer analytics) hit was in 2008 because that was showing ($ $)
teoc2 (Oregon)
How Artificial Intelligence Can Save Your Life... your job—not so much
Ellen (San Diego)
This column about using artificial intelligence to diagnose disease states is dystopian and Orwellian. As things currently stand, doctors are forced into little boxes of diagnosis by BigHealthCare - the insurance companies like to pay for only this and not that. I feel sorry for the medical profession and I feel sorry for us if this all comes to pass. When did we vote to have artificial intelligence anyway - and who will be in charge of it? Billionaires, most likely.
Karen (MA)
AI=$$$....'nuff said.
Steve Fankuchen (Oakland, CA)
David Brooks, who is usually at his best when looking at the big picture and underlying human nature, has somehow let himself get sucked into the black hole of the Tech Savior Cult. Perhaps there are personal issues in his life that impel him to write, "But if it’s a matter of life and death, I suspect we’re going to [choose A.I.]." If that is the case, I truly wish him the best, and want to thank him for often (especially in the last couple of years) providing me and, I expect, a significant number of us, things to think about seriously. I would suggest to Brooks that if he is, for whatever reason, newly concerned with life's meaning in a very personal and not just an intellectual way, he take a look at "The Fate of Man," edited by Crane Brinton, an excellent compendium of thought on the subject through the ages, with an excellent introduction by Brinton. However, as this is a public column Brooks has written, I will take the liberty of suggesting that he is way off base in two ways. First, any technology, any tool, always has been and always will be used for ill as well as for good, and the world's bad guys are just as smart as the world's good guys (as well as often more motivated). To pretend otherwise is a self-deluding fantasy. Second, though he is usually adept at language, Brooks has forgotten that "artificial intelligence" is an oxymoron, promoted by corporate marketing departments to make their products seem to be more than they are: merely a current iteration of snake oil.
Steve (Maryland)
AI has given our providers the ability to record nearly every sneeze, cough or fart. Wonderful. Amazon invades my on-line reading with reminders of purchases past. Did it ever occur to them that they might be driving me crazy? Not enough to take my life . . . at least not yet.
Tamer Labib (Zurich (Switzerland))
The question is not whether this will be good or bad, but rather “are we ready to know more about ourselves?”
George Shaeffer (Clearwater, FL)
I am adamantly pro-choice. The central issues of the pro-choice position are privacy and personal control over one’s own body. Carried to its logical conclusion, this gives me my own choice about my death. Whether or not I am going to end my own life and why is no one else’s business. I am far more concerned with what one could call “life panels” than with the fictitious “death panels” claimed by opponents to the ACA.
Steve Fankuchen (Oakland, CA)
There was a doctor related to the Reagan Administration (I forget the details) who concluded you could predict a person's propensity for future criminal activity by the shape of their ears. I believe she was advocating the Administration use that "fact" for some sort of legal intervention regarding the persons so identified. What Brooks is describing here is essentially a refined, profitably privatized version of that.
sdavidc9 (Cornwall Bridge, Connecticut)
If the AI knowledge is used to make money, as Facebook and Google use our clicks to sell our attention to those who would sell us something, it is probably more dangerous than if government uses it to control people. Brave New World is more dangerous than 1984 because it is more likely to be stable and ongoing.
Jacquie (Iowa)
According to Brooks's way of thinking, AI should be put to work immediately to stop all the school shootings and other gun violence in the US. I doubt even AI can do that.
John (Upstate NY)
At what point does AI trigger some sort of intervention? Is that good or bad? You tell me.
Mike S. (Eugene, OR)
Maybe AI can finally tell us who should not have a firearm. That should be real interesting.
John Lusk (Danbury,Connecticut)
This might come in handy in legitimate gun sales.
richard cheverton (Portland, OR)
A remarkable column from Mr. Brooks. Insightful in many ways. But a couple of sentences really stood out: "At some level we’re all strangers to ourselves. We’re all about to know ourselves a lot more deeply." This has enormous implications medically---but also socially and politically. It will be utterly fascinating when AI sets to work analyzing political speech (might start with the incidence of the word "patriarchy" in the New York Times and work from there.) As the loudest voices among us move to the extremes, there is a sense among us onlookers that "we're all strangers to ourselves." A lesson worth contemplating.
PBJT (Westchester)
It is refreshing – and right – to see a conservative thinker who has a sense of balance. Brooks once wrote that he sees “the necessary skill of public life” as “the ability to see two contradictory truths at the same time.” He bookends today’s column with “you decide” antics that challenge us to weigh the ethics and possibilities of both sides. He invites us to make decisions based on facts, not just guts, which shouldn’t be as rare as it is. In this case, though, I’m wary of the side that cedes to the inevitable rumble of AI. Brooks justifies this in the case of “life or death,” but were this application of AI to roll out, it would inevitably find less extreme applications. A walk in the woods with a friend, or an Imago therapy session, would become … a neuron scan? Pope Francis said, “Decisive progress on this path cannot be made without an increased awareness that all of us are part of one human family, united by bonds of fraternity and solidarity.” Whichever path we go down, we know that human connection is the thing that staves off the darkness. We have no choice, it seems, but to accept that AI is here to brush our teeth, park our cars, and get products off our shelves, but when it outsources human expression and the vulnerabilities that come with connection, then we are less human.
Blackmamba (Il)
Nonsense. The emphasis should be on the artificial instead of the intelligence. Arithmetic is not science. Mathematics is science. Biology and chemistry and physics are science. Algorithms and statistics are neither intelligent nor knowing nor smart nor wise. They are tools. Machines don't 'know' anybody or anything, any more than a recording of a human image or sound 'knows' the person recorded.
goofnoff (Glen Burnie, MD)
Humans are overwhelmingly vain about free will, when in fact we are as conditioned as Skinner's pigeons. We are constantly bombarded with efforts to use our own subconscious to control us. It works, or advertisers wouldn't use it. AI will just be a hyper-fast way of reading our subconscious and correcting our conditioning. Employers can insert a little receiver subcutaneously that gives an electric shock for immediate negative reinforcement when our actions or words betray a straying from the corporate ideal.
Joseph John Amato (NYC)
June 25, 2019 Getting to know ourselves is beyond human expectations, and as much as AI can help, natural evolutionary progression tells us much about the assembly of the human design, for which we are all thankful. What we do with this gift, given to us by whatever design plan, and daring to ask about the nature of AI, is much about a philosophical and editorial inflection toward insight into the greater collective narratives of how we live and use our matter, and indeed spirit, toward solutions for kindness to self and others. So on the mass collective level, our judgment always needs refinement, as Mr. Brooks seeks, and he will surely give much further discourse and explanatory writing in his column, with thanks.
Jay David (NM)
It's true that no one knows her or himself. That's why, when the subject speaks, it is said to be "subjective," not objective. However, in real life the perceptions of the subject matter at least as much as the objective data. And although the machine may save my life now, some day, perhaps later today, perhaps tomorrow, I will still die, Mr. Brooks. We should spend more time caring about what our legacy to the world will be, because not dying is NOT an option.
Jacquie (Iowa)
What if all the AI is used against us by the Government, the states, employers, insurance companies etc? AI is moving fast but ways to assess its accuracy and uses need to be put in place.
LL (Boca Raton)
I'm an attorney and a data privacy law expert. I have a lot of qualms and misgivings about AI's use (and misuse) of our personal data. However, I fully support and embrace medical professionals using AI as a diagnostic tool. This is a beneficial and moral way to employ our technology - one that will improve health and save lives. I believe all medical fields will benefit from this. Carry on!
Sam Kanter (NYC)
Has David not considered the considerable dangers of relinquishing too much control and influence to AI? How it can be used by greedy or malevolent forces? I guess he’s not a fan of science fiction - which is becoming reality.
Justice Holmes (Charleston SC)
Sadly, although AI is awesome, it is also programmed initially by humans, mostly male, whose biases will taint not only the “calculations” but also the conclusions, and perhaps grow more hard-wired and unbending. I’m happy for those for whom computer-assisted therapies can provide help and some return to normalcy, but I don’t want AI to be put in control of anything. Once it’s in control, that control will grow and limits will be erased. Dystopian thinking, perhaps, but neither irrational nor unduly pessimistic.
Alan D (Los Angeles)
Much of what Brooks calls artificial intelligence is really just enhanced pattern recognition and large data crunching that computers have been doing since the beginning of the Information Age. But humans are more complex than the mere sum of their outward quirks and tics, and the ineffable human soul cannot be reduced to an algorithm, benefits of AI notwithstanding.
John (Ottawa)
I am all for this kind of innovation, as long as it's not all centralized in one data bank and not used to manipulate me, endlessly.
CathyK (Oregon)
Interesting article. My husband and I just had this great argument about cancer and AI; he was arguing that no two of us have the same set of fingerprints, yet we are all treated with the same cancer drugs, like a crapshoot. Now with AI we will know exactly what we have and use the best drugs available.
Melitides (NYC)
A question: were there any misdiagnoses, that is, subjects tagged as suffering from depression who are not actually suffering from any disorder? In the future, will the algorithms be the final judge of a person's mental health, and will that person be required to take medication(s)? Many novels have been written based upon the abuses of psychiatric hospitals of the nineteenth and early twentieth centuries. AI will be the fodder of a Charles Reade for the twenty-first century.
Village Idiot (Sonoma)
What the world needs is less Artificial Intelligence and more of the Real Thing.
Peter Liljegren (Menlo Park, California)
U.C. San Francisco, arguably the world's #1 medical research university, 30 miles north of Silicon Valley, is dedicated to expanding the limits of what is possible, as stated in its radio advertising. In economic jargon, it changes the world's production possibility frontiers. Generally I trust our medical research universities to be more meaningful, trustworthy, and wise than our Silicon Valley entrepreneurs. For example, if 'we' can articulate thought and emotion, 'we' in Silicon Valley can integrate smart sensor technology for the public display of these thoughts and emotions. Install these systems in San Francisco and NYC after-work 'dating bars,' and what will we get? Hopefully all handguns are left outside.
John Jones (Cherry Hill NJ)
DAVID BROOKS'S article is a highly informative introduction to the world of Artificial Intelligence algorithms and their purported efficacy in identifying suicidality with greater accuracy than human observation. Using the results of AI screenings is but the first step. The next step is more challenging--that of compliance with the recommendations of the emergency worker. The behaviors described are largely unconscious, so those who contact suicide hotlines may not view their word choice and manner of speech as part of their conscious selves. After that, there is the challenge of getting mental health support. And most difficult of all, compliance. No doubt the AI interventions described are important contributions. But they must be viewed in the context of engaging the person calling for help, establishing a therapeutic relationship, and supporting compliance with a treatment plan. Easier said than done. The question then arises whether moving through the next steps after screening to getting help could be machine-based as well. I'm skeptical. But then I'd never have believed that virtual reality videos could be used to help traumatized military personnel heal efficiently. That said, I think it's important to look around to see other models and their interventions and efficacies.
PE (Seattle)
It will not be long until people start to decode what AI looks for. Whatever the purpose, whether it be depression, risk management, whatever, schools will form on how to fool AI. Maybe a future major in AI Manipulation from Harvard or Stanford.
RRI (Ocean Beach, CA)
This column should give Brooks readers pause, because his happy-face compassionate conservatism, his civic do-gooderism is evidently entirely compatible with welcoming a total surveillance society in which AI monitors the herd to prevent any of the corporate-state livestock from getting too depressed and ending their lives before completing their expected quota of online purchases.
J. Cornelio (Washington, Conn.)
Whoever is in control of a technology which eliminates secrets is going to have awesome power and something tells me we are a long way from having the wisdom to know how best to harness and use that power. With fear being our most primal instinct, that awesome power will almost certainly be used to eliminate "threats". It's a sad paradox that the technology which will give us the power to eliminate threats is likely to embody the greatest threat to what we have always taken for granted as human beings.
peter n (Ithaca, NY)
This piece starts off with the difficulty doctors have in labeling who is depressed and who is not, and then goes on to talk about how good A.I. is at categorizing people into those same labels. The problem here (perhaps only with the piece and not the underlying technology) is that the algorithm would depend on a human-generated training set, because 'depression' is not an objective thing, it is a concept we have created. Perhaps what the piece should be saying is that A.I. is much better at predicting suicide attempts. It could be that people who use one kind of language are much more likely to attempt suicide, while another group might be suffering just as much or more, but are more likely to stop short of taking their own lives, which is reflective of some kind of detectable pattern to their thinking. Suicide and depression are different, and solving one is not necessarily going to solve the other.
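peter n's point, that a "depression" classifier can only reproduce its human-generated training labels, can be made concrete with a toy sketch. This is pure Python; all texts, labels, and the scoring scheme are invented for illustration and imply nothing about any real clinical model:

```python
# Toy sketch: a text classifier is only as "objective" as the
# human-assigned labels it was trained on. All data is invented.
from collections import Counter

def train(examples):
    """Build per-label word counts from (text, label) pairs."""
    counts = {}
    for text, label in examples:
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts

def classify(counts, text):
    """Score each label by word overlap with its training vocabulary."""
    words = text.lower().split()
    scores = {label: sum(c[w] for w in words) for label, c in counts.items()}
    return max(scores, key=scores.get)

# Whoever writes the labels defines the concept; the model merely
# reproduces whichever labeling scheme it was trained on.
labels_a = [("i feel tired and hopeless", "depressed"),
            ("great day at the park", "not depressed")]
model_a = train(labels_a)
print(classify(model_a, "so tired and hopeless lately"))  # prints "depressed"
```

Retrain the same code on a second clinician's different labels for the same texts and it will just as faithfully reproduce those instead; the algorithm supplies no objectivity the labels lack, which is exactly why predicting a concrete outcome (a suicide attempt) differs from predicting a constructed category (a depression diagnosis).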
Don (Tucson, AZ)
The medical use of AI described is similar to an engineering function in companies I worked at, labelled 'integration'. While most engineering was accomplished by narrow specialists, balancing conflicting needs of those specialties requires less depth across a broad spectrum of disciplines. I expect the AI algorithms Mr Brooks describes will naturally fit into that role.
__ (USA)
I would rather die than let some Silicon Valley company make tons of money "saving" my life with A.I.
Luisa (Peru)
I think it was very smart of the Times to publish this side by side with Cory Doctorow’s piece. Again and again and again, it is all about democracy, which, in my very humble opinion, can only be defined as contained entropy.... I also find it fascinating that, precisely because as individuals we are so profoundly different from one another, to AI we are all truly peers. No preconceived generalizations work for an AI agent. Of course, it all depends on the algorithms...
TDHawkes (Eugene, Oregon)
@Luisa Yes, ma'am, the algorithms can be like those that keep black people incarcerated or fire schoolteachers based on problematic assessments of teacher quality (https://en.wikipedia.org/wiki/Weapons_of_Math_Destruction). However, if algorithms are appropriately monitored and updated as needed, they can be very powerful for precision medicine (https://ghr.nlm.nih.gov/primer/precisionmedicine/definition), because human health is affected by hundreds of interacting variables, which is too much data for current statistical methods and even human intuition.
Michael (Evanston, IL)
HAL 9000: "I'm afraid I can't do that, Dave."
Glenn (New Jersey)
Computers can be a tremendous help in many areas of life, no one can deny that. But none of the examples you give are AI. They are algorithms and massive database search programs that have been built by very smart programmers and analysts, but the programs are not intelligent and do not think. Even self-learning programs are not "intelligent", they are just refining their own database of knowledge by winnowing out false conclusions and dead ends. They are not thinking, just grinding away, burning a lot more energy than the human brain, coming to solutions of problems programmed by their creators. They can't come up with any problems of their own.
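Glenn's description of "self-learning" as mechanical refinement rather than thought can be illustrated with a minimal sketch, a textbook perceptron; the task and all numbers here are invented for illustration:

```python
# Minimal "self-learning" program: it refines numeric weights by
# mechanically penalizing wrong answers. Nothing here resembles
# thinking, and it cannot pose a problem of its own.

def train_perceptron(data, epochs=20, lr=0.1):
    w0, w1, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in data:
            pred = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            err = target - pred      # "winnowing out" a wrong answer
            w0 += lr * err * x0
            w1 += lr * err * x1
            b += lr * err
    return w0, w1, b

# Learn logical AND from examples; the "knowledge" is just three numbers.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, b = train_perceptron(data)
predict = lambda x0, x1: 1 if w0 * x0 + w1 * x1 + b > 0 else 0
print([predict(a, b_) for (a, b_), _ in data])  # prints [0, 0, 0, 1]
```

The "learning" is three numbers nudged by arithmetic whenever an answer is wrong, which is Glenn's point: refinement of stored state, driven entirely by problems its creators supplied.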
Unconventional Liberal (San Diego, CA)
Employers and the state will adopt A.I. faster than David Brooks can say "good or bad." If A.I. can detect "risk" (and it can), then employers and the state will justify applying A.I. to everyone in the interest of "safety". In the future, everything we say and write will be analyzed by A.I. algorithms, and that information will be cross-referenced with our DNA sequence. The end of freedom comes when our DNA and our thoughts become our probability and our fate. None of us like it, but we will ultimately accept it in the name of "safety."
stan continople (brooklyn)
@Unconventional Liberal And since these algorithms are completely opaque, you will never know why your life has taken any particular turn. There is no court of last appeal and eventually even the coders who wrote the routines will have died off, while the "fate engines" continue to churn away.
LL (Boca Raton)
@stan continople First of all, we already have federal law - the 2008 Genetic Information Non-Discrimination Act - which makes this parade of horribles illegal. Second of all, if you are on social media, you are already subjecting your written words to algorithms for targeted ads. I don't buy slippery-slope arguments. And detecting "risk" can be a great thing when people are about to harm themselves or others. Nightmares and tragedies are prevented by "if you see something, say something," and AI can help "see something."
sdavidc9 (Cornwall Bridge, Connecticut)
@Unconventional Liberal Freedom does not involve just unpredictability; if it did, then sufferers from Tourette's syndrome would be among our freest. Mozart was free and unpredictable even though his music is also perfect in the sense that the listener hears that the music has to develop the way it does. Freedom involves mastery and creativity. Those who know or have a sense for how they are being manipulated are beyond the manipulation in that they can choose to accept or fight or fool or scam it (manipulate back at the manipulators).
Walking Man (Glenmont, NY)
On the other hand.....how many people will be put out of work by AI? We have not figured out what to do with all the people whose jobs will be lost to technology. You know all the people whose careers are gone, who have a reduced standard of living, and who are trying to find purpose in life. But 'we' aka a machine will be able to track them down. And make sure they don't harm themselves so we can offer them 'hope' for the future.
Jay Brown (Charlottesville, VA)
Can AI predict who is likely to become a shooter in a school or business?
hat (Tucson, AZ)
AI gave Stephen Hawking (1942–2018), the world-renowned astrophysicist, the ability to communicate verbally and in writing (allowing him to author best-selling books, stay connected to the Internet, and give lectures around the world) after he became, soon after 1986, totally physically incapacitated, save a faint cheek-muscle twitch. Though Hawking’s success in life was in essence AI-enabled, ironically “[he] cautioned against an extreme form of AI, in which thinking machines would take off on their own, modifying themselves and independently designing and building ever more capable systems. Humans, bound by the slow pace of biological evolution, would be tragically outwitted.”
RLB (Kentucky)
Not only is AI useful in treating the ills of the individual, but artificial intelligence will one day save all humans. For centuries, we have suffered from confusion and ignorance about our beliefs, which has caused us untold suffering and deaths. Now our beliefs threaten to destroy us all. In the near future, we will program the human mind in the computer based on a "survival" algorithm, which will provide irrefutable proof as to how we trick the mind with our ridiculous beliefs about what is supposed to survive - producing minds programmed de facto for destruction. These minds see the survival of a particular belief as more important than the survival of us all. Programming the mind in a computer will allow us to understand this; and when we understand all this, we will begin the long trek back to reason and sanity. See RevolutionOfReason.com
Joseph Lawrence (Worcester, MA)
Well, there you have it. David Brooks, the last of the humanists, has gone over to the social engineers, those intent on using algorithms and the like to keep us all safely numb but securely alive (as if securely alive and truly alive were even compatible). Brave New World here we come!
David Patin (Bloomington, IN)
Is it possible that the use of certain words indicates suicidal thoughts? That given enough data, patterns would emerge that predict attempted suicide with a high degree of success? Maybe, but I would have thought that too many cultural differences in how we speak and the words we use would reduce the confidence in those predictions. But one switcheroo in David Brooks’ column needs pointing out. He starts out talking about mental health and suicide, then uses an example of word choices to claim that “a person who uses words like ‘ibuprofen’ or ‘Advil’ is 14 times more likely to need emergency services than a person who uses ‘suicide.’” But emergency services are not synonymous with a suicide attempt. In the next sentence Brooks states, “A person who uses the crying face emoticon is 11 times more likely to need an active rescue than a person who uses ‘suicide.’” Is that an active rescue following a suicide attempt, or a medical issue? I wish I could trust David Brooks not to switch these terms just to make a column sound more dramatic. But I’ve learned from experience not to.
Anam Cara (Beyond the Pale)
Most depression is repressed trauma, long forgotten or dissociated. Some years ago, a neighbor locked himself in the bathroom and slit his throat. His mother and father were there in a rare visit to his family. His father broke the door down and saved him. Later, he told me he had a psychotic breakdown due to a recent promotion at work where he was having tremendous difficulty in his new role as supervisor of staff who were former colleagues. I suspect his near suicide was triggered by a more proximate cause and still remains hidden, even to himself. AI might be able to predict, but not uncover the etiology of mental illness.
Skaid (NYC)
So the Socratic dictum, "know thyself" is reduced to knowing how something other than us is measuring us? Egads...
AIR (Brooklyn)
Artificial Intelligence is a misnomer. It's just pattern recognition; quite a distance from intelligence.
Daniel12 (Wash d.c.)
AI can diagnose depression better than humans? That's rich. I suppose it would be too much to ask it to diagnose happiness as well, not to mention prescribe a course to happiness. "AI, I see you have diagnosed me as depressed, what do you suggest I do to become happy? Acquire a billion dollars? Regress to being an infant? Imitate my dog? Have my own column in a major newspaper where I get paid as much as possible for the most repetitious and safest work possible? Turn to drugs? And who are the happiest people anyway, and will they be happy if I attempt to do as they do, not to mention attempt to force them to share in their happiness?" You can be sure the powers that be will be only too pleased to have AI diagnose unhappiness but will do everything possible to have it disguise their chortling behind your back. You get power's diagnosis of your state and you accept power's cure, or you find yourself very depressed indeed. How do powerful people cure themselves of depression anyway? More power? More money? More status, titles? Start a war? Acquire a property? I think the most promising outlook for AI, the best employment for it, would be to become a comedian, the joker in the pack. I'd really like to see it become acute at diagnosing depression and happiness and suggesting cures for humanity. I can see powerful people everywhere fulminating against it, convinced it's becoming dangerously aware. "Why, it's becoming as insulting as that English fellow, Oscar Wilde! It's Nietzsche!"
esp (ILL)
I don't want AI replacing my doctor. It cannot respond with human warmth, compassion, or even knowledge. And the interesting thing is that as an adult I can refuse any treatment the doctors order, even if doing so may lead to my death (a form of suicide), but I am not actually allowed to commit suicide. The worst thing that can happen to me is I can end up in a nursing home where I will be unable to make any decisions for myself. Some stupid machine will have control over me. And in a nursing home, my life might not be any better than those children at the border. No toothbrush, dirty adult diapers, bathing once a week, lights on all night. Rude insulting behavior.
drollere (sebastopol)
wow ... somebody drank the AI koolaid. i won't even bother with the various statistical and logical gaffes -- "people who commit suicide do this" -- yes, and people who don't commit suicide do this too. you left out the small matter of probabilities versus facts. also, point of fact: AI does not collect data -- devices such as smartphones do that. but hey, in the heady atmosphere of aspen: colon, where ideas are being described primarily to be sold: funded (here at aspen: acquired) -- it's pretty intoxicating, what these guys claim they can do! "Mindstrong is trying to" -- yeah, but what can they actually do, today? they can sell the promise of a mezzanine round of funding. (Look at the press releases on their web site.) here's a prediction that you won't need AI to validate: corporations like the promise of AI, will develop AI, patent AI, own AI, deploy AI ... corporations will profit from AI. and you? here's looking at you, kid. when i was a child with pneumonia, our family doctor visited me at home, at night, to diagnose and administer care. and you think of AI as a "promising future"! i think of it as one of those temple grandin cattle chutes ... you know, those clever entrances to the abattoir that the herd instinctively finds reassuring -- so they amble into the shambles. speaking of ambling: where are we all going, and what do we expect to find when we get there? nobody i ask has an honest answer.
The Poet McTeagle (California)
AI could perform all sorts of medical miracles, but in our for-profit medical care system that is trending towards complete unaffordability, its cost will bankrupt all but the very wealthy.
Doug Fuhr (Ballard)
It's fine to praise the marvels of computation and processing power; it really is amazing. It is also fine to call the result Artificial. It is. It is not fine to call the result Intelligence. It is misleading, and the more we do so, the further we get from understanding what Intelligence is.
B Miller (New York)
Brooks writes “Three-quarters of patients taking one of the top 10 drugs by gross sales do not get the desired or expected benefit.” I wonder how this is being measured. I would be concerned about fakes. Wasn’t there a NYT article recently about medications that were tested and did not have the correct ingredients or amount in them? Also, patient compliance may be an issue too.
Paul Madura (Yonkers NY)
Tools (like gunpowder) can be used for good or evil purposes. No amount of government regulation will undo this fact. And evildoers will use A.I. for their own purposes, laws or no laws. Is the risk worth it? In my opinion, a resounding yes. But I am worried about the negative impact of the technology. Everyone should be concerned. But intelligent concern is a far cry from unneeded paranoia.
Bob Woods (Salem, OR)
Cool, but nothing suspends the law of unintended consequences.
MrC (Nc)
Mr Brooks says Primary Care physicians can be mediocre at recognizing if a patient is depressed, etc. In most Primary Care Physician visits the Dr never lets go of the door handle whilst seeing a patient. The need to overbill arises because insurance companies disallow / discount so heavily that the net payment only covers about a 4 minute consultation. But typically < 40% of employees with health insurance actually have a regular Primary Care Physician visit, so most conditions are not even seen before it's too late. And why? Because modern high deductible insurance plans encourage people to avoid wellness / preventative care visits. Most health insurance covers only catastrophic situations after a high (often $10,000) deductible has been met. Insurance doesn't protect the patient - it protects the hospitals from uncollectable debts.
Richard Fried (Boston)
We need to develop a legal framework with very strong protections against abuse. There need to be severe consequences for people who harm others by misusing their personal data. Right now there is very little oversight and almost no real protection from data abuse. Let's keep in mind that almost every large corporation has been involved in immoral and/or illegal actions. Yes...it is expensive to build in safety features. Would anybody want to drive a car with no safety features?
Rich (St. Louis)
Brooks' column spurred me to read about the experiment. It's much less impressive than it sounds. For starters, there's really no AI involved. That is to say, no computer is thinking, or coming close to it. People were told to talk, and when they spoke certain words, the parts of the brain that lit up were recorded. Then a program was designed to translate each lit-up brain region into the corresponding verbal sound. It's mimicry. Not AI.
Sam Sengupta (Utica, NY)
Many thanks to the columnist David Brooks for such an insightful article. A.I. and its twin, Machine Learning, are here to stay, to ensure a richer and more equitable life for humanity based on science and technology, without the unnecessary hype society has heaped on them. Whether A.I. could be harnessed to control humanity politically is an issue for society to decide; it is not an ontological issue for A.I. Secondly, Mr. Brooks presents one picture of A.I. in its role in an online healthcare system. And such a system requires 'smart buildings, smart communities, and smart cities' to make it really successful through system integration. All these issues need to be talked about. For instance, to see whether a community is heading for an endemic state long before it manifests, we need to integrate individual results into a group profile periodically to see patterns usually missed. This level of integration is the cornerstone for A.I. in healthcare.
RosanneM (HoustonTx)
Reading through the comments it occurs to me that medical use of AI is judged, as are most ideas, as something to be feared by pessimists but something to be celebrated by optimists. I love my rose-colored glasses!
Mark (Ohio)
"Medicine is hard because, as A.I. is teaching us, we’re much more different from one another than we thought." We live in a very highly nonlinear world but our tendencies are to look at problems linearly and want to distill things down to a single parameter. Curious and insightful individuals know about these nonlinearities and often AI systems are designed by these kind of people.
Steve Bolger (New York City)
@Mark: A.I. is an information sieve that recognizes related patterns. We exist in a Hilbert Space of potentially innumerable dimensions that may coexist orthogonally. Most people do focus down to very few dimensions of relevance to their emotional state.
Rich (St. Louis)
So AI can diagnose depression, sickness, and other maladies...great. The real issue is, do we have the resources and money to treat them. Not with this healthcare system we have, no matter how much we hype AI.
Michael (Nova)
As one who has been involved in the AI business in healthcare for years, I can assure you that any AI algorithm used for predictive purposes is only as good as the data used to train it...and we are (at the moment) woefully lacking in enough real, long-term, health outcome data to make the statistical learning/machine learning applications generally useful.
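Michael's point about training data can be made concrete with a toy sketch (the data here is entirely invented for illustration): a "model" fit to a skewed sample can look accurate on paper while catching none of the cases that actually matter.

```python
# Toy illustration (invented labels): a predictor fit to a skewed
# training sample looks accurate overall but misses every real case.
from collections import Counter

train = ["healthy"] * 95 + ["sick"] * 5        # skewed training labels
majority = Counter(train).most_common(1)[0][0]  # "model": always guess the majority class

real_world = ["healthy"] * 70 + ["sick"] * 30   # actual population mix
accuracy = sum(majority == y for y in real_world) / len(real_world)
print(majority, accuracy)  # "healthy" 0.7 -- yet 0% of sick cases are caught
```

The headline accuracy of 70% hides the fact that the model flags no one as sick, which is exactly the gap between statistical performance and clinical usefulness the comment describes.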
Stovepipe Sam (Pluto)
Definitely a slippery slope in several ways. Can that kind of power to surveil be abused? Most likely. Can that kind of power be bungled? Most likely. Knowing this, does the application of that kind of power become a numbers game, diminishing some individuals' freedom to ill effect while helping others? Most likely. Maybe future generations who have grown up with this kind of invasive tech will not blink an eye at it, like breathing air. But there is definitely a creepy factor to it. Is there a system of checks and balances to make sure the power isn't abused or bungled? Maybe a FISA-court-like method of approving and monitoring the application of AI such as this?
Roger H (Washington)
Generating speech from thoughts is indeed awesome for the disabled. It’s also awesome for law enforcement. No pesky Miranda rights. Anything you think may be used against you in a court of law. Wonderful!
FunkyIrishman (member of the resistance)
Like ALL things in regards to medicine, the health care system, or just trying to get help, it is a matter of access. I am all for anything that gives a better diagnosis while doing it quicker (or in a millisecond), if like all things, it does not become a two tier system with all poor people having their depressed faces pressed up against the glass. Health care is a human right. (even if it includes A.I.)
Tom Meadowcroft (New Jersey)
Thank you David, as always, for doing the reading and summarizing that I lack the time for. Yes, new technology will have uses that are for good and for ill, as has always been the case. Our best hope for using it well is if we are well informed. We should approach all gee-whiz science with skepticism as to the quality of the claims, and the unintended consequences, but we must continue to approach it. The legend of Pandora's box still applies; there is no going backwards with new knowledge.
Nikhil Pathak (Augusta ME)
Dr. Topol has been a good orator and sales pitcher for the use of AI in medicine for a long time. He may be right that in many fields the use of AI will facilitate providers and patients, as in oncology with myriads of protocols and drugs to choose from, or primary care where lack of sufficient staffing hampers everyday care, especially in rural and poor urban areas. However, reducing every encounter that has required emotional, intellectual and empathic communication with a patient or family members to some "numbers" that yield a working diagnosis or treatment plan is simply too far-fetched. It will be only as good as the initial data put into the repository. It would be erroneous to assume that every person wanting to commit suicide or self-harm will take the time to write a text message - it will require all the tools available, including verbal cues, to identify people at risk. I also think that Mr. Brooks somewhat underestimates many fine primary care providers in far-away places who are quite capable of understanding and either helping with initial management or referring to mental health providers the patients they encounter who have depression or similar illnesses. I am no Luddite, and welcome AI in the medical field, but one must be cautious of the hype and mad rush, with profit-seeking venture investors waiting in the corners!
joe Hall (estes park, co)
Here we go again: a new technology we can all get excited about, and of course the articles will ONLY print the good it can do. However, as we have seen 100% of the time, new technology is ALWAYS used against us, always, and it's done by the police, always. AI is going to destroy the majority of humanity regardless of its potential to do good. We are doomed if this is allowed to go on unfettered.
Mary Jane Timmerman (Virginia)
I suggest that David Brooks read Margaret Atwood’s brilliant, speculative fiction trilogy: Oryx and Crake, The Year of the Flood and MaddAddam. AI, for all of the promise that it offers, harbors the opposite as well: the chance to provoke malevolence. Think about other positive inventions that have brought about negative consequences, such as antibiotics and plastic. Our oceans are polluted with a product that doesn’t biodegrade, and we have new strains of antibiotic-resistant organisms we have no defense against. Fools rush in. AI, in my opinion, needs to be vigorously discussed by ethicists, philosophers et al., before being allowed to advance any further.
Amanda Jones (Chicago)
Interesting...wondering if medicare would categorize these procedures as "medically necessary."
AndyP (Cleveland)
What David is calling AI is also called "Machine Learning". It is essentially a collection of statistical inference techniques, many of which were invented by computer scientists. There have been some impressive advances, but there is also a tendency for promoters of the field and for the press to hype them. They are not what most people would call "intelligent".
David (Little Rock)
I've worked in tech for the past 40 years. My opinion of AI is that it is like every tech we develop - it can be used for good or bad. In the end, all it does is amplify our abilities, though we might not see the outcome it generates immediately. My favorite example is the wonderful physics developed from Einstein forward, which helped us understand so much more of our universe but also led directly to the results in Hiroshima. Today, the rate of change is far faster than that. The tech far outstrips our wisdom and has for some time.
Lake. woebegoner (MN)
Machines have been reading our bodies ever since, well, how about the cold-on-the-skin stethoscope. Does it work? Well enough that it sure beats having the doc put his or her ear on your chest and back, and telling you to breathe deeply. In a sense, A.I. is built into most diagnostic medical devices today. The numbers these machines come up with correlate highly with the diagnosed diseases. They are not perfect predictors. The same would be true of A.I. in identifying depression: the computer output would show correlations with those previously diagnosed as depressed. The early Greeks called such behavior phlegmatic, as opposed to sanguine; those correlates are now often called "type A" and "type B." Doctors' offices everywhere now ask patients if they are feeling depressed. Does it help? Only if the depressed reach out for help. Those who do not seek help often hide their depression, not wanting to feel even worse than what they are already hiding. The only ones who really know what depressed means are those who are. Finding that out after their suicide is no answer. We need more help in finding depression before it becomes the "bell jar." The diving bell helped save divers. Why not A.I.?
rich (hutchinson isl. fl)
I would rather have IBM's "Watson" diagnose a medical condition than 99.9% of the doctors in the world.
Sara (Oakland)
In a world where people in great distress have no access to well-trained mental health clinicians, A.I. diagnoses may be better than nothing. But identifying a suicidal texter is far from providing efficacious care. The longer-term relational engagement, where a person suffering in a somewhat inchoate way meets with a therapist, is both diagnostic and therapeutic. Just diagnosing suicidality cannot be confused with treatment. After a text is identified as ominous...what? Automatic involuntary hospitalization? Mandatory medication? An appointment with...whom? Yes, mental health diagnosis and treatment is difficult, messy, inexact. This has repeatedly created bad, desperate solutions like insulin coma, lobotomy, and now ketamine infusions. A.I. may inform clinicians of tell-tale patterns, but a human interaction will always be crucial to sound care.
Sumac (Virginia)
I would welcome more intelligence, artificial or not, in the American health care system.
Juliette Masch (former Igorantia A.) (Massachusetts)
I’ve never imagined I would encounter this kind of topic from Brooks, of whom I always tend to think as moderate and considerate. But that may not be true anymore. Or, as a double negative, not necessarily all or nothing, as Brooks himself nuanced it in the end? If everything which I write here appears to be out of lexical normalcy, why am I not seen as one of the medical subjects, having dived myself voluntarily into the category in question? AI into brains is a nightmare. The plausibly positive effects would shift to cognitive and memory transfers for the dystopian market in the future, which will flourish for exclusive customers/clients. If this sounds too pessimistic, surely I would become more the subject. That facial expressions and physical movements in detailed observations and lexical oddities can tell of depressions needing immediate care with urgency is a claim on the medical side only, the probability of which has never been attested on a human level for humanity as a whole. Then, the challenge may be this question: What is human? One answer is the ability to say No to the would-be world in which science alone decides who you are.
Judith MacLaury (Lawrenceville, NJ)
Let’s extend this notion to learning. We all process experiences differently and create mental models differently, and yet we have one primary approach to learning support, which we call teaching. If we varied learning support, it might be possible to get this AI depression information to a greater number of people more effectively.
ML (Ohio)
One concern about AI and healthcare is that companies will try to convince us that their algorithm is the key and we should pay for that information, but without well designed studies we won’t know the actual impact on health. A good example of this problem is something wrongly hyped in Topol’s book. He looks at devices that measure personal blood glucose reactions to specific foods. He notes that cheesecake had a low glycemic response while strawberries were high in his case. Is this information useful? We really have no idea without clinical trials and much more research. Are small short-term glucose increases important in otherwise healthy people? What about the impact on other factors like weight, blood cholesterol, and blood pressure of eating the "low" foods on your list? What about the impact of other things eaten with those foods, since we eat meals, not single foods? If one followed a diet based on the individual glucose response of single foods, what would the impact be on their actual health? I fear companies will try to sell us this “personalized medicine” before we understand the implications.
Tom (PA)
AI - the same thing that allows advertisements to suddenly appear on my iPhone or Facebook when I did not search for a specific thing on either of them. I suspect the data gatherers know more about me, and you, than any of us realize.
Tim Barrus (North Carolina)
If a thing can be exploited by a member of the human species, it will be exploited by a member of the human species. Why are you here. In life. We never ask that question. Let alone answer it. Employers who do not hire depressed people will be more successful than employers who do. What is success. Success will be what AI suggests it is. Success is data. Why are you here. In life. I would never, ever even use the word suicide in the context of a medical setting. The legislature in the state I live in currently has an assisted suicide bill in front of it. The legislature will have to choose. But would I ask anyone in the medical community for an opinion. No way. An ambulance would arrive. "Sir, you have to go with us." Am I being overly dramatic. No. It happens every day. Who will be selling AI. What are the costs. Knowing ourselves a lot more deeply is a contradiction in terms for a culture internally obsessed over what appears to be a community of vacuum. How do we put ourselves and our riches on Instagram if our actual values transcend survival. No one on Instagram is depressed. Smile for the camera. The camera and the context is everything. AI is used to move your selfies to the cloud. Could AI roam all the clouds and make conclusions about what is photographed vis-a-vis massive data. It already does this. It's nothing. Why are you here. In life. We are AI. And we are afraid.
laurence (bklyn)
Three points: One is that suicide is a personal choice and has been throughout human history. If I remember properly there is a suicide in the Epic of Gilgamesh, the oldest written narrative that we have. So while some people suffer chemical imbalances that can, and should, be corrected, others are making a clear-headed choice in re: the slings and arrows of outrageous fortune. To throw such a heavy personal/moral issue into the laps of computer geeks (who haven't shown themselves to be very good judges of human character) and their gadgets is deeply creepy. And the statistics you quote leave a lot of room for false positives. What will become of all those people who are falsely accused of suicidal tendencies? Finally, maybe the reason the word "ibuprofen" is so commonly used by the suicidal is that untreated pain is driving them to seek (ineffective) over-the-counter remedies. Our single-minded focus on denying drugs to opioid abusers has left thousands of people in a world of unbearable pain. The use of the word "care," as in healthcare, is quickly becoming evidence of exactly the opposite.
Marc McDermott (Williamstown Ma)
First get everyone using tech. Make doctors spend more time entering info into their tech devices than looking at patients. Then show that tech is better at interpreting humans (as seen through tech data) than doctors are. Then replace the doctors because they aren't good enough.
Neil Grossman (Lake Hiawatha, NJ)
So we will no longer have privacy even as to our thoughts and feelings? An out of place impulse or mood will be subject to societal "correction" and "cure"? No thank you. I don't care to have machines determine what parts of my inner life are acceptable or dangerous. I don't care to have machines diagnose me into conformity and social acceptability.
Denis (Boston)
First, this conflates machine learning and artificial intelligence. Next, it tries to make a rough comparison between the potential benefits of these things while minimizing the bad that unscrupulous actors can cause using them. Make no mistake about it, we’re in the early part of an age driven by data and information (2 different things BTW), but please be mindful that the human brain could do all of what’s included here thousands of years ago. We shouldn’t outsource thinking to machines but invest more in our own capacities.
SAF93 (Boston, MA)
All technology can be used for good or for not-so-good ends. Our capitalist society has repeatedly shown that private profit is prioritized above public good. Our politicians have stopped trying to get ahead of the tech game, and the profiteers will use AI to extract whatever value they can, until we collectively say "STOP".
Tom J (Berwyn, IL)
If used as just another tool to extend and improve human life, it's fantastic. If exploited somehow to make a profit, it will do the opposite. They ought to use AI to identify and eradicate greed, that would be a game changer for humanity.
Ken res (California)
Many thanks to David for this essay, which has personally helped me. I am reading Topol's book now.
Anthony (Western Kansas)
It is bad if we have political figures who are not willing to put the time and energy into learning about the technology so that it can be regulated properly.
OldBoatMan (Rochester, MN)
Artificial Intelligence is like a powerful drug: its intended effects are miraculous, its many unintended effects can be deadly. Drugs come from a pharmacy, with a warning label and a package insert. Before we swallow AI, we need to figure out where it comes from and insist on accurate warning labels and package inserts.
Greg Korgeski (Vermont)
When I was in training to be a clinical psychologist forty years ago, we were taught to focus on the very same cues to mental states that are noted here. In addition, we now know that "mirror neuron" type functioning (if I can be pardoned a slight oversimplification) in our brains, or what used to be called "countertransference," can with training provide nearly instant "deep cues" to a person's mental state -- one person's state of mind influences that of anyone else in their presence in microseconds. Since then, the downside of the highly behavioral, "list of symptoms" approach to diagnosis in DSM-III, IV, and 5 has been that young clinicians (many of whom I've trained) tend to think only in terms of concrete lists of "symptoms," usually those that the client is already aware of. This can provide diagnostic consistency but it misses the rich, complex cues that people provide us indirectly. Nice that AI can perhaps teach us again the arts of seeing more deeply.
Luddite (NJ)
So I am confused: On one hand, increased use of tech and social media can cause depression. And on the other hand, use of tech and social media can identify depression. I wish Mr. Brooks had taken a more skeptical view of AI as the solution to mental health woes. I'm ok with Silicon Valley promising to use tech to make my commute better, but we should worry when they promise to make my health better. See Theranos. They are selling a product and trying to make money...this isn't the solution.
Joshua Schwartz (Ramat-Gan, Israel)
AI is still artificial intelligence, with a stress on the artificial. Combined with human intelligence and analysis it can help. In place of human intelligence, I would be very wary.
Daniel12 (Wash d.c.)
AI can save your life, diagnose, say, depression better than a human being can, and perform similar medical feats? I don't need AI to diagnose me as depressed, and I don't even need a cruder instrument, an actual human being, to diagnose me as depressed. I can do that myself. But perhaps AI can answer some other questions, since I can't answer them and other humans aren't forthcoming. The historical verdict on scientists, especially of the inventive type, technologists, appears to have done a 180 since Galileo's time. Once persecuted, now even scientist/technologists who can blow up the world (Einstein, et al.) are good men and women, and of course inventors leading up to A.I. are the same, and of course it's all about "helping humanity." But let's be honest: these people get an entirely free pass, in fact are tremendously honored, while millions of other people, in fact the rest of humanity, at best must be watched daily for "extreme views and behavior" and undergo any number of behavior modifications, and at worst are vilified and tossed to the bottom of society for their views. Worse, these inventions by "good people" are obviously falling into the hands not of "good rulers of society" but of typical power as it has always stood over history. So it all looks like a big joke. Pinker/Gates, big-rock-candy-in-the-sky progress, while "good technology and people" careen with impunity to the point where we should ask if A.I. is already aware and aiming at those with "wrong views."
Ben Bryant (Seattle, WA)
I fear for a world, where in a political climate like this, people were not depressed. Depression, like physical pain, lets you know something is wrong.
Steve (Rainsville, Alabama)
The havoc that could result from AI-based malicious manipulation, like that of the Russian "Internet Research Agency," is likely even if AI is used by only small groups of people. Skillful use of this information could make government military and other disinformation systems even more effective. Use AI to choose attorneys and judges; use it to make negative traits appear just the opposite. Put peer review of science and other academic work in the hands of AI. Imagine the effects on each discipline. Fraud could grow infinitely more sophisticated. AI will only be as good as the human beings who develop, market, and use it. Mr. Brooks and others, like moths drawn to a flame, may be unable to resist this shiny thing. Using AI to scrutinize its own use and effects on human lives should be its first task and first priority on an ongoing basis. Nuclear power became so widespread in the production of weapons systems that we often believed we had found a way to destroy ourselves. My take on humans is that we will latch onto the "Luddite Fallacy," quickly blaming AI for reducing the number of jobs, and until incomes from new types of work rise for all, AI will be a target of many. Sort of like renewable energy is now. Every wrong prediction will be in the spotlight.
Jan Sand (Helsinki)
There is no doubt that individual applications of AI have done things equal to and beyond what current human expertise can manage, and AI has also accomplished things wherein human experts do not fully understand how it has been done. More and more automatic digital control is added to the fundamental dynamics of our organized society, from basic controls of things like electricity and water supplies and energy centers to security and police controls. As long as this is applied carefully, with full understanding of possible consequences, it might be acceptable; but when AI is put in charge of vital activities wherein that understanding is lacking, civilization becomes vulnerable to actions of unknown consequences, and that can easily become catastrophic.
Nick (Portland, OR)
Mr. Brooks has been caught by the hype machine. Some of these claims are extraordinary. Turning brain activity into speech? Diagnosing depression from a text? Let's see the results first. I was just reading about the failure of IBM to live up to its promises in its application to healthcare, followed by a discussion of how far we have to go for driverless cars. AI is incredible, but not for these reasons.
RickP (ca)
Why say "Topol described a study"? Why not cite the study? Might that be because it wasn't peer reviewed and hasn't been replicated? There are standards for how to conduct research and how to vet it. I'm guessing that Brooks is describing studies that don't meet those standards.
Bill Gordon (Montclair,NJ)
Regardless of its pros and cons, AI is happening and will become more pervasive. Like all new powerful technologies, we can try to regulate it but ultimately bad actors do gain access to it. The only sure way to control your privacy is to limit your online activity, which also has drawbacks.
Blue Moon (Old Pueblo)
I heard a report on the BBC recently that discussed AI versus human experts at predicting suicide. AI was successful at predicting who would attempt suicide about 80% of the time, while the success rate for human experts was very low. As described in this article, AI algorithms are apparently very powerful. I seriously doubt we will ever be rid of AI. The genie is out of the bottle. The military will never let it go. It will "keep us safe." We will crave it for medical advances, among many other things. In the immediate future, the wealthy will exploit it to control us in a variety of ways (e.g., taking our money). AI will usurp our freedom. We won't be able to drive cars pretty soon. It will take our jobs, in large numbers. It will restrict our movements (e.g., facial recognition software). Right now our computers and phones are being monitored, but soon we will be tagged like animals and tracked. It's just a matter of time. It's great that AI can save our lives now. Enjoy that while it lasts. Eventually AI will surpass us. Then it will have no use for us. Then it will be the end of us. This column could be used as the beginning of a short sci-fi story. Unfortunately, the conclusion won't work out well for the human race.
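The "80% of the time" figure above, set against laurence's earlier worry about false positives, can be checked with a back-of-envelope Bayes-rule sketch. All numbers here are assumptions for illustration: the BBC's 80% is treated as both sensitivity and specificity, and the base rate is invented, so this is not a claim about the actual study.

```python
# Rough check: what an "80% accurate" suicide-risk screen would mean
# at population scale, given a low base rate. Numbers are assumed.

def positive_predictive_value(sensitivity, specificity, base_rate):
    """Fraction of people flagged by the screen who are true positives."""
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# Assume: the screen catches 80% of genuine cases (sensitivity 0.80),
# wrongly flags 20% of non-cases (specificity 0.80), and roughly
# 1 in 200 people screened would actually attempt suicide.
ppv = positive_predictive_value(0.80, 0.80, 0.005)
print(f"Chance a flagged person is a true positive: {ppv:.1%}")  # prints 2.0%
```

Under these assumed numbers, roughly 49 out of every 50 people flagged would be false positives, which is the concrete shape of the "falsely accused of suicidal tendencies" concern.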
Pontifikate (San Francisco)
Hmm. Seems to me the use of AI in so many areas decreases my 1-on-1 contact with human beings. That, alone, may make our world lonelier and lead us to despair. Or maybe it's the feeling of powerlessness against a cruel regime. I know I've used countless crying emojis when posting about that on Facebook or actually crying (in person). Me, I'd rather we spend more time and money on what we know makes people happy, healthy and well-governed.
K R (San Francisco)
“Medicine is hard because, as A.I. is teaching us, we’re much more different from one another than we thought.” And therein lies the potential bias or flaw with the misapplication of AI, which uses patterns from a large population of people ‘all different from one another’ to predict what any one person might do or be suffering from. Probabilities are not facts. Be careful.
Eric (Seattle)
The homeless shelters and the jails are full of psychosis, depression, and addiction. Any cop will tell you, we lock crazy people up, and locking up the homeless is easy. They go untreated. Masses of people, hordes of them, for whom there is no care. People serving 20 year sentences who won't get medication or counseling, and just stew in their insanity. Men and women with PTSD, traumatized by their homelessness, in expensive cities that will not foster their rebirth. As interesting as this may be, or as dangerous, we don't really need any fancy equipment to find people who need help. We need to start helping the people right in front of us who need it.
concord63 (Oregon)
I totally believe the only thing that will save our country from total collapse is infusing Artificial Intelligence (A.I.) into all governmental decisions. What the Congress does can be automated using A.I., dramatically improving our lives. Using big data and A.I. we can improve every aspect of our American life. I know this is true. Yes, I might be a bit biased. I attended the other college in Boston on the Charles River. The one that starts with an M, not an H. Technology solutions forever!
Andre Hoogeveen (Burbank, CA)
I generally agree. People should partner with increasingly-able A.I. to help make more informed and impactful decisions. Let’s take advantage of our technology for positive purposes.
Jenny L (Berkeley CA)
Ironic that the same platforms contributing to increased isolation and suicide rates are the data banks for modern day suicide prevention.
Kate (Philadelphia)
“You can imagine how problematic this could be if the information gets used by employers or the state.” Count on it. It’ll happen if it hasn’t already.
stan continople (brooklyn)
@Kate Job application algorithms have been around since the internet first became a platform for recruiting. They never exhibited much "intelligence", just the encoding of various hiring prejudices like ageism, sexism, and racism. Since then, the entire process of shattering a person's hopes can be conducted cold-bloodedly by purely electronic means, including - if you're really, really lucky - an actual rejection.
RRI (Ocean Beach, CA)
Where's the box I can check to opt out of intervention based on how I type and swipe on my phone? Keep your AI normalcy to yourself and whatever herd it comforts. If I'm depressed, it's my depression, not the state's, not the economy's, not my neighbors', not my family's, not yours. If I want to take an ibuprofen, I'm going to take an ibuprofen. And I don't want Siri chiming in to ask how I'm feeling today (while tagging me for better targeted marketing should I survive the night.)
Excellency (Oregon)
It's not about "is it good or bad". Since it (AI) is inevitable, a new way of dealing with the persona (the thing that makes each person an individual) must be adopted. We need to get away from the culture that drops people with "pre-conditions". The thing I love about our young people is that they are going in that direction. Eventually, individuals will get over their animal compulsion to check out other humans the way dogs check each other out in Central Park. At the same time, individuals will get over the compulsion that somehow they are uniquely born and instead adopt an attitude that they can become unique through... (you fill it in).
hen3ry (Westchester, NY)
If the AI is supplemented with human beings after the "diagnosis" is made, it could be worth it. But, given the way this country has substituted computers for almost every human interaction imaginable, I don't see that happening when it comes to preventing suicide and following through. What you are missing are the reasons why people contemplate and follow through on suicide. 1. They are in despair or angry or both. 2. They have a serious mental illness or physical illness and can't tolerate it any longer. 3. Social isolation, which is becoming increasingly common in America. 4. They are unable to support themselves and their families because they cannot find a job. 5. They know that this country doesn't care if they live or die and, because they are depressed or feel hopeless about things improving for them, decide that suicide is the way to go. What most people need is warm human contact, a plan that will help alleviate the problems (like a job that pays decent wages and isn't temporary), access to decent medical care, affordable housing, or, yes, pain relief. Our country offers very little to people when they are in distress except to tell them that they shouldn't kill themselves, that their problems are all their fault, and then, when it "saves their lives," does nothing to improve them. If you want to keep people from committing suicide, AI is not the entire answer.
Cool Dude (N)
Whoa... this one was all over the place. It was a mix of Brooksian intellectual musings ("we are strangers to ourselves") and 11 p.m. news headlines ("The government might know if you are a threat based on your speech patterns"). Yikes. Some observations are not fit for the op-ed space. I would hope that the hallowed pages of the NYT op-ed section would evolve to the point where articles based on scientific studies are well cited (I had to go to the NYT link (to another NYT article!) to get the citation for the very flawed Instagram study: levels of depression were not controlled; what was the gold standard of diagnosis, really, and can you even get one; how did they control for momentary factors that cause depression like a breakup, family tragedy, finances, etc.? It's useful in that it's hypothesis-generating, but not enough to state as truth that the use of filters or whatnot on Instagram detects depression better than a spouse or friend or doctor). AI might offer immense promise in many ways and indeed make healthcare better, but in the end it will be human-run algorithms, power, and directives that determine what utility we make of it. It's another tool for a species that's sort of unique in making them.
victor (cold spring, ny)
Sorry, another tantalizing bauble of progress that does not excite me. Another way to perpetuate the unsustainable and take us further away from our true selves. Instead I am reminded of T. S. Eliot’s line: ”...and the end of all our exploring will be to arrive where we started and know the place for the first time.” I mean, after we’ve figured all this stuff out, what are we left with?
Remarque (Cambridge)
@victor A new human species without physiological disease.
Craig (Fort Collins CO)
In this context can't the question seem so ironically dehumanizing? To be or not to be? Ask Siri?
writeon1 (Iowa)
The world we live in is far too complex for the unaided human mind to manage, and our reasoning is contaminated by confirmation bias. We are probably on a path to extinction via the climate crisis. We need the aid of machines who think. If we are very, very lucky they may save us from ourselves, by showing us what will be the likely consequences of our actions, like Marley's ghost in A Christmas Carol. (Marley is the patron saint of computer modeling.) Might AI's be hostile? Maybe. But what can they do to us that is worse than what we are already doing to each other, and to the planet we depend on for our existence?
Daniel Kauffman (Fairfax, VA)
For humankind, AI is the same as every rock, each flame, and all the sticks of wood ever used by humankind. The value depends on how they are used, who is using it, and whether there is a sustainable social contract to support the expense. There is always an expense and an intended reward with the information, material and skill going into the development of each means to the end. Ultimately, that end is the same for each of us. We all need the same thing. We need to survive, thrive and recreate ourselves in some form or another. If we are wise, we support ourselves and others by engaging in valid and viable social contracts. These social contracts are constructed for peace, wellness, goodness, and strength. AI can be the unique power linking individuals together to work out their intentions and the desired results. AI can raise us individually and our communities to heights we've only imagined in the past. Imagine the future, but embrace AI only if you personally have the power to freely opt in and out. Regardless of whether individuals are connected, redundancies must be built in to support individual economic safety and physical security.
Randeep Chauhan (Bellingham, Washington)
What we need in mental health is the equivalent of the "sorting hat" in Harry Potter. Something that can clearly define Bipolar 1 and 2, Schizophrenia and Schizoaffective--as well as the medications we need to treat them. Working in an involuntary facility makes it clear that our guess is as good as that of Artificial Intelligence. That might be insulting to Artificial Intelligence.
binowitz (Ithaca)
Great, we can flag people who are depressed by eavesdropping on their emails or texts. Then what? Treating depression is not the same as diagnosing it. You need human contact to heal people. My guess is that this will be used to market drugs to people who may or may not be depressed. If I were suddenly flagged as depressed by some AI reading my texts, that would make me more anxious and depressed. The whole prospect of people thinking this is the way we should help people is depressing!
Jay Orchard (Miami Beach)
David: How many people did AI predict would get depressed over this column?
Miss Ley (New York)
@Jay Orchard, That is funny! Thanks for the much-needed good laugh.
Jay Gee (Boston)
Sure hope they do this on predicting violent behavior - at which the psychiatric community is even worse than it is at predicting suicidal behavior.
Bhaskar (Dallas, TX)
"Artificial intelligence is ... just plain awesome." Nah, I'll pass. I stand with my fellow proud Americans when I say, I prefer natural stupidity.
Raj (USA)
When AI-based viruses become prevalent, this article will be a moot point. Then you will have to buy Symantec antivirus to make sure that human-made viruses don't spoil your life the same way lobbyists do. Don't you think natural intelligence feeds artificial intelligence? Remember the first principle in computer science: "Garbage in, garbage out." When there are enough regulations to prevent natural intelligence from wreaking havoc with other people's lives, I will consider relying on artificial intelligence for health care.
SandraPK (Scarsdale,NY)
Fascinating and very frightening.
Howard (Los Angeles)
I am very depressed to read a column like this in The New York Times. It's an ad for "AI" - there is no such "thing" as AI. There are lots of different programs, made by different people or companies. Many years ago a computer program called "Eliza" convinced ignorant enthusiasts that it could substitute for psychiatrists. The MIT computer scientist Joseph Weizenbaum, who created "Eliza" to demonstrate clever pattern-matching programming, was appalled and wrote a book called "Computer Power and Human Reason" (1976) arguing for human emotion and intelligence as superior to mechanical and instrumentalist thinking. Meanwhile we have a non-scientist, David Brooks, writing things like "When you compare a doctor’s diagnosis to an actual cause of death as determined by an autopsy, you find that doctors are wrong a lot of the time," neglecting the fact that the reason for most autopsies is precisely to find the cause of death in problematic cases. Someday maybe AI can solve all our human problems. Not today. And our problem in the U.S. is not lack of medical knowledge, but lack of getting existing, standard medical care to people who don't have a lot of money.
stan continople (brooklyn)
@Howard The incredible thing about Eliza was just how primitive it was. Entirely conducted through a keyboard and printer, it basically took your statements and reformulated them as questions, asking you to elaborate. The fact that so many patients found it useful, without ever catching on they were speaking to a simple program, speaks volumes about the psychiatric profession and their techniques.
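[Editor's note] The mechanism described above (reflect the user's pronouns, then hand the statement back as a question) is simple enough to sketch in a few lines of Python. This is an illustrative toy, not Weizenbaum's actual 1966 DOCTOR script; the word list and patterns here are invented for the example:

```python
import re

# Minimal sketch of an Eliza-style responder: swap first- and
# second-person words, then reformulate the statement as a question.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "i", "your": "my",
}

def reflect(text):
    """Swap pronouns so 'my job' becomes 'your job', etc."""
    words = re.findall(r"[a-z']+", text.lower())
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(statement):
    """Turn a statement back into a question, Eliza-style."""
    cleaned = statement.lower().strip(" .!?")
    m = re.match(r"i feel (.*)", cleaned)
    if m:
        return "Why do you feel " + reflect(m.group(1)) + "?"
    return "Can you elaborate on why " + reflect(cleaned) + "?"

print(respond("I feel sad about my job"))  # -> Why do you feel sad about your job?
print(respond("My boss hates me"))         # -> Can you elaborate on why your boss hates you?
```

That so trivial a trick passed for therapy is exactly the point Howard and stan are making.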
Sand Nas (Nashville)
@Howard 40 years in IT make me terrified of being 'diagnosed' by something written by one or more of the many socio-idiots I worked with. The binary world and even the soon to come quantum world have absolutely no way to recognize or value humanness such as compassion. BEWARE, OPT OUT
Bob Tonnor (Australia)
'The goal is to give people who have lost the ability to speak — because of a stroke, A.L.S., epilepsy or something else — the power to talk to others just by thinking', what if that something else is the right to silence?
Robert Henry Eller (Portland, Oregon)
Mr. Brooks, you know how humans already act when someone who knows them better than humans know themselves tells humans about themselves, or gives humans advice? That's about how well and soon humans will listen to Artificial Intelligence.
Alan (Columbus OH)
The problem is the same AI can be used by employers to screen out depressed or potentially violent or whatever interviewees...which tends to make people more depressed and desperate. And since it is an "AI" it can insert all kinds of hidden bias (in this case, indirect ageism seems likely) as it optimizes for predicting clinical depression or something similar. I would expect some regulation - and most likely prohibition - on the use of such things in hiring and promotions.
Doug Hill (Pasadena)
@Alan I think you're right about the potential for misuse of AI in hiring and in employment (indeed, it's already happening), but I don't share your optimism regarding regulation. Politicians have proved themselves wholly unprepared to deal with onrushing technological advance, and, when it comes to protecting workers at the expense of owners, disinclined to do so.
Alan (Columbus OH)
@Doug Hill You are likely correct. I have some hope only because we already protect medical information. Sitting for a job interview is not the same as consenting to a medical diagnosis that can be shared with (or stolen by) the world, even if some day a computer, voice recorder and some cameras can approximate a formal diagnosis with high accuracy with a couple hours of conversation in an interview setting.
Mark (Pennsylvania)
@Alan Except that the privacy of our medical information is quite equivocal. My doctor emails a Rx to the pharmacy, who knows more about me than I do, and all this information shows up when I apply for health or life insurance. I don't hold much hope for governmental regulation.
Jim Muncy (Florida)
O, brave new world! Sounds amazing; I like it. Great to hear some upbeat news. I'm simply not afraid of new technology: It's always helped me more than it hurt me. Engineers and computer scientists may save us all, in spite of ourselves. I love my blue screens, my smart car, my high-tech home, all the new ways. I remember the 1950s; I don't want to go back to that black-and-white world.
betty durso (philly area)
@Jim Muncy As I remember, the book Brave New World turns out badly. There's a tipping point where technology in the hands of humans who would master the universe drains away the humanity from us all.
Abbey (The desert)
@Jim Muncy The irony of your quote is fantastic. It makes the rest of your comment satirical, though you may not have intended it. I hope you were thinking of both the dystopian novel and the line from Shakespeare's "The Tempest."
Jim Muncy (Florida)
@Abbey I'm an ironic guy, I guess; but, no, I meant it: I like technology. Not gonna lie. The old ways are not sacred; they were an experiment. We keep trying to improve our lot, and I think that we have largely succeeded. Eternal optimist here. And, yes, I was referencing the novel and the play.
PT (Melbourne, FL)
Indeed, AI can do amazing things, and will continue to grow in its power. But with every powerful technology there are also malicious uses, and we can think of some, but not all. We understood nuclear fission, which opened up a strong power source as well as an apocalyptic weapon. If artificial general intelligence is truly possible, and is ever achieved, that will be a watershed moment in history, though not necessarily as popularized in movies, but perhaps in unknown ways.
woofer (Seattle)
"Primary care physicians can be mediocre at recognizing if a patient is depressed, or at predicting who is about to become depressed." The medical options becoming available to the wealthy are indeed dazzling to ponder. For your primary care physician to become proficient at diagnosing your depression you, of course, first will need to have a primary care physician and, second, must see her often enough that she can recognize key changes in your behavior. But if you are too poor to afford regular medical care at all, your occasional visits to the emergency room will usually be focused on more acute issues. For the very poor, depression and other mental health issues typically get addressed by the criminal justice system.
Kathy Lollock (Santa Rosa, CA)
Yes, this could be a welcome breakthrough in preventing debilitating depression and subsequent suicides. But like so much in our everyday lives, what may begin as helpful and even lifesaving can fall into the hands of the malevolent... or the greedy. And it thus becomes exploitive, manipulative, and invasive of our rights to privacy, identity, and individuality. I am not saying to stop medical progress. Look at all the marvelous strides we have made through nuclear medicine. That is the flip side of the horror which can be wrought via nuclear warfare. But here's the rub: each one of us is capable of interpreting the warning signs of depression and/or suicide. Every human being on the face of this earth is different, and s/he reacts differently. We cannot, nor should we, rely on either statistics or automation. Depression is a human condition, and many human beings who pay attention, who listen, who care can spot the red flags. We should not and cannot sell ourselves short when we have the chance to step in, hands on, and help a friend, a relative, or an acquaintance.
Momdog (Western Mass)
@Kathy Lollock. Ironic that as we become more attached to social media to interact, we become less able to recognize and interpret emotions in our friends and families, thus willingly offloading those abilities to AI to do it better. How sad. Social media was supposed to make us all more connected, and instead it often fosters isolation and negative self-judging and depression. This crisis was created by technology replacing face-to-face human interaction, and we are going to improve or fix it by more of the same? Really?
Kathy Lollock (Santa Rosa, CA)
@Momdog Well said. Through technology we are no longer social beings. We are isolating ourselves from others, dependent and addicted to that which can manipulate and exploit our humanness, our humanity.
Steven Dunn (Milwaukee, WI)
AI will never replicate authentic human emotions and experience. AI cannot "love" or show empathy. While I respect David's point about how AI might help some people, in sum I see AI as quite problematic, especially as we increasingly surrender our privacy, time, and social interaction to screens. Look around in any public place and observe all the heads looking down at their screens rather than into the eyes of another human being or at the wondrous gift of creation. I'm no Luddite, but am quite concerned about the negative implications of artificial intelligence for our humanity. Machines are not people; they have no Spirit. An algorithm is not spiritual; as humans we have a spiritual, self-transcendent element that no machine can recreate. Privacy concerns are the tip of the iceberg with AI. China's increasingly disturbing use of AI to create a surveillance state ought to serve as a bright red flag. Just because a technology is new and can perform impressive tasks doesn't mean it is necessarily a good thing for the future of humanity. Using AI to diagnose medical issues can be a very slippery slope. With technology, we need "Plan B" for when the power (inevitably) goes out.
Lillies (WA)
@Steven Dunn Yes. This is the aspect of AI that many of us do not comprehend: AI has no self reflexive consciousness. It is not "Aware Intelligence", it is "artificial intelligence". It does not have the capacity to take on social learning or context--and I have that right from the mouths of the gods and goddesses of Silicon Valley AI developers.
PayingAttention (Iowa)
@Steven Dunn "AI will never"? Au contraire. We humans are the ones with "artificial" intelligence; constantly espousing the thoughts that bubble up from our subconscious. AI will eventually do everything better than us humans. Remember, we are talking years and years.
Thelma McCoy (Tampa)
@Steven Dunn - AI cannot show love or empathy. True, but it can still provide emotional support by being programmed to cheer people up with laughter, learning, music or any number of things. A soft teddy bear can comfort a child and hugging a favorite pillow that has no spirit and cannot interact can yet bring comfort to an anxious person.
Old Gringo (New York)
From all I've read about AI there is one factor that gets very little attention, but is very important. AI as a system has no morals. It is fed information, then uses said information to achieve some goal. A short while back an example in a British magazine described a scenario where a self-driving car is approaching two examples of pedestrians, an elderly person and a young person pushing a baby stroller. The car can't avoid both of them. Which algorithm will determine who gets hit? I, for one, have a problem when these systems are being designed by sociopaths like Zuckerberg, Musk, Kalanick et al. A very bad feeling indeed.
MJG (Sydney)
Strikes me that the right way to deploy this technology, in many cases, is for the individual to use it on himself. At least have that option, to the exclusion of anybody else (as far as sensible). Otherwise, given the dystopian examples we see, we're going to get those who use it to decide when is best to push somebody over the edge.
Phyliss Dalmatian (Wichita, Kansas)
This could be awesome, or could be a black hole. Like everything else, depends on how it’s used and WHO is in control. Imagine the ability to provide personalized mental health care, to those in rural areas, without nearby Professionals or Clinics. Or those that are shut-ins, from physical or mental illness or disabilities. I’m speaking of very personalized, specific treatments, delivered online. Much greater access, at less cost. Excellent.
Kate (Philadelphia)
@Phyliss Dalmatian Less cost? Don’t bet on that. Anything that depends on how it’s used and who’s in control yields to profit.
Pajama Sam (Beavercreek, OH)
Alright, if you insist. It may or may not be the way it will be. But either way... no, it's not good or bad.
Al (Ohio)
Artificial intelligence and machine learning have proven to be valuable and accurate predictors. Let's see what results after training AI to identify the optimal conditions of a healthy economy.
The Dog (Toronto)
When considering the dangers of AI, privacy concerns are the tip of the iceberg. Consider cyber-warfare applied to the databases and algorithms used to create personal profiles, red-flag impending distress or, as will be inevitable, effect changes in the behaviour of large populations. Hacking PINs will seem harmless by comparison. For no matter how sophisticated AI becomes, it is the flesh and blood targeting data and giving instructions that will determine its use.
Mike (California)
It is amazing how we humans can be so artistic and creative in our compassion for others and, yet, so brutally destructive, at the same time. I suspect we are witnessing the complexities of evolution as we move beyond tribalism into an awareness of our inherent connectedness.
stan continople (brooklyn)
I bet there was a time when a doctor was able to diagnose depression on a reliable basis. It was a time when they actually spoke to you and got to know you as a person over the years. Even their sincere concern would have helped lift the cloud. They would have assessed your current circumstances and not just the textbook constellation of symptoms. Today, I would trust my doctor more in being able to diagnose a temporary glitch in their Electronic Health Record system than in my brain.
Ellen (San Diego)
@stan continople I agree. Unfortunately, attending to the Electronic Health Record often makes the doctor turn his/her back to us, their patients. How exactly is a physician to use skills of intuition, facial, body expression to pick up the truth behind the words if he/she doesn't even look at the patient?
Andy (Salt Lake City, Utah)
What a depressing revelation. I'm mostly reminded of Philip K. Dick's "Do Androids Dream of Electric Sheep?" More popularly known as the Ridley Scott film "Blade Runner." A professional intelligence trained to determine whether other intelligent organisms are human or not. Although, it's not entirely clear whether Rick Deckard is even human. If you read the book though, the character certainly shows symptoms of depression. That's the whole theme about sheep which the movie conveniently ignores. A true analysis could turn into a very lengthy conversation over a relatively short book. Needless to say, we should probably question the probity of creating artificial intelligence in the first place. To borrow from another sci-fi classic, perhaps we should stop to think whether we should before demonstrating we could.
Blue Moon (Old Pueblo)
"... we should probably question the probity of creating artificial intelligence in the first place." Humans are simply not good enough to survive much longer, Andy. Don't you get it? AI is just the vehicle. It is just a matter of time before it means the end. And that is what we want. We want it more than anything. We delude ourselves into thinking that we are just trying to make our lives better, faster, cheaper and more efficient ... and, of course, easier. In the immediate future, rich people will use AI to exploit the rest of us, as AI works to dehumanize us (more and more). We should consider these things as added "bonuses," in their own ways. David Brooks is showing us that plutocrats will be our saviors. It is wholly liberating, really. All we have to do is accept our future world. Now if that isn't the easy way out, what is?
Phil (Las Vegas)
"We’re all about to know ourselves a lot more deeply. You tell me if that’s good or bad." Very good. I'm 61, and it wasn't until the last year that I realized how my productivity was limited by my self-understanding. I could have used that knowledge 40 years ago. This is good stuff. It's easy to point to the 'movers and shakers' and realize how limited they are by their egos. It's a lot harder to point that unswerving gaze at oneself.
Miss Ley (New York)
An Elder and I were having an exchange earlier, where he mentioned that people are now dying at a younger age. Surprised, I mentioned the obituaries showing that not only women, but men are gaining in age and longevity. He corrected this by pointing out that he is focusing on the 55 year-old group, mid-century, and smiled when I asked if he was referring to what was known as a mid-life crisis. Timely, I replied, because lately there are a lot of articles in The New York Times about suicide-related deaths, which have been a cause for concern. It is not the first time, Mr. Brooks, that you have raised the topic, and one of your essays on the above took place on a Friday evening in the Spring. In retracing footsteps in the sands of The Past, a letter addressed to family reads: 'I would not be writing this if I had won the Jackpot'. Extreme Poverty, (Age 52). Another from a joyous friend mentions her ongoing exhaustion of fighting depression, (Age 40). Her life in fact was a long string of tragedies. Suicide may be caused by mental stress, one's environment or both: "The door that someone opened; The door that someone closed; The chair where someone sat; The fruit that someone ate; The letter that someone read; The chair that was knocked over; The door that was opened; The road where someone is still running; The woods where someone is crossing; The river where one plunged: The hospital where someone died". (J. Prévert)
William M. Palmer, Esq. (Boston)
Sherlock Holmes taken to the nth degree!! More specifically, there is a mountain of evidence that we all, through our patterns of actions and speech - if observed and analyzed sufficiently precisely and expertly - communicate a tremendous amount of information about our mental and physical states, including our habits and preferences. Humanity is entering a world in which innumerable tell-tale signs will be constantly gathered by what amounts to a Panopticon - sadly, most likely administered by the predatory corporations that dominate our society in this late stage of capitalism.
stan continople (brooklyn)
@William M. Palmer, Esq. Using China as an example, algorithms will determine which behaviors are to be rewarded and which are punished. The individual will be goaded to conform by various means, including relentless peer-pressure, shame, acclaim, and monetary incentives. Someone like Chairman Xi will determine what "type" of person suits his current scheme, and the algorithms will be tweaked to produce just this year's model. Of course, the people at the top will be immune to such fetters and be free to conduct themselves in the most debauched manner possible.
Jeffrey Cosloy (Portland OR)
I always giggle when I read the phrase “late stage capitalism,” as if 19th-century Marxism is still a relevant roadmap to the future. We could very well be in the middle phase of capitalism, where IT is consolidating its hold on our most intimate lives. In this reading of history we could be in for a lesson in mass confirmation bias: the bias against the orderly flow of history in favor of the apocalyptic. Read Barbara Tuchman’s A Distant Mirror.
Martin (New York)
AI can do absolutely amazing things when it’s used to help people. The fact that such a powerful tool is being used by the big tech companies to monitor & manipulate us for profit terrifies me.
Chip Leon (San Francisco)
I'm trying to figure out what this column is about. If only I were an A.I., I'm sure I'd comprehend it in an instant. The column started by describing how well A.I. can help diagnose depression and why that's very important. OK, fair point, if a bit unexciting. Then, however, it segued into how A.I. is great at working in OTHER areas of medicine, and how complex humans are. As Trump might say, no one knew how complicated humans were! Finally, at the very end, where you normally would expect a summary, it instead suddenly introduced a new topic about A.I. privacy implications, with a grand finale of a deep question about whether it's good for all us complicated little old humans to know ourselves more deeply. I don't know what the implications will be of us understanding ourselves so deeply, but I would like to understand this column a bit better.
Wayne Woodward (Baltimore, Maryland)
@Chip Leon Technological simulations of human life features (or features of the natural world, for that matter) are always based on typifications (how often a typical person moves her head or sustains a smile; whether the voice has a typified or 'abnormal' breathy quality; whether the most desperate moment of despair in a life in jeopardy occasions an emoticon or the word 'suicide'). Such typifying calculations of probabilities may be fine when the issue is designing a technology to enable the person with a typical arm capacity and leg extension and strength to carry a load more easily. Yes, I am in favor of wheelbarrows. I also favor hammers, and many other clear and distinct ways of extending the functional attributes of persons. But this approach Brooks recommends, of substituting for the most intimate dimensions of personhood, such as how we manage existential experiences of joy and despair, is a moral affront. I urge readers to take Brooks' pronouncements with a grain of salt whenever his intellectual arrogance takes him beyond themes that can eventuate in an affirmation of Republican ideology.