How Artificial Intelligence Could Transform Medicine

Mar 11, 2019 · 42 comments
Norbert Voelkel (Denver)
Good luck, Eric Topol--another example of the opportunistic hype you are famous for. Let's recall your breathless endorsements of gene therapy and the embrace of other mirages following the Human Genome Project. Las Vegas would be the right place for you to practice now. There is no chance that A.I. will make doctors touch patients, lay their hands on them as they used to. Machine medicine is here, and A.I. will perfect machine medicine. A.I. has no soul. Medicine used to be science--and art. Eric, read William Osler, maybe for the first time? The mind of the machine is not human. The roots of medicine, deep as they are, lie in alchemy, and Paracelsus was the first to understand how to apply alchemy to the healing arts. As you well know, simply put, medicine continues to present two challenges: the correct diagnosis (and here A.I. will help) and then the treatment of the human condition (la condition humaine). Every doctor practicing in a hospice knows about this. Do you?
Greg Latiak (Amherst Island, Ontario)
Medical diagnosis has been a fascination of AI development and expert systems for a very long time (decades). The problem always is that they are no smarter than the people who trained the algorithms, so the potential for stupid mistakes and misdiagnosis still exists. The danger is that there is always a tendency to 'trust' the computer... so a lazy or overworked practitioner might leave too much to the AI -- to the patient's detriment. But in a time when corporations are rushing to replace complex human judgment and experience with easily replicable machines, that will probably not be an impediment. We will all be net worse off.
M (NY)
The general rule in tech is to first send the job offshore. Next, automate it and eliminate the need to offshore. Radiology fits this pattern well. Radiologists won’t disappear but will definitely shrink in number.
J. Parula (Florida)
I have some questions about the adjective "deep" in the title of the book: it is vague, has many connotations, and seems to refer to deep learning techniques. But many of the AI algorithms in medicine are not based on deep learning. Pattern recognition is certainly an area in which AI algorithms can help, and also speech recognition, but understanding natural language is a very open problem in AI. Matching words is not going to make it. We lack algorithms that correctly parse sentences and then determine the grammatical subject, direct object, indirect object, and prepositional phrases. The problem of meaning is even harder. Consider the differing meanings of "with" in the sentences "She ate the spaghetti with a fork/cheese/her friend/great difficulty." The meaning of "with" changes with every noun that may follow the preposition. Common-sense knowledge is required to understand most sentences in natural language, and that knowledge is between the lines and is not stated in the text. I missed in the interview a question about Watson, which has been trained and tested at Memorial Sloan Kettering. We know now the limitations of Watson. I am afraid that if we do not stop the AI hype we are going to have another AI winter, or, what is worse, we are going to confuse the general public beyond repair. So, yes, AI is great and a wonderful area of research, but please temper the hype.
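The commenter's point that "matching words is not going to make it" can be sketched with a toy example. The code below is purely hypothetical, not any real NLP system: a bag-of-words matcher sees all four "with" sentences as identical, even though the preposition plays a different role (instrument, ingredient, companion, manner) in each.

```python
# Toy illustration of the prepositional-attachment problem described above.
# A bag-of-words "parser" only registers that "with" occurs; it carries no
# information about what "with" attaches to or what sense it has.

sentences = [
    "She ate the spaghetti with a fork",            # instrument
    "She ate the spaghetti with cheese",            # ingredient
    "She ate the spaghetti with her friend",        # companion
    "She ate the spaghetti with great difficulty",  # manner
]

def naive_match(sentence):
    """Return whether the word 'with' occurs -- all a keyword matcher sees."""
    return "with" in sentence.lower().split()

# Every sentence matches identically, so word matching alone cannot
# distinguish the four meanings -- exactly the commenter's point.
results = [naive_match(s) for s in sentences]
print(results)  # [True, True, True, True]
```

Disambiguating these sentences requires the common-sense knowledge the comment mentions (forks are utensils, friends are people), which is not present in the words themselves.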
Rob D (Oregon)
The rewards and risks of deploying autonomous computing for medical decision making take on a different light as Boeing confronts likely software bugs, and perhaps faulty or at least misinterpreted sensor input, in the design and implementation of algorithmic stall control for the Boeing 737 MAX 8 and MAX 9.
Kara Ben Nemsi (On the Orient Express)
"But it will require substantial activism of the medical community to stand up for patients and not allow the jump in productivity to further squeeze clinicians, upending the erosion of the doctor-patient relationship." DREAM ON!
Krautman (Chapel Hill NC)
The American medical system is a cartel. Where will AI fit in to this controlled system?
Boregard (NYC)
My former, elderly, old-school family doc from the '80s was a remarkable diagnostician. For his time. He out-diagnosed the specialists he sent my family to, which of course angered them to no end. But when you're right, you're right. He often said that the reason for the rise in over-testing, and the taking over of the profession by narrowly defined specialists, was mostly due to the shift in the medical student body. Not their bodies, but the general student body. Doctors like him, he said, were for the most part students of the science, whereas the young ones coming up behind him were students of the profession. They were there to be a "Professional." My old doc, by contrast, was obsessed with the science, the details. Not sure if he was wrong or right. But in my experience since, watching my parents age, get sick, and pass, and myself grow older and go through some "stuff," the doctors I found most comfortable and had the greatest successes with were INTO it for the science. They were nerdy, all into the details, and were still in awe of the science -- NOT themselves. That's what we'll get from AI: deep dives into the science and the ever-growing depth of data. Of course it will lack the human aspect, lack the emotional charge, the human connection. And there will likely be some disastrous mistakes. I don't trust AI as a magic bullet. No way. Not in medicine, not in transportation, not in anything. Not in society in any way. Only we humans can handle those problems.
Neil Raden (US)
Eric Topol is usually on the right side of the argument for better healthcare, but his enthusiasm for AI in medicine is troubling. AI for medicine is fraught with ethical issues. I've discussed this in a recent article. https://diginomica.com/modeling-humans-for-personalized-medicine-a-prescription-for-trouble/
Carl Mudgeon (A Small State)
@Neil Raden I concur. At this point in his career he ought to have some "deep cynicism." Among other things, he appears to marvel at the absence of US national "planning" in the medical AI field-- this simply means that the planning is being done by industry (e.g. AliveCor) which will pick the low-hanging profit fruit and make use of opinion leaders to market their electronic products. Caregivers and patients are merely the means by which the products fulfil their profit purpose.
Charlie (Key Biscayne)
When I was a kid, doctors had special MD license plates. Restaurant reservations were made under "Dr. Jones" to get a coveted table on a busy night. Docs enjoyed societal prestige and power. It's not surprising that most obstacles to healthcare reform over the past 30 years have come from physicians and their representatives (e.g., the AMA). There's far too much process variation in medical care, resulting in avoidable waste, cost, and tragic harm events. Variation comes from doctors practicing the "art" of medicine rather than the "science" -- ironic given the physician's scientific training. Data, analytics, and technology take a back seat to gut and intuition. Think about the impact on our aviation industry if pilots were given the same latitude to literally fly by the seat of their pants. Interesting that most of the physician commentators here have been in the profession for a long time. I don't blame them for lamenting the loss of their license plates. I wonder if the next generation feels the same way. AI is not the silver bullet, and neither are EHRs or any other technology. However, there's no doubt that they can improve the quality and lower the cost of care...but only if the Luddites get on board.
Barbarika (Wisconsin)
@Charlie Let's ignore your envy of doctors for a minute. The deeper point you raise is that human disease is a purely deterministic process and can be handled by a rigid rule-based system. I think you would do better to spend some time in an ER or even a cancer center. The tremendous diversity in the evolution of identical diseases in different patients will quickly disabuse you of the notion that treating a patient is like flying a Boeing. Then there is the issue of mind-body interaction, where arguably one entity, the mind, has not been proven to be completely deterministic. Medicine needs to become more and more personalized, whether with AI's help or not.
Becks (CT)
"But it will require substantial activism of the medical community to stand up for patients and not allow the jump in productivity to further squeeze clinicians, upending the erosion of the doctor-patient relationship." That's going to be a problem. Absent some regulatory controls, as AI frees up more clinician time, insurance companies are going to reduce the use of clinicians, not give clinicians more time to interact with patients.
John (New York City)
As a practicing radiologist for many years, as well as a computer enthusiast for over 35 years, I see us losing the Art of Medicine to these computers, which slow down the process of practicing medicine. The original teaching was that 90% or so of the diagnosis was made from the history, 5% from the examination, and 5% from tests. Now it is more like 80% lab tests and X-rays, and not enough time devoted to listening and observing. Computers really don't do that. I watch as my internist sits next to me typing all the answers into the computer but fails to hit on the important things that might concern me, because they are not in the algorithm or in the computer questionnaire. AI in the form of CAD (computer-assisted detection) in mammography has been around for quite some time. There are radiologists who recall patients because CAD (the all-knowing computer) has marked an area, while other radiologists instead review the marked area and most of the time dismiss what is marked. If this is the AI, but fancier, in mammography, there may be many burps over many years before WOPR, in the movie War Games, learns that thermonuclear war is like tic-tac-toe: no one wins. There is the visual acuity, which the computer may have, that could help many radiologists, and then there is the gut intuition that pervades medicine that can't be taught to computers. Let us expedite medicine so that the doctor can talk again, spend time with the patient, and arrive at an answer with fewer exams.
Tony S (Connecticut)
One big problem is malpractice / liability. Who’s going to be responsible when A.I. makes the wrong diagnosis, leading to injury or death? We know that will happen, since no system is perfect. When that happens, are injured patients then supposed to sue the software company? If A.I. use becomes widespread, even a 1% mistake rate will produce enough injuries and deaths to bankrupt the company behind the A.I. program. One may wonder if all of this A.I. hype in healthcare is mostly to generate money from venture capital and IPOs (while the people involved know quite well that their use will never become widespread).
Imperato (NYC)
Never say "never" seems to be a lesson lost on the good doctor.
DMH (nc)
The next-to-last question interested me the most --- AI diagnoses for mental health. If "red flag" indications of mental illness can be a criterion for gun control, how does one subject suspected deviants to such a diagnostic tool? George Orwell's Big Brother might have an answer to this, eh?
HipOath (Berkeley, CA)
When human beings talk to each other, there is frequently some miscommunication and missed communication, even between doctors and patients. Very good doctors get the information they need to make the correct diagnosis most of the time. One way a computer could be helpful is by providing detailed questionnaires that follow the logic created first by the patient's social/medical history (e.g., age, sex, smoker/non-smoker) and then by the nature of the complaint: What is the problem? Pain. Pain where? In the lower right abdominal quadrant. On a scale of 1 to 10, 10 being the worst pain, how would you rate it? If you press on that part of your abdomen, does that cause the pain to increase? And so on. Filling out a detailed questionnaire does not mean that a doctor will not go over the questionnaire with his patient, or that a doctor won't ask questions that are not on the questionnaire. But the completion of a detailed questionnaire can form the backbone of the communication that is going to take place between the patient and her doctor. It would relieve the doctor of having to write down all the information that is already on the questionnaire, and it may contain information that the doctor, on reflection, will consider important, even if at first she didn't think a particular piece of information was important. So, I think that intelligent questionnaires could be very useful in medical practice.
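The branching logic this commenter describes is essentially a decision tree walked by the patient's answers. Below is a minimal sketch; the questions, keys, and branch structure are entirely hypothetical, just to show how each answer can select the next question.

```python
# Minimal branching-questionnaire sketch. All questions and branches are
# hypothetical illustrations of history-driven branching, not clinical logic.

QUESTIONNAIRE = {
    "question": "What is the main problem?",
    "key": "complaint",
    "branches": {
        "pain": {
            "question": "Where is the pain?",
            "key": "location",
            "branches": {
                "lower right abdomen": {
                    "question": "On a scale of 1-10, how bad is the pain?",
                    "key": "severity",
                    "branches": {},  # leaf: no further follow-ups
                },
            },
        },
    },
}

def run(node, answers):
    """Walk the tree using the patient's answers; collect the history taken."""
    history = {}
    while node:
        answer = answers.get(node["key"])
        history[node["key"]] = answer
        node = node["branches"].get(answer)  # next question depends on answer
    return history

history = run(QUESTIONNAIRE, {
    "complaint": "pain",
    "location": "lower right abdomen",
    "severity": "8",
})
print(history)
# {'complaint': 'pain', 'location': 'lower right abdomen', 'severity': '8'}
```

The resulting `history` dict is the "backbone" the commenter mentions: a structured record the doctor can review, question, and extend rather than transcribe from scratch.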
Stephen Rinsler (Arden, NC)
Dr. Topol is an enthusiast for replacing human imperfections with machine imperfections. I am a retired physician who practiced primary care and subsequently was a clinical drug researcher. I love using a computer. The difficult trick will be to develop machine aids that enhance the physician, rather than seeking to be a replacement for human intelligence. The current abysmal electronic medical records are a dire example of how to make care worse rather than better. As far as I can tell, they generally are used as nothing but a repository for physician notes (and possibly to send out visit reminders and assist with billing?). The cost is a physician who has to focus on his/her laptop rather than the patient. One would like the machine to note what transpires in the patient-physician interaction WITHOUT any action by the physician. Then, the physician could subsequently review this material. Additionally, the database of patient records should be EASILY analyzed for meaningful "signals" that the physician and/or her/his staff could use to improve patient care. Let the technology be improved before calling for its widespread use. Stephen Rinsler, MD
Unaffiliated (New York)
My original training and experience has been as a practicing physician. However, recent changes in medical practice, (the electronic medical record, etc), have forced me to become both a typist and data entry clerk, neither of which I enjoy or show much proficiency in. I became a physician and remained one primarily to help people, requiring both a heart and a soul to accomplish. However, a wonderful and personally intimate profession has become a soulless, cold business with computer screens for faces and processors for brains and hearts. National insurance companies help to ration care, and physicians are no longer physicians but, rather, have become “providers” in the same sense as any technician performing a service. We are operated on with robots and looked after with telemedicine screens. This is not natural; this is artificial, impersonal and, to me, abhorrent. I may be tilting at windmills, but I remember a better time. How about you?
Susan (Nashville)
Well said. Fortunately for me as a colorectal surgeon, we don't yet have a robot to do the rectal exam.
Unaffiliated (New York)
Thanks for your feedback. Like you, I was trained to examine and treat my patients using my senses of touch, sight, smell, and hearing. I actually use a stethoscope and my own two hands. I don’t need a robot or any other surrogate to do my job for me, and my patients appreciate this. Neither they nor I appreciate the diversion of attention required to fill out the electronic medical records forms. Likewise, neither they nor I appreciate the frustration of dealing with robot - like insurance companies and their talking heads.
KBronson (Louisiana)
@Susan Other than specialists such as yourself, we don't have doctors to do rectal exams either. Four primary care physicians in the last 20 years -- two male internists, one female FP, one male FP -- all say they don't do them and tell me to go to a urologist. Which is absurd. So none has been done. Next I guess I'll have to see a cardiologist to get my blood pressure checked. I left clinical medicine the very last day I was able to successfully resist using the EMR, and the impact has been worse than I anticipated.
DEF MD (Miami)
Like nearly all contemporary discussions of AI, the article vastly overestimates the capabilities of artificial intelligence and underestimates the sheer computational power of the human cerebral cortex. AI has not shown that it is capable of anywhere near our processing power, particularly our visual processing, and, despite all the hyperbole from vested parties, AI cannot even drive cars safely under real-world conditions. I think AI, truly in its infancy, should aim far lower. When an AI machine can match the intelligence and abilities of a dog, then maybe it will be on its way to matching those of a human.
Calleen de Oliveira (FL)
Just read the book "Everybody Lies," and the author had a few good examples of how AI can help healthcare. But as a nurse working at the VA, I see that mostly the patients want companionship. They come in almost daily to see nurses and docs and want to visit. Until we get the loneliness factor handled, it really doesn't matter.
G. Marks (Alfred, New York)
The problem, as I see it, is that incorporating AI into the treatment room means doctors will be even more focused on screens than they already are and not focused on the human being sitting in front of them. I go to the doctor now and they go right to the screen, no eye contact -- the whole interaction is about the screen and inputting data. This is not what healing is. Doctors need to sit down next to their patient, look at them, establish some kind of rapport, ask them how they are... and then listen. Then go to the screen and type if they have to. AI might have clinical value, but if it further removes the doctor from the patient, and the distance is already huge, what passes for the practice of medicine will have completely lost its soul.
Gary (Brooklyn)
There is plenty of room for bias and failure in AI. John Giannandrea, head of AI at Google, says "The real safety question, if you want to call it that, is that if we give these systems biased data, they will be biased." Those doctors who keep doing things that are not proven, like prescribing opioids, may well be training their AI software to do the same. AI is like a microscope, a tool that needs guidance.
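The "biased data in, biased system out" point can be sketched in a few lines. The records and the "model" below are hypothetical toys: the model simply learns the most common past treatment per symptom, so if the training records over-prescribe, the model does too.

```python
# Toy sketch of learned bias (hypothetical data, not a real clinical model).
from collections import Counter

# Hypothetical past records: (symptom, treatment prescribed).
# Two of three doctors reached for an opioid, so the majority label is biased.
training = [
    ("chronic back pain", "opioid"),
    ("chronic back pain", "opioid"),
    ("chronic back pain", "physical therapy"),
]

def fit(records):
    """Learn the most common treatment for each symptom in the records."""
    by_symptom = {}
    for symptom, treatment in records:
        by_symptom.setdefault(symptom, Counter())[treatment] += 1
    return {s: counts.most_common(1)[0][0] for s, counts in by_symptom.items()}

model = fit(training)
print(model["chronic back pain"])  # opioid -- the bias is reproduced, not corrected
```

Nothing in the fitting step questions whether the majority practice was sound; the model faithfully amplifies whatever pattern, good or bad, the training data contains.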
oogada (Boogada)
@Gary "AI is like a microscope, a tool that needs guidance." Oh yeah, you mean like capitalism, say, or democracy. Because that's working. In the name of America, of nobility of spirit, of adventure and freedom, we resist to the death the very regulation everyone from Adam Smith to Alexis de Tocqueville insists is a base condition for democracy and capitalism to have any hope of success, at least in the absence of extraordinarily ethical, selfless, committed leaders and elites. All this fake ethical hoo-hah is offensive. This is expensive, elitist technology. The only mechanism offering even upper-middle-class patients hope of participating, private insurance, exists solely to deny coverage, to deny help, to those who need it most. Like bank loans and mortgages. You can cavil about the excellence and the opportunities of these technologies, but they are a rigged game from the start, and meant to be. The technologies themselves may be wonderful, or potentially so, but not yet. Not yet. What's happening now is experimenting, a lot of (maybe well-founded) hope, and a lot of close-your-eyes, push-the-button, hope-for-the-best. Just don't pretend this is a boon for all mankind. Not until the very rich have had their turn for a half-century or so, and something better is on the horizon. In the same vein, it's kind of cute, all these global meetings of scientists to declare manipulation of baby genes out of bounds. Horse gone, door hangs open.
Jaque (Champaign, Illinois)
One vital point missed in this discussion is the ability of AI, or computers in general, to absorb the vast amount of information that is published in every field. No human is capable of absorbing all past publications in any field, let alone in the medical sciences. In the health sciences, two examples come to mind: depression treatment and cancer treatment. In both cases, no single treatment works, and finding the right one from past experience is where AI can do far better than humans. Another example is collecting data from a couple of hundred million people who are healthy and learning from them why they are healthy. Such an effort is underway by NIH in the All of Us program. https://allofus.nih.gov/
Neil Raden (US)
@Jaque The problem, which became abundantly evident with Watson, is how many journal articles are ghost-written by pharmaceutical companies and how many clinical trials are reversed within a few years. That vast corpus can lead to overconfidence built on a soft foundation.
Richard (Palm City)
We just learned from the news that tele-medicine is really good at remotely telling patients that they will never leave the hospital. With AI it will be a robot telling the patient he will die soon.
Giovanni Ciriani (West Hartford, CT)
Interesting article. Dr. Topol is wrong, though, about the claim that machines can't contextualize. One counterexample is machine translation, which has improved tremendously thanks to contextualization techniques. I'm taking AI classes to retrain in this field, and this is shown to be a very powerful improvement. Another technique, important for X-ray interpretation, is attention: the ability to determine where the analysis of the picture should concentrate. These are just two examples of how humans can contribute to A.I. and medicine at the same time: determine how we do it, and mimic the behavior with a machine.
John La Puma MD (Santa Barbara)
I love the theme of Dr. Topol's book and work: he is interested in A.I. because it can restore, or at least honor, the doctor-patient relationship rather than replace it. Let A.I. and computer-assisted diagnosis do the technical work...great if they're better than what humans can do. Many U.S. physicians are burned out, however. The root causes of burnout include too much bureaucracy and too many hours at work. If A.I. can help physicians again find meaning in our work, and if health systems can promote autonomy and invest resources in creating time for physicians to heal (both ourselves and our patients), then A.I. will be embraced. In other words, A.I. has to improve well-being, of both doctors and patients, not just productivity, efficiency, and workflow. Right now, to cope with burnout, doctors are exercising, building resilience with nature (immersions, gardening, adventure, and more), and sleeping. Oh, and I like the idea of personalized nutrition for specific (very specific) culinary medical problems...but otherwise, I tell many patients to eat lots of plants and healthy proteins, avoid processed food, sugars, and starches, and enjoy their meals, including wine.
Gustav (Durango)
"But it will require substantial activism of the medical community to stand up for patients..." This statement is naive and an anachronism. As a fellow physician, it is clear to me that Dr. Topol has not practiced in this country for a while. In the last few years, nurses and physicians have lost any say in the major decisions. Corporate CEOs with MBAs have taken over, and it's all about profits, not patient care. See if your community hospital board has any local control any more. See if your local hospitalists and ER docs are being herded into physician management groups owned by venture capitalists who operate without accountability. See if EMR, the earliest and simplest form of AI, connects all clinics and hospitals, or if voice recognition even works for everyone. Prepare to be shocked. The fox has been in the hen hou$e for a while now.
White Buffalo (SE PA)
@Gustav The first wrong step was substituting hospitalists for primary physicians who knew their patients.
Stephanie (California)
@White Buffalo: Primary physicians may not have much experience with people who are so sick that they need to be hospitalized. I have no problem with hospitalists and don't see how having that specialty was a "wrong step". In addition, if primary care physicians had to see hospital patients, imagine how long you would have to wait for an office appointment.
KBronson (Louisiana)
@Gustav In losing control of their working conditions, physicians are no longer actual professionals. They are zombie professionals. There have been many steps down the path, but the definitive major precipice was when they allowed others to force EMR on them.
Vincent (Alexandria)
The way to bring down health care costs is to eliminate or reduce the role of doctors and insurance companies. Those two entities are responsible for high medical costs. To the extent that AI can perform this task, more power to it. Start out by giving everyone free annual AI-based medical scans. That would be an improvement over Obamacare.
Anjec (Vienna)
@Vincent Personnel costs are the main contributor within the system, but they are not the cause of exploding healthcare costs; it's simply that the whole system is profit-oriented and, in parallel, there is almost no competition between hospitals, for example. The kind of screening you propose is, on the contrary, an immense cost driver: such screening scans generally come with low specificity, which entails a load of follow-up clarification exams of various kinds.
Brandon P (Atlanta)
@Vincent Eliminate doctors? The same doctors who do surgeries and procedures, talk to and comfort families, and actually talk to and treat countless patients? Doctors are the backbone of clinical medicine. A computer can't save your life in a critical moment.
Janet (Phila., PA)
@Anjec With all due respect, profit motive by itself does not cause skyrocketing costs. The lack of price transparency is unique to health care: is there any other product or service for which we have no advance cost estimate? I can't think of one. Competition drives down costs, but without price transparency what is the basis for competition?