Several years ago Vinod Khosla, the Silicon Valley investor, wrote a provocative article titled “Do We Need Doctors or Algorithms?” Khosla argued that doctors were no match for artificial intelligence. Doctors banter with patients, gather a few symptoms, hunt around the body for clues, and send the patient off with a prescription. This sometimes (accidentally, maybe) leads to the correct treatment, but doctors are acting on only a fraction of the available information. An algorithm, he wrote, could do better.
I’m a pediatric and adolescent physician in the San Francisco Bay Area, where entrepreneurs like Khosla have been knocking on doctors’ doors for years, pitching their pilot software and hardware. I can say with some authority that Khosla’s is the voice of a savvy outsider who knows what he knows—which isn’t health care.
Yes, AI could help us diagnose and treat disease. It could collate and serve up broad swaths of data in a clear and concise way, cutting down on the imprecise judgments that doctors make because of the pressure and complexity of our practices. There’s no doubt that for certain doctors whose work is highly focused on diagnosis (radiologists or pathologists, for example), that breakthrough may prove an existential threat. A decade ago, for example, researchers showed that AI was as good as radiologists at detecting breast cancer.
But for physicians like me in primary care, managing 1,500 to 2,000 patients, AI presents an opportunity. I went to medical school to connect with people and make a difference. Today I often feel like an overpaid bookkeeper instead, taking in information and spitting it back to patients, prescribing drugs and adjusting doses, ordering tests. But AI in the exam room opens up the chance to recapture the art of medicine. It could let me get to know my patients better, learn how a disease uniquely affects them, and give me time to coach them toward a better outcome.
Consider what AI could do for asthma, the most common chronic disease of childhood. Six million American kids suffer from it. In 2013, they collectively missed 14 million days of school. The cost of medications, visits to the doctor and emergency room, and hospitalizations nears $60 billion a year.
I diagnose asthma via a rule of thumb that’s been handed down over time: if you’ve had three or more wheezing episodes and the medicines for asthma help, you have the disease. Once it’s diagnosed, I ask the parents to remember—as best they can—how often they administer medicines to their child. I ask: What seems to trigger episodes? Is the child exposed to anyone who smokes at home? I can also review their records to count how many visits to the emergency room they’ve had, or the number of times they’ve refilled their prescriptions.
But even with the most accurate recall by parents and patients, and the most accurate electronic records, it’s still just retrospective knowledge. There’s no proactive, predictive strategy.
It’s not that we don’t have the data; it’s just that it’s messy. Reams of data clog the physician’s in-box. It comes in many forms and from disparate directions: objective information such as lab results and vital signs, and subjective concerns in the form of phone messages or e-mails from patients. It’s all fragmented, and we spend a great deal of our time as physicians trying to make sense of it. Technology companies and fledgling startups want to open the data spigot even further by letting their direct-to-consumer devices—phone, watch, blood-pressure cuff, blood-sugar meter—send continuous streams of numbers directly to us. We struggle to keep up with it, and rates of burnout among doctors continue to rise.
How can AI fix this? Let’s start with diagnosis. While the clinical manifestations of asthma are easy to spot, the disease is much more complex at a molecular and cellular level. The genes, proteins, enzymes, and other drivers of asthma are highly diverse, even if their environmental triggers overlap. A number of experts now think of asthma in the same way they think of cancer—an umbrella term for a disease that varies according to the tumor’s location and cellular characteristics. Ian Adcock of the National Heart & Lung Institute at Imperial College London studies the link between asthma and the environment. He and his team have been collecting biological samples from asthma patients’ blood, urine, and lung tissue and organizing the genetic and molecular markers they find into subtypes of asthma. The hypothesis is that with that kind of knowledge, patients can be given the drug that works best for them.
AI might also help to manage asthma flares. For many patients, asthma gets worse as air pollution levels rise, as happened this past summer when brush fires swept through Northern California. AI could let us take in environmental information and respond proactively. In 2015, researchers published a study showing they could predict the number of asthma-related emergency room visits to a Dallas–Fort Worth hospital. They pulled data from patient records, along with air pollution data from EPA sensors, Google searches, and tweets that used terms like “wheezing” or “asthma.” The Google and Twitter data were tied to the users’ location data.
If I had this kind of data I could say, “Alexa, tell me which asthma patients I need to worry about today.” I could give a heads-up to the affected families. And if I also had some genetic data like Adcock’s, I could diagnose asthma before the patient suffered three bouts of wheezing, by ordering blood tests and comparing the results against those molecular markers.
This kind of time-saving intelligence would free me to spend more time with my patients. One study showed that asthmatic children took or received their inhaled medications only about half the time. AI might allow me more time to personally interact with those kids, and get better results.
Lots of questions lie ahead. Are patients willing to share more of their personal data with us? If the AI shows that one course of care is better, but you or your doctor feels differently, will an insurance company accept it? What if the algorithm misses something or is applied incorrectly? Who is liable—the doctor or the machine’s maker?
Not long ago, in the Journal of the American Medical Association, I saw a colorful picture drawn by a child in crayon. It portrayed her pediatrician, eyes glued to the computer, while she sat on the exam table, looking wide-eyed. I hope that AI will soon allow me to turn my attention back to that little girl.
Rahul Parikh is a pediatrician in the San Francisco Bay Area.