Nanosensors patrolling your bloodstream for the first sign of an imminent stroke or heart attack, releasing anticlotting or anti-inflammatory drugs to stop it in its tracks. Cell phones that display your vital signs and take ultrasound images of your heart or abdomen. Genetic scans of malignant cells that match your cancer to the most effective treatment.
In cardiologist Eric Topol’s vision, medicine is on the verge of an overhaul akin to the one that digital technology has brought to everything from how we communicate to how we locate a pizza parlor. Until now, he writes in his upcoming book The Creative Destruction of Medicine: How the Digital Revolution Will Create Better Health Care, the “ossified” and “sclerotic” nature of medicine has left health “largely unaffected, insulated, and almost compartmentalized from [the] digital revolution.” But that, he argues, is about to change.
Digital technologies, he foresees, can bring us true prevention (courtesy of those nanosensors that stop an incipient heart attack), individualized care (thanks to DNA analyses that match patients to effective drugs), cost savings (by giving patients only those drugs that help them), and a reduction in medical errors (because of electronic health records, or EHRs). Virtual house calls and remote monitoring could replace most doctor visits and even many hospitalizations. Topol, the director of the Scripps Translational Science Institute, is far from alone: e-health is so widely favored that the 2009 U.S. stimulus legislation allocated billions of dollars to electronic health records in the belief that they will improve care.
Anyone who has ever been sick or who is likely to ever get sick—in other words, all of us—would say, Bring it on. There is only one problem: the paucity of evidence that these technologies benefit patients. Topol is not unaware of that. The eminently readable Creative Destruction almost seems to have two authors, one of them a rigorous, hard-nosed physician/researcher who insightfully critiques the tendency to base treatments on what is effective for the average patient. This Topol cites study after study showing that much of what he celebrates may not benefit many individual patients at all. The other author, however, is a kid in the electronics store whose eyes light up at every cool new toy. He seems to dismiss the other Topol as a skunk at a picnic.
Much of the enthusiasm for bringing the information revolution to medicine reflects the assumption that more information means better health care. Actual data offer reasons for caution, if not skepticism. Take telemonitoring, in which today’s mobile apps and tomorrow’s nanosensors would measure blood pressure, respiration, blood glucose, cholesterol, and other physiological indicators. “Previously, we’ve been able to assess people’s health status when they came in to a doctor’s office, but mobile and wireless technology allow us to monitor and track important health indicators throughout the day, and get alerts before something gets too bad,” says William Riley, program director at the National Heart, Lung, and Blood Institute and chairman of a mobile health interest group at the National Institutes of Health. “Soon there won’t be much that we can’t monitor remotely.”
Certainly, it is worthwhile to monitor blood pressure, glucose, and other indicators; if nothing else, having regular access to such data might help people make better choices about their health. But does turning the flow of data into a deluge lead to better results on a large scale? The evidence is mixed. In a 2010 study of 480 patients, telemonitoring of hypertension led to larger reductions in blood pressure than did standard care. And a 2008 study found that using messaging devices and occasional teleconferencing to monitor patients with chronic conditions such as diabetes and heart disease reduced hospital admissions by 19 percent. But a 2010 study of 1,653 patients hospitalized for heart failure concluded that “telemonitoring did not improve outcomes.” Similarly, a recent review of randomized studies of mobile apps for smoking cessation found that they helped in the short term, but that there is insufficient research to determine the long-term benefits. Given the land rush into mobile health technologies, or “m-health,” the lack of data on their helpfulness raises concerns. “People are putting out systems and technologies that haven’t been studied,” says Riley.
These concerns also apply to technologies we don’t have yet, like those nanosensors in our blood. For instance, studies have reached conflicting conclusions about whether diabetics benefit from aggressive glucose control—something that could be provided by nanosensors paired with insulin delivery devices. Several studies have found that it can lead to hypoglycemia (dangerously low levels of blood glucose) and does not reduce mortality in severely ill diabetics. And sensors may be no better at detecting incipient cancers or heart attacks. If the ongoing debate about overdiagnosis of breast and prostate cancer has taught us anything, it should be that an abnormality that looks like cancer might not spread or do harm, and therefore should not necessarily be treated. For heart attacks, we need rigorous clinical trials establishing the rate of false positives and false negatives before we start handing out nanosensors like lollipops.
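A back-of-the-envelope calculation shows why those error rates matter so much. The sketch below applies Bayes’ rule to a hypothetical heart-attack nanosensor; the sensitivity, specificity, and prevalence figures are illustrative assumptions, not measurements of any real device:

```python
# Illustrative base-rate arithmetic for a hypothetical nanosensor
# screening continuously for incipient heart attacks. All numbers
# are assumptions for the sake of the example.

sensitivity = 0.99   # assumed: sensor flags 99% of true incipient heart attacks
specificity = 0.99   # assumed: sensor stays quiet 99% of the time in healthy people
prevalence  = 1e-5   # assumed: chance a given monitored hour precedes a heart attack

# Bayes' rule: probability that an alert signals a real emergency
p_alert = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
ppv = sensitivity * prevalence / p_alert

print(f"Probability an alert is real: {ppv:.2%}")
# ~0.10% -- even a 99%-specific sensor would raise roughly a thousand
# false alarms for every true one at this prevalence, which is why
# trial data on error rates must come before mass deployment.
```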
EHRs also seem like a can’t-miss advance: corral a patient’s history in easily searched electrons, rather than leaving it scattered in piles of paper with illegible scribbles, and you’ll reduce medical errors, minimize redundant tests, avoid dangerous drug interactions (the system alerts the prescriber if a new prescription should not be taken with an existing one), and ensure that necessary exams are done (by reminding a physician to, say, test a diabetic’s vision).
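The interaction alert is the simplest of these features to picture in code. Here is a minimal sketch of that kind of check, with a hypothetical two-entry interaction table standing in for a real clinical database:

```python
# Minimal sketch of the interaction alert an EHR might run when a new
# prescription is entered. The table and drug pairs are illustrative
# placeholders, not a real clinical knowledge base.

INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"simvastatin", "clarithromycin"}): "statin toxicity",
}

def check_new_prescription(current_meds, new_drug):
    """Return warnings for known interactions between the new drug
    and anything the patient already takes."""
    return [
        f"ALERT: {new_drug} + {med}: {INTERACTIONS[frozenset({new_drug, med})]}"
        for med in current_meds
        if frozenset({new_drug, med}) in INTERACTIONS
    ]

print(check_new_prescription(["warfarin", "metformin"], "aspirin"))
# ['ALERT: aspirin + warfarin: increased bleeding risk']
```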
In practice, however, the track record is mixed. In one widely cited study, scientists led by Jeffrey Linder of Harvard Medical School reported in 2007 that EHRs were not associated with better care in doctors’ offices on 14 of 17 quality indicators, including managing common diseases, providing preventive counseling and screening tests, and avoiding potentially inappropriate prescriptions to elderly patients. (Practices that used EHRs did do better at avoiding unnecessary urinalysis tests.) Topol acknowledges that there is no evidence that the use of EHRs reduces diagnostic errors, and he cites several studies that, for instance, found “no consistent association between better quality of care and [EHRs].” Indeed, one disturbing study he describes showed that the rate of patient deaths doubled in the first five months after a hospital computerized its system for ordering drugs.
Financial incentives threaten another piece of Topol’s vision. Perhaps the most promising path to personal medicine is pharmacogenomics, or using genetics to identify patients who will—or will not—benefit from a drug. Clearly, the need is huge. Clinical trials have shown that only one or two people out of 100 without prior heart disease benefit from a certain statin, for instance, and one heart attack victim in 100 benefits more from tPA (tissue plasminogen activator, a genetically engineered clot-dissolving drug) than from streptokinase (a cheap, older clot buster). Genetic scans might eventually reveal who those one or two are. Similarly, as Topol notes, only half the patients receiving a $50,000 hepatitis C drug, and half of those taking rheumatoid arthritis drugs that ring up some $14 billion in annual sales, see their health improve on these medications. By preëmptively identifying who’s in which half, genomics might keep patients, private insurers, and Medicare from wasting tens of billions of dollars a year.
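The arithmetic behind those savings is easy to sketch. In the calculation below, the $50,000 price and the 50 percent response rate come from the figures above; the patient count and the test cost are hypothetical assumptions:

```python
# Back-of-the-envelope savings from testing before prescribing.
# Drug price and response rate come from the text; the patient
# count and test cost are hypothetical assumptions.

drug_cost     = 50_000   # per course of the hepatitis C drug (from the text)
response_rate = 0.5      # half of patients improve (from the text)
patients      = 100_000  # assumed number of candidates per year
test_cost     = 500      # assumed price of a predictive genetic test

treat_everyone  = patients * drug_cost
test_then_treat = patients * test_cost + patients * response_rate * drug_cost

print(f"Treat all:        ${treat_everyone / 1e9:.2f}B")
print(f"Test, then treat: ${test_then_treat / 1e9:.2f}B")
# $5.00B vs. $2.55B -- with these assumptions, a perfectly predictive
# test cuts spending nearly in half. Real tests are rarely so accurate
# or so cheap, which is part of why insurers remain unconvinced.
```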
Yet despite some progress in matching cancer drugs to tumors, pharmacogenomics “has had limited impact on clinical practice,” says Joshua Cohen of the Tufts Center for the Study of Drug Development, who led a 2011 study of the field. Several dozen diagnostics are in use to assess whether patients would benefit from a specific drug, he estimates; one of the best-known analyzes breast cancers to see if they are fueled by overexpression of the HER2 protein, which means they are treatable with Herceptin. But insurers still doubt the value of most such tests. It’s not clear that testing everyone who’s about to be prescribed a drug would save money compared with giving it to all those patients and letting the chips fall where they may.
Genotyping is not even routine in clinical trials of experimental cancer drugs. As Tyler Jacks, an MIT cancer researcher, recently told me, companies “run big dumb trials” rather than test drugs specifically on patients whose cancer is driven by the mutation the drug targets. Why? Companies calculate that it is more profitable to test these drugs on many patients, not just those with the mutation in question. That’s because although a new drug might help nearly all lung cancer patients with a particular mutation, a research trial might indicate that it helps—just to make up a number—30 percent of lung cancer patients as a whole. Even that less impressive number could be enough for Food and Drug Administration approval to sell the drug to everyone with lung cancer. Limiting the trial to those with the mutation would limit sales to those patients. The risk that the clinical trial will fail is more than balanced by the chance to sell the drug to millions more people.
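The calculation Jacks describes can be sketched in a few lines. Every figure below is hypothetical (the 30 percent response rate echoes the made-up number above), but the logic is the point:

```python
# Why a "big dumb trial" can beat a targeted one: expected revenue.
# All figures are hypothetical, chosen only to illustrate the logic;
# the 30% mutation share echoes the made-up number in the text.

total_patients    = 200_000  # assumed annual lung cancer market
mutation_fraction = 0.3      # assumed share whose tumors carry the target mutation
revenue_per_pt    = 100_000  # assumed revenue per treated patient

p_approval_broad    = 0.5    # assumed: diluted effect still clears the FDA bar
p_approval_targeted = 0.9    # assumed: a mutation-only trial is likelier to succeed

broad    = p_approval_broad * total_patients * revenue_per_pt
targeted = p_approval_targeted * total_patients * mutation_fraction * revenue_per_pt

print(f"Expected revenue, broad trial:    ${broad / 1e9:.1f}B")
print(f"Expected revenue, targeted trial: ${targeted / 1e9:.1f}B")
# $10.0B vs. $5.4B -- under these assumptions, the big trial's higher
# failure risk is more than offset by the larger approved market.
```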
Such financial considerations are not all that stands in the way of Topol’s predictions. He and other enthusiasts need to overcome the lack of evidence that cool gadgets will improve health and save money. But though he acknowledges the “legitimate worry” about adopting technologies before they have been validated, his cheerleading hardly flags. “The ability to digitize any individual’s biology, physiology, and anatomy” will “undoubtedly reshape” medicine, he declares, thanks to the “super-convergence of DNA sequencing, mobile smart phones and digital devices, wearable nanosensors, the Internet, [and] cloud computing.” Only a fool wouldn’t root for such changes, and indeed, that’s why Topol wrote the book, he says: to inspire people to demand that medicine enter the 21st century. Yet he may have underestimated how much “destruction” will be required for that goal to be realized.
Sharon Begley, a former science columnist at Newsweek and the Wall Street Journal, is a contributing writer for Newsweek and its website, the Daily Beast.