You think too much

I very much enjoyed Jerome Groopman's book, How Doctors Think. I love his writing at the New Yorker. But I am afraid I didn't think very much of the book's thesis, which is that doctors need to improve their clinical judgment rather than relying on evidence-based medicine and statistics. "People are not statistics" is sloppy thinking; most of the time, we are. And there's substantial evidence that doctors do best when they treat their patients by the numbers.

Over at eSkeptic, Charles Lambdin voices the same criticism:

Groopman tells us he is troubled that new doctors seem to be trained to “think like computers,” that they rely on diagnostic decision aids and some seductive “boiler-plate scheme” called evidence-based medicine. Groopman’s position, when his various arguments are gathered and assembled, becomes untenable. He admits doctors suffer from innumerable biases that diminish the accuracy of diagnosis, reducing many diagnoses to idiosyncratic responses fueled by mood, whether the patient is liked or disliked, advertisements recently seen, etc. Thus Groopman agrees with decision scientists’ diagnosis of doctor decision making; but then he goes on to wantonly dismiss what many of the very same researchers claim is the best (and perhaps only) remedy, the way to “debias” diagnosis: evidence-based medicine and the use of decision aids. In place of statistics what does Groopman suggest doctors rely on? Clinical intuition of course, the very source of the cognitive biases he pays lip service to throughout his book.

. . .

Most doctors do not like decision aids. They rob them of much of their power and prestige. Why go through medical school and accrue a six-figure debt if you’re simply going to use a computer to make diagnoses? One study famously showed that a successful predictive instrument for acute ischemic heart disease (which reduced the false positive rate from 71% to 0) was, after its use in randomized trials, all but discarded by doctors (only 2.8% of the sample continued to use it). It is no secret many doctors despise evidence-based medicine. It is impersonal “cookbook medicine.” It is “dehumanizing,” treating people like statistics. Patients do not like it either. They think less of the abilities of doctors who rely on such aids.

The problem is that it is usually in patients’ best interest to be treated like a “statistic.” Doctors cannot outperform mechanical diagnoses because their own diagnoses are inconsistent. An algorithm guarantees the same input results in the same output, and whether one likes this or not, this maximizes accuracy. If the exact same information results in variable and individual output, error will increase. However, the psychological baggage associated with the use of statistics in medicine (doctors’ pride and patients’ insistence on “certainty”) makes this a difficult issue to overcome.

The statistics vs. clinical intuition debate has ensued for decades in psychology. Where one sides in the debate is largely determined by what one makes of a single phrase: “Group statistics don’t apply to individuals.” This claim, widely believed, ignores many of the most basic concepts of probability and statistics, such as error. Yes, individuals possess unique qualities, but they also share many features that allow for predictive power. If 95% of a sample with quality X has quality Y, insisting that someone with quality X may not have Y because “statistics don’t apply to individuals” will only decrease accuracy. Insistence on certainty decreases accuracy. As Groopman himself says, the perfect is the enemy of the good.

. . .

Physicians who allow themselves to think in such discretionary ways can find “exceptions” everywhere they look, and, augmenting a decision aid as they see fit, will only end up lowering its overall diagnostic accuracy. Why? Because human beings do not apply rules consistently. Mechanical procedures always lead to the same conclusion from the same input. Doctors are subject to random fluctuations in diagnosis caused by judgmentally-irrelevant factors including availability, priming, recency effects, inconsistent weighting of information, fatigue, etc., all of which reduce accuracy. What leads to a correct decision for one case may not for another, and variables that contribute to the diagnosis made may actually be uncorrelated with it.
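Lambdin's consistency argument is easy to sanity-check with a toy simulation. The Python sketch below uses his hypothetical 95 percent base rate; the 20 percent "override" rate, and everything else in it, is a number I made up purely for illustration, not anything drawn from the studies he cites.

# Toy simulation of Lambdin's point, with made-up numbers (not data from
# any study he cites): when 95% of patients with quality X have quality Y,
# always predicting Y beats a "judgment" that overrides the rule at random.

import random

random.seed(0)

N = 100_000
BASE_RATE = 0.95          # P(Y | X), Lambdin's hypothetical figure
OVERRIDE_RATE = 0.20      # how often the intuitive doctor finds an "exception" (assumed)

# Simulate patients who all have quality X; 95% of them truly have Y.
patients = [random.random() < BASE_RATE for _ in range(N)]

# Rule 1: the mechanical decision aid -- always predict Y.
rule_correct = sum(1 for has_y in patients if has_y)

# Rule 2: the same rule, but overridden at random 20% of the time.
intuition_correct = 0
for has_y in patients:
    prediction = True                      # start from the aid's prediction
    if random.random() < OVERRIDE_RATE:    # "this patient is an exception"
        prediction = False
    if prediction == has_y:
        intuition_correct += 1

print(f"Decision aid accuracy:  {rule_correct / N:.1%}")       # ~95%
print(f"Aid + random overrides: {intuition_correct / N:.1%}")   # ~77%

Letting the doctor declare an exception one time in five drags accuracy from about 95 percent down to about 77 percent, because at a 95 percent base rate a random override is right only one time in twenty. That is the whole debiasing argument in a few lines of arithmetic.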

This is hardly restricted to doctors. Every profession resists being told that there is a standard way to do things, that a cookie cutter can cut better than their skilled hand. Journalists famously hate the "inverted pyramid" style of writing a news story, even though it really does seem to work better than anything else; it's boring to write, and leaves no room for individual style. Teachers don't like "teaching to the test" or rigidly programmed phonics curricula, even though the latter produce measurably better results than all but the very best teachers. Unfortunately, for many of us, it may be time to welcome our new robot overlords.

Megan McArdle is a columnist at Bloomberg View and a former senior editor at The Atlantic. Her new book is The Up Side of Down.
