PAGING DR. WATSON?
In March, Jonathan Cohn examined whether doctors will become obsolete as technology revolutionizes—and increasingly automates—health care. Sherlock Holmes, the master of observation, had an intellectual catalyst in the comparatively prosaic and empathetic Dr. Watson. Physicians, at their best, are observers, drawing clues from a patient’s history, physical exam, and lab results. With the ever-expanding tome of medical knowledge driving further subspecialization, there are, invariably, knowledge gaps. Fortunately, as demonstrated by Jonathan Cohn’s article, computers can help fill these gaps and free physicians to actually treat the patient (which has, in many patients’ eyes, become an afterthought to their overworked, robotic doctors). It’s time physicians embrace their own Watson. Sir Arthur Conan Doyle, himself a doctor, would surely agree.
Marshall Dines, M.D., M.B.A.
Los Angeles, Calif.
Automation is the key disruptive innovation needed to improve care and lower medical costs? Really? Although automation can often guide smarter clinical decisions and steer dollars more efficiently through the maze of insanely complex reimbursement rules, it isn’t likely to be the solution to our health-care woes—not least because human interaction and compassion are essential to effective healing, and cannot be automated.
Robots, however, may be the only ones willing to deliver care in the future, because doctors are opting out. Data show that disaffection among physicians is at an all-time high: in one 2012 study, nearly half of 5,000 surveyed physicians said they hoped to be out of medicine within five years; nine out of 10 said they would not recommend entering the profession. Their reasons? Uncertainty and dysfunction in the system, not fear of automation. From many physicians’ perspectives, medicine is no longer a profession but a commodity; its central purpose is not improving the health of patients but maintaining the health of the system. Today’s doctors and patients often do not develop meaningful, trusting relationships but merely conduct transactions (“billable events”).
Rather than replacing physicians, perhaps a more helpful aim for automation would be to untangle the current information-and-payment mess, to allow doctors to deliver care more effectively and efficiently. One hopes that our resource-rich country, which has arguably the world’s best medical training and capacity for innovation, can regain its footing and refocus to serve the needs and best interests of patients.
Frank M. Reed, M.D.
Missoula, Mont.
The increasing use of computer-based medicine (CBM) is rarely driven by providers. A considerable impetus is political diktat, thanks to bountiful donations from rich Silicon Valley enterprises to political campaigns. The introduction of CBM on a broad scale may well result in better clinical outcomes, but only at a significant financial cost. Robots will broaden the differential-diagnosis list available to clinicians and thus, given the current and seemingly uncontrollable medical-malpractice environment, compel further diagnostic testing, with the concomitant costs and risks of false positives. False or misinterpreted lab results will then spawn further diagnostic testing that is often dangerous, not to mention costly.
Barry Kisloff, M.D.
Pittsburgh, Pa.
I see The Atlantic has taken its monthly trip into medical cuckoo-land. Yes, of course, computers can do anything, including hit home runs and sink basketball shots, and can do these things better than people can do them. However, the idea is a long way from reality. When I was a young man in my fellowship 35 years ago, one of the other fellows in the program decided to drop out—he felt that computers would soon make doctors redundant. So this idea has been around awhile. The problem is not only that programming computers to practice medicine is very difficult; it is also that doctors spend little time just pondering diagnostic possibilities. Most of their time is spent on data entry and patient assessment.
The real potential addressed in this article has nothing to do with computers. Cohn proposes that cost savings can be realized by using less-well-trained personnel than doctors. Well, we can do that right now. In fact, the way Europeans get by so cheaply is by giving their doctors no more training than an American nurse-practitioner receives. Just start calling nurse-practitioners doctors and forget the computers.
People who believe that a computer will not miss diagnoses fail to understand how much serendipity is involved in hitting upon the right diagnosis in the first place.
Diodotus
TheAtlantic.com comment
WHY OBAMA REALLY MATTERS
In March’s “The Emancipation of Barack Obama,” Ta-Nehisi Coates posited that the president’s reelection matters even more than his election because it repudiates the history of violence and legal maneuvering designed to strip blacks of privileges. A reader offers another reason. For a long time, many people—myself included—worried that if Obama was voted out of office, the Democratic Party would not nominate another black person for the foreseeable future. That’s one reason many of us wanted Obama to win reelection, beyond the political and policy implications that are important in their own right. The election of Barack Obama was a milestone in the history of race relations in this country, but his reelection was probably necessary for the message to fully sink in. And in a way, it was ideal that his victory wasn’t some foregone landslide—the fact that it surprised so many people has probably helped serve as a wake-up call.
Kylopod
TheAtlantic.com comment
EDUCATION AND ECONOMICS
In March’s Chartist dispatch, Nicole Allan and Derek Thompson studied “The Myth of the Student-Loan Crisis.” The authors try to justify the 150 percent increase in college costs since 1995 and the 300 percent jump in college-loan debt since 2003 with a variety of misleading graphs and statistics.
First of all, comparing the return on a college degree to the return on stocks or on gold is not valid. You can invest money in the stock market or gold and let the investment do the rest. With college, you have to invest money and put in years of work, while at the same time giving up other pursuits that may have economic returns, such as starting a business. And after college, the return on your investment will, to a large extent, depend on the ongoing effort that you put in throughout your career.
The authors correctly point out that a college education, as well as a high-school education, can be very important to an individual’s success. So I am thankful that our local K–12 schools have kept their costs in line with inflation and do not use the importance of their work to justify a similar 150 percent increase in costs and 300 percent increase in local debt. In the long run, managing costs is essential to making education accessible and sustainable for future generations of students.
Brian Schneider
Menomonee Falls, Wis.
“The Myth of the Student-Loan Crisis” presented an infographic on college debt that was long on graphics and short on info. Yet even where the authors are committed to denying there is a crisis, the data poke through the happy-talk narrative.
First, as the authors note, students attending Harvard leave with less debt, on average, than students attending other schools, which is to say that the current system works better for elites than it does for the rest of the population. (Who could have predicted such a thing?)
Most of the numbers on that page, in fact, are skewed by the elites. The current system works so well for students in the top 5 percent or so of institutions that talking about averages masks problems of affordability and quality for the 10 million or so students in the bottom half of the rankings. The crisis in student debt isn’t at Harvard. It’s at Houston Community College and the University of Phoenix.
Then there’s the suggestion that poor people might be “scared off” by educational loans. Not to put too fine a point on it, but college loans have become toxic, offering the debtor the same kinds of confusing choices that characterized the U.S. mortgage market. As with mortgages, many of the available options are, in the long term, bad for the debtor, causing interest payments to balloon later. Unlike with mortgages, however, the borrowers remain liable even if they go bankrupt. Poor families are right to be scared. Middle-class families should be as well.
The authors mention, then ignore, the 300 percent increase in outstanding debt in the past decade, choosing to show only a snapshot of cost and value, rather than the trend over time (perhaps because the trend is so grim). The average cost of a bachelor’s degree rose faster than inflation last year, while its average value fell. This has happened every year of this century; every figure in the Chartist will be worse next year.
These implacable trends are gradually cutting off a growing cohort from a good but affordable college education. It seems odd not to call this a crisis.
Clay Shirky
New York, N.Y.
Nicole Allan and Derek Thompson reply:
Clay Shirky is right about so much, it’s remarkable that he reached the opposite conclusion we did. He argues that the crisis in student debt is concentrated at non-elite schools, including for-profit colleges like the University of Phoenix. We agree—for-profit students account for almost half of defaulters. But because they make up about one in eight undergraduate and graduate students overall, their impact on the national student-debt picture is easily exaggerated.
Shirky notes that student loans aren’t like mortgages. He’s right, and that’s a good thing. Income-based repayment policies and 20-year forgiveness plans make it unlikely that we’ll see anything like a student-loan version of the 2007 housing crash.
Shirky also says that college doesn’t pay off as much as it used to. But the college premium—the “bonus” workers get from attending school—has never been higher. So we find it reckless to tell young people that they have a better chance of earning a living wage if they cut short their education. Like Shirky, we hope that reforms will slow tuition inflation (which is fodder for another infographic altogether). In the meantime, let’s devote our energies to boosting cheap, high-quality schools and shaming expensive, poor-quality ones.
INVENTING MARILYN
In March, Caitlin Flanagan delved into Marilyn Monroe’s True Hollywood Story. My only disagreement with Caitlin Flanagan’s fascinating essay is that she credits the wrong individuals with creating the iconic Marilyn Monroe. She states that Norman Mailer and Elton John resurrected the Marilyn we now all “know.” In fact, Andy Warhol “invented” Marilyn, as early as the month of her death, August 1962.
The Marilyn image most of us conjure up is the subject of Warhol’s many paintings and prints, based on a photograph of Marilyn taken by Gene Kornman as a publicity shot for the 1953 film Niagara. The Marilyn paintings and prints were shown in museums and galleries in the U.S. and abroad throughout the 1960s, years before Norman Mailer’s book. In addition, Warhol’s image of Marilyn quickly entered the world of advertising. As an art student in the late 1960s, I witnessed more art using Warhol’s Marilyn image than anything before or since: on quilts, paintings, ceramics, jewelry, prints. In short, it was everywhere. Along with Warhol’s Campbell’s Soup Cans, the Marilyn images practically defined pop art. She was a visual icon by the mid‑’60s. Even Clare Boothe Luce wrote an exposé of Marilyn in Life magazine in 1964.
The cult of Marilyn was born in the ’60s, largely because of the Warhol image. He captured her fragility, vulnerability, and commodification, which became the stuff of romantic stories and myths almost immediately. I know—I was there.
Patricia Nelson
Professor of Art, Ball State University
Muncie, Ind.
FINDING OUR WAY
An astute reader connects the dots between a January/February dispatch about a belated effort to name anonymous rural roads, and the same issue’s technology column, about Google’s effect on mapping and travel. I loved the juxtaposition of Deirdre Mask’s “Where the Streets Have No Name” and Michael Jones’s talk with James Fallows in “The Places You’ll Go.” It is rare to find two voices in the same magazine that illustrate the far edges of a topic like modern cartography.
The complexities of privacy and the infiltration of technology have redefined how we find each other. One of Google’s camera cars must be taking to the West Virginia back roads to get pictures of the new Dog Bone Drive. Even with satellite technology, it still might make a wrong turn. Perhaps some of the good old boys will help it. Or maybe the back country should be one of the places where you can still get lost.
Maps and addresses are a small part of connectivity. The places where you can’t find your way without engaging a person are diminishing. We might have found our way, but what have we lost in the journey?
Ralph Walker
Bloomfield, N.J.
Q: What day most changed the course of history?
In response to our new back-page feature, readers answered the Big Question via email, online, and on Twitter.
753 B.C.: Rome is founded.
The birth of Jesus
May 29, 1453: The fall of Constantinople
March 15, 1493: Christopher Columbus returns to Spain from his first voyage to the Americas.
Oct. 31, 1517: Martin Luther nails his Ninety-Five Theses to a church door.
Aug. 27, 1859: Drake’s oil well is tapped, setting off the oil and growth booms we’ve been on since.
Nov. 24, 1859: Charles Darwin publishes On the Origin of Species.
Feb. 24, 1921: The birth of Abe Vigoda
Sept. 28, 1928: Alexander Fleming discovers penicillin.
July 16, 1945: The first atomic-bomb test is conducted near Alamogordo, New Mexico.
Oct. 14, 1947: Chuck Yeager breaks the sound barrier.
May 10, 1960: The FDA approves Enovid for use as a birth-control pill.
Jan. 22, 1970: The first passenger flight of the Boeing 747 leaves New York for London.
Nov. 7–Dec. 12, 2000: George W. Bush is elected president.