A lot of conventional wisdom about software is mistaken. It's probably a mistake to try to tackle these misconceptions in too much detail in a blog post, but my time here is limited, and perhaps a short catalog of common errors will help some of you think more critically about the programs you use every day.
Results are what matter
We all know that small computers have transformed the workplace. The world of The Apartment and Mad Men has vanished. Companies know that they wouldn't be more profitable if they discarded their PCs and hired lots of secretaries and typists. Yet the productivity gains from using computers have been remarkably hard to identify.
It turns out that a lot of the work we do with business computers involves dressing up our ideas to impress managers and clients. Where a typed page was once sufficient, we now dispatch an elegantly typeset document and a deck of presentation slides. This might not help the company serve customers, but it helps individuals impress their managers.
Much of the real contribution that software makes to your thinking happens in the course of the work. What may matter most in the long run are the ideas you discover while preparing a management report or a client presentation. Process matters.
Software should be polished
We spend too much time perfecting the way our programs look, just as in the previous century we spent far too much time perfecting our books. We are accustomed to a very high standard of editing and typesetting in publishing, a standard that originally was possible only because a vast number of educated women were for the first time entering the work force and were, for a time, willing to accept very low wages. Today, we look for the same sort of surface polish in our software.
All this polish comes with substantial costs. Some costs are evident because they appear in the price. Others are hidden. How do you measure the cost of terrific software that never gets written, or that remains locked in a laboratory?
Software developers have long struggled to reduce the riskiness of development, its delays and failures, by working to build a software factory that would make software construction more systematic. This hasn't worked well. "We software creators woke up one day," I wrote in 2007, "to find ourselves living in the software factory. The floor is hard, from time to time it gets very cold at night, and they say the factory is going to close and move somewhere else. We are unhappy with our modern computing and alienated from our work, we experience constant, inexorable guilt."
We've been here before. In 1853, John Ruskin inserted a long aside in The Stones of Venice to advise the Victorian consumer and art buyer. What sort of things should one buy? Ruskin suggests the following:
1. Never encourage the manufacture of any article not absolutely necessary, in the production of which Invention has no share.
2. Never demand an exact finish for its own sake, but only for some practical or noble end.
3. Never encourage imitation or copying of any kind, except for the sake of preserving records of great works.
Brush marks are not signs of sloth, and pixel misalignments are not indicators of moral laxity. The software creator should make intention clear, but excessive polish is slave's work unredeem'd.
Software should be friendly
The program is not your friend. It does not understand you, or care about you.
Computers should be intuitive
We are often told that computers should be information appliances, that you don't need to know anything about what's under the hood. Many things we want to do, however, are far from simple; the real work of real people is surprisingly complex. Learning to use tools well sometimes takes time, but you are only a beginner once, and you may use your tools every day.
Programs should never crash, hang, or do surprising things
Homer nods, and most of us aren't Homer. Human collaborators sometimes make mistakes, lose things, or drop them on the floor. With computers as well as people, take sensible precautions and hope for the best.
On her first trip to New Mexico, Linda was astonished to find that National Park trails frequently ran close beside spectacular cliffs, with no guard rails in sight. Back east, you'd put up a guard rail and spoil the view -- or close the trail entirely because it might be dangerous. If we do not trust users, we deprive people of abilities they need.
No one wants to read on screens
People still say this, even though we spend our days reading and writing on the screen. It is now clear that the future of serious reading and writing lies on screens and on the displays that will replace them.
Hypertext is distracting; the Internet is ruining kids today
Life is distracting. Ideas are complicated and densely interconnected. There is too much to do and we have too little time. Kids know this, too, and make choices accordingly.
Computers don't wear out
Computers you depend on last three years, laptops a bit less. A three-year-old computer, even if in pristine condition, is sufficiently obsolete that replacing it is nearly mandatory. If you don't use your computer much, or you want to use an old computer for an occasional chore, you can keep it for a few years more.
Web pages should (or can) say one thing, and should mean what they say
Dreams of the semantic Web often rest on the assumption that we can (and will) express the meaning of a Web page in a simple and concise format. Everything we know about writing, everything we know about meaning, suggests this is a fantasy.
In despair over what they saw as the intellectual dishonesty of the Bush administration and the epistemic closure of the American Right, Jed Buchwald and Diane Greco Josefowicz wrote The Zodiac of Paris. It describes once-famous controversies in the early 19th century over an Egyptian inscription that suggested the world was older than Genesis allows.
The book is, in a very real sense, about the lies Curveball told Colin Powell, but that meaning is not on the page.
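To make the claim concrete, here is a minimal sketch, in Python, of the kind of machine-readable summary the semantic Web envisions for a page about this book. The field names loosely imitate Schema.org-style metadata, and the values are purely illustrative; nothing here is drawn from any actual markup.

```python
# A hypothetical machine-readable summary of The Zodiac of Paris,
# loosely in the style of Schema.org metadata. Illustrative only.
book_metadata = {
    "@type": "Book",
    "name": "The Zodiac of Paris",
    "author": ["Jed Buchwald", "Diane Greco Josefowicz"],
    "about": ["Egyptian zodiac", "Egyptology", "biblical chronology"],
}

# Every field can be checked and verified, yet the meaning the book
# carries -- the lies Curveball told Colin Powell -- appears in none of them.
for key, value in book_metadata.items():
    print(f"{key}: {value}")
```

Each entry can be perfectly accurate, and the page's meaning can still escape the markup entirely.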
Steve Jobs matters
The American business press is obsessed with CEOs. If a stock rises, the firm's leaders are brilliant fellows. If shares plummet, the CEO must be a buffoon. Steve Jobs, once regarded as a fool, is now hailed as the one true software visionary, the indispensable force.
Jobs is, in fact, a good software critic and an executive who is willing to trust his judgment and endure the consequences.
The customer, the usability lab, or the marketplace will tell you what is good; crowds are wise
From the user-generated content of Wikipedia to mass recommendation systems and user-written product reviews, my colleagues assume that crowds are wise and that, on average, sensible opinions prevail. That this is often true is fortunate, but crowds can be wildly wrong.
Almost all software designers believe that customers, usability studies, or the marketplace will reveal what works and what doesn't, but everything we know about art (and software is an art form) argues this cannot be right. Best-seller lists sometimes contain good books, but they list bad books aplenty. Popular movies are not always great.
We know that an intelligent critic can sometimes recognize a great work when she sees it. No individual's taste or judgment is infallible, but the marketplace is often wrong, too.