Whether it's the 1930s or the 2010s, depressions are the only casualties in a currency war
I don't know how it compares to peeing in bed, as one anonymous senior Fed official put it, but a currency war is one of the surest ways to end a global slump. Despite what you may have heard, it was a big part of what stopped the vicious circle of the Great Depression.
Currency wars are the best kind of war. Nobody dies, and everybody can recover, as long as everybody plays along. Here's how it works. One country devalues its currency -- in other words, prints money -- which, in a time of weak global demand, puts pressure on other countries to do the same, lest they lose out on trade. Then another country devalues, and so on, in a cascade of looser money. It's the invisible hand pushing for expansionary monetary policy when it's needed most.
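To see that cascade in miniature, here's a toy simulation -- my own illustration, not anything from the economics literature, with made-up names and numbers. Three countries start at parity; one devalues, and any country whose currency is left looking "too strong" devalues in turn until no one has an edge:

```python
# Toy model of a devaluation cascade. Everything is illustrative:
# money supplies are indexed to 100, "devaluing" means printing 20%
# more money, and a currency's value is proxied as 1 / money supply.

def relative_value(supplies, name):
    """A currency's value relative to the average of all currencies."""
    avg = sum(1.0 / s for s in supplies.values()) / len(supplies)
    return (1.0 / supplies[name]) / avg

supplies = {"Britain": 100.0, "US": 100.0, "France": 100.0}

supplies["Britain"] *= 1.20  # Britain devalues first

# Any country whose currency is now relatively strong devalues in turn,
# lest it lose out on trade.
while any(relative_value(supplies, n) > 1.001 for n in supplies):
    for n in supplies:
        if relative_value(supplies, n) > 1.001:
            supplies[n] *= 1.20

for n, s in supplies.items():
    print(f"{n}: money supply {s:.0f}, "
          f"relative value {relative_value(supplies, n):.2f}")
```

Every country ends with a money supply about 20 percent larger and a relative exchange rate right back where it started: no lasting trade edge for anyone, but a round of monetary stimulus for everyone. A cartoon, to be sure, but it's the invisible-hand logic in action.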
But there are a few caveats. For one, a currency war only makes sense during a global depression, when short-term interest rates are mostly stuck at zero. It's about boosting monetary stimulus when conventional methods are out of ammo. For another, devaluing forever (à la China) is not a sustainable growth strategy. It might make sense for developing nations to subsidize export industries early on, but, eventually, this will only cause imbalances to build up while robbing the domestic population of purchasing power. And finally, there's a risk that a currency war could turn into a trade war. In other words, countries might retaliate against expansionary monetary policy not with expansionary monetary policy of their own, but with tariffs. Presumably that's what our silver-tongued senior Fed official was getting at with this head-scratcher of a quote:
Devaluing a currency is like peeing in bed. It feels good at first, but pretty soon it becomes a real mess.
This fear of a currency war begetting a trade war is certainly serious, but it's made to sound more serious thanks to some bad history. Here's the erroneous story you might have heard (especially now that Japan's talk of more aggressive easing has revived fears of a currency war):
After the Great Crash of 1929, countries abandoned the gold standard and devalued their currencies in a beggar-thy-neighbor race to the bottom. This currency war turned into a trade war, with countries eventually resorting to tariffs and counter-tariffs as they tried to grab a larger slice of an ever-shrinking pie of demand. The consequent collapse in world trade is what made the Great Depression so great, and set the stage for the trade war to turn into an actual war.
Scary stuff. But not quite true. The reality is that the trade war started before the currency war, and the latter jump-started recovery wherever it was tried. The infamous Smoot-Hawley tariff in the U.S., the first salvo in the trade war to come, was passed in June 1930, more than a full year before any country devalued its currency. It wasn't until September 1931 that Britain abandoned the gold standard ... and that's when things get a bit complicated. It's hard to accuse Britain of "competitive" devaluation here, because it had no choice but to devalue; it had simply run out of gold. Nonetheless, other countries responded to Britain's increased competitiveness by raising their trade barriers; in this case, the currency war, such as it was, did exacerbate the ongoing trade war, as Gavyn Davies of the Financial Times points out.
But then a funny thing happened. The punishment for Britain's economic weakness was a recovery. Ditching gold gave Britain (and everybody else who did so) the freedom to pursue more aggressive monetary and fiscal policies than the "rules of the game" of the gold standard had allowed.* As the chart (via Brad DeLong) from Barry Eichengreen's magisterial work on the Depression, Golden Fetters, shows, recovery followed devaluation everywhere. There was no reward for financial orthodoxy in the 1930s. The countries that stayed on the gold standard the longest -- the so-called Gold Bloc, which included France, Belgium, and Poland -- were the last to begin growing again. In other words, the currency war didn't deepen the depression; it ended it.
And that brings us to one last, stupid question. How did beggar-thy-neighbor policies kick-start growth even after world trade had already collapsed? In other words, how did stealing a trade advantage help so much when there wasn't much trade to steal? Well, it's not entirely, or even mostly, about stealing trade. Indeed, as Scott Sumner points out, the U.S. trade balance actually worsened in 1933 after FDR took us off gold, even as the economy quickly reversed its death spiral and began a virtuous cycle. It's easiest to frame devaluation as grabbing demand from abroad, but it's really about increasing demand at home. Devaluation means printing money, and more money during a liquidity trap means more demand, period. It also allows more stimulus spending than a fixed-exchange-rate system (like the gold standard) would. The next time you hear someone lamenting the "destructive devaluations that followed the Great Depression," remember to ask them -- what was so destructive about ending the most destructive depression in modern history?
The only thing we have to fear is fear of currency wars itself. Depressions are the only casualties in these kinds of conflicts.
* There were two exceptions. The gold standard did not constrain looser monetary policy in the U.S. and France in the early years of the depression: both had more than enough gold to back more credit growth, but chose instead to sterilize their gold inflows, out of fear of nonexistent inflation in the face of actual deflation. This stockpiling drained everybody else of gold, making it impossible for other countries to stay on the gold standard. Even the U.S. and France eventually had to abandon it to reverse years of deflation.