In the Beginning, there was the Bomb. Humankind learned how to split atoms, and then we learned how to contain those splitting atoms just long enough to make them explode. And then the United States dropped two bombs on Japan.
The bombs of 1945 represented the advent of a new age, in which nuclear weapons would lurk behind even the smallest conflicts. But they also brought to an end centuries of assumptions about war; as Bernard Brodie wrote a decade after Hiroshima and Nagasaki, the sheer power of nuclear arms meant “the end of strategy as we have known it,” because no political goal could be matched to the devastation of a nuclear war.
This new age also created a new priesthood of nuclear experts and strategists, people who dealt every day with the arcane and the unthinkable. (For a brief time, I was one of them.) These experts advised the policy makers who would have to make terrifying decisions; their terms and concepts—assured destruction, first strike, secure second-strike capability—would, especially during moments of crisis, make their way into the public mind.
When the Cold War ended, we collectively decided to stop thinking about things like nuclear strategy. So did governments; as Michael Mullen, then the chairman of the Joint Chiefs, said in 2010: “We don’t have anybody in our military that does that anymore,” because we thought we no longer faced such Cold War dilemmas. “We were wrong,” Mullen lamented.
And so I’m offering a quick primer on a few key nuclear concepts. Please note that I am not predicting anything. Rather, I am hoping to reacquaint laypeople with things that I too, in my optimism, had hoped we could forget.
Strategic, theater, and tactical nuclear weapons
States usually categorize nuclear weapons, especially in arms-control agreements, by the distances they travel and their intended uses.
Strategic nuclear weapons are meant to travel long distances—by treaty, we once agreed with the Soviets and the Russians that long meant more than 5,500 kilometers—and to strike targets of “strategic” importance: enemy nuclear forces, leadership, and even cities and infrastructure.
Theater nuclear weapons are meant to be used in a “theater,” such as Europe or Asia, as a means of affecting the outcome of a war in that region. Targets in this category would include assets such as airbases, regional command centers, and, in some cases, even cities. Both the U.S. and the U.S.S.R. saw theater-range weapons as highly destabilizing, because they could provide the fateful bridge between a regional nuclear conflict and all-out nuclear war; the 1987 Intermediate-Range Nuclear Forces (INF) Treaty banned both parties’ ground-launched missiles in this class. The Trump administration exited that treaty in 2019.
Tactical nuclear weapons are also called “battlefield” nuclear weapons. They are smaller nuclear arms—but remember, this is “small” in the context of “a small nuclear weapon”—meant to affect the course of a particular battle. Such weapons (those falling below the 500-kilometer floor of the now-defunct INF Treaty) might be aimed at tank formations, for example, to blunt a massive attack.
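The three categories above can be summarized in a minimal sketch. The 500- and 5,500-kilometer cutoffs come from the treaty figures the article cites; the function name and example ranges are illustrative, and real arms-control categories are set by treaty text, not by range alone.

```python
def categorize_by_range(range_km: float) -> str:
    """Rough Cold War-era bucketing of a delivery system by its range."""
    if range_km > 5_500:   # beyond the old strategic threshold
        return "strategic"
    if range_km > 500:     # the band once covered by the INF Treaty
        return "theater"
    return "tactical"      # "battlefield" weapons

print(categorize_by_range(10_000))  # an ICBM-class range -> "strategic"
print(categorize_by_range(2_000))   # an intermediate range -> "theater"
print(categorize_by_range(120))     # a short-range rocket -> "tactical"
```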
Over the past 70 years, nuclear strategy has revolved around the central question of what nuclear weapons actually do as instruments of state power, and whether they serve a purpose beyond deterring the use of nuclear weapons themselves. Can they be used to fight wars? Or do they merely exist to keep regional wars from becoming catastrophic global nuclear wars?
The first attempt to square this nuclear circle came in the mid-1950s, when the United States faced the frustrating reality that a decisive edge in nuclear weapons wasn’t helping much in the struggle with Soviet Communism. Our nuclear advantage hadn’t stopped the Berlin blockade, the Greek civil war, the fall of China, the war in Korea, or other near-death experiences for democracies after World War II.
In 1954, Secretary of State John Foster Dulles came up with the idea of threatening to use America’s “great capacity to retaliate, instantly, by means and at places of our choosing,” to deter Communist aggression. Instead of facing down the Communists in every theater in the world, we would “massively retaliate” with our nuclear arsenal for any Soviet misbehavior.
This, of course, was impossible. Although we were committed to our own defense and that of our allies, we weren’t seriously going to respond to any provocation with the nuclear destruction of Leningrad or Vladivostok.
No one liked massive retaliation. Even President Dwight Eisenhower backed away from Dulles in the face of serious public criticism. Ultimately, massive retaliation wasn’t a strategy so much as it was a sign of desperation: We have these amazing weapons, and yet they don’t do anything.
So the problem remained: How could the United States credibly defend its allies against superior Soviet (and, in Asia, Chinese) forces? The answer was the concept of extended deterrence. The Soviets knew that to attack the American heartland with nuclear arms would mean utter devastation. Under extended deterrence, the Americans would treat our allies as indistinguishable from ourselves, and we would defend Paris or Amsterdam as we would New York or Chicago. (You might see references to the “nuclear umbrella” as a description of how the U.S. arsenal protects friends beyond North America.)
But what if the Soviets went ahead and marched into Europe instead of launching nuclear weapons? A madman’s threat to start blowing up Soviet cities in response—that is, to engage in massive retaliation—was ghastly and immoral. And from the point of view of deterrence, it was even worse: It wasn’t credible.
The answer to this dilemma in the 1960s was a NATO policy—one still in effect—called flexible response. During the Cold War, NATO was outgunned. It could not win a major conventional war in Europe against the U.S.S.R. Instead, the U.S. and NATO promised high risks of escalation. If you invade us, we told the Soviets, we’ll hold you off as long as we can with any number of conventional options. But we reserve the right to escalate the conflict—and even to use nuclear weapons first, if that’s what it takes to save ourselves and our allies.
If NATO, for example, were to face gigantic columns of armor, we reserved the right to strike that armor with tactical nuclear arms. If Soviet echelons were massing in rear areas, perhaps near the U.S.S.R. itself, we reserved the right to strike those echelons, even if it meant a wider war. And if the Soviets threatened retaliation by going to theater or strategic nuclear war—so be it, but NATO made clear that the alliance was ready to respond in kind.
This strategy did not require the U.S. or NATO to be run by lunatics. It was, and remains, a threat to drag out a war so long, and at such a price, that the situation becomes unstable and far more dangerous to Moscow. During the Cold War, Moscow’s “allies” hated the U.S.S.R., and its entire war plan for a conflict in Europe depended on conquering quickly, without the risk of either internal political opposition or a nuclear exchange.
Flexible response was, in effect, a warning that no Soviet military leader could promise a quick and nonnuclear victory in Europe.
(The Russians, by the way, have now adopted something like their own “flexible response,” reserving the right to use nuclear weapons to “de-escalate” situations that threaten them. The difference is that NATO’s policy has always been to keep nuclear options in self-defense; Russia’s policy is, to say the least, less clear.)
No First Use
Flexible response and extended deterrence explain why the United States has refused to promise never to be the first to use nuclear weapons. A strategy that relies on the threat of escalation cannot also close off that option.
Other nations have made such pledges, but remember that a pledge of no first use is only that: a pledge. No mechanism can guarantee that nations will abide by such a promise. There are good arguments both for and against declaring a policy of “no first use.” (I am in favor of such a declaration.) But bear in mind that no one can enforce such a policy.
The “triad” and Mutual Assured Destruction
Until about 1960, the superpowers relied on relatively small nuclear arsenals whose weapons would have to be delivered by bombers. A nuclear war, U.S. and Soviet planners reasoned, would be terrible but survivable, and perhaps even “winnable” in the sense that one side would have to capitulate after enduring enough damage.
The 1960s brought nuclear arms into the missile age. Bombs that once would have been delivered by loading them on aircraft for hours of risky flight over hostile territory—think of the B-52 crew in Dr. Strangelove—would now reach their targets in minutes. Bombers, submarines, and intercontinental ballistic missiles (ICBMs) would form a triad that could survive a first strike from the enemy and then hit back. This ability is now commonly called a “secure second-strike capability.”
These technological advances were wedded to massive stocks of nuclear warheads, and soon even the most bellicose hawk could do the math: A full nuclear exchange meant the complete destruction of both sides (and most of the world). Even the most elegant war plans would result in millions killed instantly, and billions more dying later from radiation or famine. There would be no “winner” in such a war. The enemy’s obliteration and ours were inevitable: mutual assured destruction, or MAD.
This reality led the Americans in the late 1960s to propose MAD to the Soviets as a policy. Let us recognize, we said to the Kremlin, that there is no counting on victory here. That means we, and you, will not try to develop defenses, a position the Americans held until President Ronald Reagan initiated a strategic-defense program in 1983. We will not make provocative investments in civil defense. We will proceed in every conflict between us—conflicts that are inevitable in our competition—with the duty to do everything possible to avoid a global nuclear war.
The Soviets, at first, wanted no part of this suicide pact. They believed in defenses, and claimed that even after a nuclear war, the superior nature of Soviet social and economic organization—no, really, they said this—would allow them to recover first; they would one day preside over the final burial of capitalism.
In reality, however, Soviet leaders knew that an all-out nuclear war was unwinnable. (Indeed, one reason they split with their Chinese comrades in the 1960s was that the Chinese thought the Soviets were too fainthearted about nuclear war.) They put on a brave face, but they were no more eager for a nuclear showdown than we were.
By the time President Richard Nixon negotiated the first major arms treaties with the U.S.S.R., in 1972, the Kremlin knew that MAD was a fact, whether they liked it or not. And in 1985, Reagan and the Soviet leader Mikhail Gorbachev jointly declared: “A nuclear war cannot be won and must never be fought.”
This is still the position, at least in public, of both governments.
Launch on Warning and Ride Out
One of the most dangerous and destabilizing things about nuclear weapons is that they are inherently offensive; that is, they need to get out of their silos or off the runway before the incoming strike lands. This fear of a nuclear ambush is why some weapons to this day are kept on alert and ready to launch on warning, a posture in which nuclear missiles and bombers can escape destruction by heading to their targets the moment an enemy launch is detected. (Nuclear-missile submarines can afford to wait, as they hide in the depths of the ocean and hope they have not been spotted by the enemy ships that will be hunting them down.)
The problem is that a policy of launching on warning leaves almost no time for a decision. Strategic weapons from Russia or China would land on U.S. targets in less than half an hour; missiles launched from submarines off American shores would get here even sooner. The president would have to make a decision to retaliate with less than 10 minutes of warning, if that.
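The squeeze described above can be shown with back-of-envelope arithmetic. All of the numbers below are illustrative assumptions, not official figures: a rough 30-minute ICBM flight time, plus a few minutes each for detection, assessment, and transmitting a launch order.

```python
def decision_window(flight_min: float, detect_min: float,
                    assess_min: float, execute_min: float) -> float:
    """Minutes left for a presidential decision once detection,
    assessment, and order-execution time are subtracted."""
    return flight_min - detect_min - assess_min - execute_min

# Assumed ~30-minute ICBM flight, 3 min detection, 5 min assessment,
# 10 min to transmit and execute a retaliatory order:
print(decision_window(30, 3, 5, 10))   # about 12 minutes, at best

# A submarine-launched missile fired close to shore compresses everything:
print(decision_window(12, 3, 5, 10))   # negative: no time at all
```

The point of the sketch is not the specific values but the structure: every fixed cost comes out of a flight time the president does not control.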
The chances of a mistake are high, and the consequences of such an error incalculable. Over the years, both the Soviets and the Americans faced false alarms. In one case, in 1983, war was averted when a Soviet air-defense officer refused to believe an attack warning. (It was indeed erroneous.) In another case, in 1979, National Security Adviser Zbigniew Brzezinski was roused from bed in the middle of the night and told that a massive Soviet strike was incoming; he was about to wake President Jimmy Carter when NORAD officials realized that they were looking at a training tape, not a war.
The alternative to launching on warning is not attractive, however: It involves waiting for nuclear weapons to land on American targets as final confirmation of an actual attack. We would ride out the storm of a first strike, assess, and then strike back. No one is keen to take the first nuclear punch, and so this has never gotten very far as a proposal.
Sadly, everything old is new again. As anxiety-inducing as all of these terms can be, they’re even more unsettling when their meaning isn’t clear. Although obsessing over nuclear conflict is unhealthy and pointless, Americans should be able to understand these ideas and expressions when they appear in our public discourse, as they unfortunately will again.