There is no phrase in foreign policy as simultaneously compelling and suggestive of a goal beyond reach as never again. These words, which allude to the Holocaust, urge action in the face of atrocities. But they are most often honored in the breach.
Consider the recent record. At the end of February, the UN Security Council dithered for days over an ineffective ceasefire resolution while troops under the command of Bashar al-Assad, the president of Syria, murdered hundreds of civilians in Eastern Ghouta. Indeed, for seven years, the Assad regime and others have slaughtered civilians with impunity in a civil war that has claimed the lives of more than 500,000 people. Meanwhile, Saudi Arabia’s indiscriminate bombing campaign in Yemen has created a humanitarian crisis. Burma’s ethnic cleansing campaign has turned over 650,000 Rohingya into refugees. Brutal ethnic and political violence has claimed tens of thousands of innocent lives in South Sudan. This is not a record that inspires confidence in never again.
Yet one does not have to reach too far back to find a moment when prospects for stopping atrocities looked brighter. In late 2008, as Barack Obama prepared to assume the presidency, a bipartisan task force led by Madeleine Albright, the former secretary of state, and William Cohen, the former secretary of defense, published a self-styled “blueprint” for prevention. Ending genocide and other mass atrocities, they postulated, was an “achievable goal.”
Or at least it could be. The report didn’t pretend to have all the answers. Messy hypotheticals, like whether the United States would proceed with military intervention on humanitarian grounds without UN Security Council authorization, as it did in Kosovo in 1999, went unaddressed. The report also didn’t answer whether America should take sides in civil wars where atrocities were being committed. The point of the blueprint, though, wasn’t to provide every answer. It was a call to action premised on the notion that with sufficient confidence, commitment, and help from like-minded friends, the United States could create a genocide-free world.
That vision found a receptive audience in Obama’s foreign-policy team, which included Samantha Power, who had written a Pulitzer Prize-winning book on U.S. inaction in the face of genocide. Her ideas heavily influenced the Albright-Cohen report, and she had high-level ties across the young administration.
As I recounted in a recent report for the Holocaust Museum, these ideas were soon put into action, and in a big way. Obama used part of his 2009 Nobel Prize acceptance speech to argue for the use of force, in exceptional circumstances, to stop mass atrocities. The National Security Council created a new position focused on preventing atrocities (a job I held before taking over for Power when she became ambassador to the UN). Most prominently, Obama issued a presidential directive in August 2011 that declared the prevention of genocide and other mass atrocities to be a “core national security interest” and a “core moral responsibility” of the United States. It also ordered the creation of an “Atrocities Prevention Board” of officials from across the government to oversee prevention policy.
Why did the Obama administration go so far out on a limb for a policy that was so ambitious and untested? In part, it was because these moves were designed to generate pressure on the administration: the more it publicized its commitments, the more difficult it would be to back away from them. But it was also the case that, back in August 2011, one could almost believe that the blueprint was working.
In 2010 and 2011, the United States helped lead successful multilateral initiatives to stave off mass violence during South Sudan’s independence referendum and Ivory Coast’s succession crisis. Invoking the “responsibility to protect,” a concept that the entire UN membership endorsed in 2005, the Obama administration won Security Council authorization for a military intervention in Libya to defend civilians threatened by Muammar Qaddafi. The United States was leading, and the Council was following; the international order was working to prevent mass atrocities.
But at that very moment, the international movement to end mass atrocities was running into big trouble. Libya was at the center of the story. While Western governments had initially suggested that the military coalition was not pursuing regime change, they pivoted mid-course to arguing that it was “impossible to imagine a future for Libya with Qaddafi in power.” As Russian officials made clear, this was not a precedent that Moscow was prepared to accept.
They made this clear just as Syria started to burn, and before the U.S. fully appreciated the significance of the change. As the Assad regime cracked down on peaceful anti-government protests, Obama declared that the time had come “for President Assad to step aside.” But backed by Iran and Hezbollah, Assad had far more staying power than the United States assumed. And while U.S. policymakers might have hoped that vivid images of Assad’s brutality would pressure Vladimir Putin, the president of Russia, into supporting a meaningful response from the Security Council, Moscow would not be shamed.
Thus, by the time Obama introduced his Atrocities Prevention Board in April 2012, his administration’s biggest successes in atrocity prevention had already been achieved, and the seeds of its most prominent failures had been planted. The U.S. government had helped break the Libyan state without knowing how to rebuild it (Obama has called this his “worst mistake”), and it was watching the Syrian state break the Syrian people without knowing how to make it stop.
These intractable challenges were not for the new Board to resolve, however: For the most part, it did not deal with crises like Syria, Afghanistan, and Sudan that were already receiving senior-level attention. Instead, the Board’s job would be to scan the horizon for places where the risk of mass violence was on the rise—Burundi, Burma’s Rakhine State, and the Democratic Republic of the Congo (DRC), for instance—and to jump-start policy discussions among the administration’s senior staff for situations where the United States had little or no policy.
Shaped by the Board, Obama’s second-term atrocity-prevention efforts yielded some benefits. Working together with partners, U.S. diplomacy, assistance, and economic statecraft helped keep Burundi and the DRC from exploding into violence over election disputes. Similar efforts supported French and UN peacekeepers who surged into the Central African Republic as it threatened to fracture. Pressure on Burmese central authorities probably helped contain violence for several years. But none of those efforts should prompt a victory lap. To varying degrees, all of the countries concerned remain racked by violence and instability. Some are now worse off.
With all this in mind: Is it time to give up on never again?
I would say no. Beyond the moral imperative for trying to end the world’s worst crimes, it is hard to imagine a time when it will stop being true that, as Obama noted in 2011, America’s “security is affected when masses of civilians are slaughtered, refugees flow across borders, and murderers wreak havoc on regional stability and livelihoods.” The misery and dislocation created by the crisis in Syria, and the impact it has had on the political foundations of the West, could hardly illustrate this point better. Yet we have also seen that a hard focus on early warning and preventive statecraft, while worth maintaining—to its credit, the National Security Council under President Donald Trump has tried to sustain the Board for this purpose—will not always be enough. So what should the next phase of U.S. atrocity prevention policy look like—whether under Trump or a future administration?
First, if the U.S. government is going to be in the business of atrocity prevention, then both Washington and its critics will need to value the sorts of outcomes that the United States helped produce in places like Burundi: frozen crises where the best that can be said is that catastrophic violence has, for now, been averted. Such outcomes are not the stuff of inspiration. But they can be first steps.
Pragmatism is also required. It’s important to accept that, sometimes, there’s a tension between the objectives of peace, justice, and democratic transition—especially in the early stages of conflict prevention and resolution. And sometimes, it really is important to start with peace. Insisting that Assad must go made de-escalation in Syria harder. Placing Qaddafi under the jurisdiction of the International Criminal Court complicated efforts to create a smooth exit for him.
Most importantly, the U.S. government has to confront the current reality at the UN Security Council. Gone are the days of what commentator Richard Gowan has called the “CNN effect,” when on-the-ground evidence of mass atrocities might have prodded the Council to take meaningful action. Russia’s dismissive reaction to the images flooding in from Eastern Ghouta is just the latest example.
Certainly, there is room for strengthening measures that can clearly be taken without the Council’s approval, things like targeted sanctions. Because of America’s economic power, its sanctions can bite, even when imposed unilaterally. Of course, they’re yet more effective when the European Union acts in parallel. Travel bans against abusive leaders with ties to the United States and Europe can also provide leverage.
But the tougher questions concern the use of force. Since 1945, most international lawyers have argued that Security Council approval is a prerequisite for using force to stop governments from committing mass atrocities against their own people. There are, however, important exceptions. The governments of Britain and Denmark hold that international law permits humanitarian intervention under extraordinary circumstances, while the United States has straddled the issue. Although it has used or threatened force for purposes of humanitarian intervention in northern Iraq, Kosovo, and Syria (on two occasions) since 1990, the United States has not justified those actions under international law. It has long feared establishing a new norm that could be abused, or fostering expectations that its military would act as the world’s anti-atrocity policeman. Moreover, Libya demonstrated that even a UN-authorized military intervention can go badly. Bombs and guns are blunt instruments that can always make things worse.
But while these concerns are legitimate, there remain circumstances when the threat or use of military force can be both necessary and effective. Obama’s credible threat of force against Syria in 2013 led Russia to make a deal for the removal of chemical weapons from the country, a deal codified in the conflict’s only legally binding Security Council resolution. And even international lawyers opposed to humanitarian intervention tend not to argue that the world was right to be passive in the face of the Rwandan genocide. Sometimes you just break the rules, is how some have explained their approach to me.
But that approach—which is, effectively, America’s—shows little respect for international law and manages to inhibit effective planning (nobody knows what is permitted) while suggesting limitless freedom of action (nobody knows what isn’t). Recognizing these problems, former State Department Legal Adviser Harold Koh has proposed enlisting eminent international lawyers to help the United States catch up with Britain and Denmark by articulating a rule allowing for humanitarian intervention that would be consistent with international law. Because this rule would apply universally, it would need to be tight enough to inhibit abuse by all nations.
Will these steps bring us within immediate reach of never again? Of course not, but they will help regain lost ground. If the 10 years since the Albright-Cohen blueprint have revealed anything, it is that the United States ignores the risks of atrocities at its peril, and that its tools for meeting that challenge are both insufficient and diminishing. It’s time to get back on track.