“Our way of taking power and using it would have been inconceivable without the radio and the airplane,” Nazi Propaganda Minister Joseph Goebbels claimed in August 1933.
Such statements are often cited—the head of Disney, Bob Iger, recently said that Adolf Hitler would have loved social media—but frequently misinterpreted. Goebbels was not saying that the Nazis had used both new technologies, the airplane and the radio, to come to power. Rather, the airplane helped the Nazis take power. Radio helped them keep it.
The history of radio, and in particular how it was regulated in interwar Germany, is more relevant than ever: Five years ago, the question was whether we would regulate social media. Now the questions are how and when we will regulate them. As politicians and regulators in places as disparate as Berlin, Singapore, and Washington—even Facebook founder Mark Zuckerberg—consider how best to do so, we should think carefully about the fallout from well-intentioned new rules and avoid the mistakes of the past.
Airplanes played a vital role in Nazi election strategy in the last years of the democratic Weimar Republic. When Hitler campaigned for the presidency in 1932, he flew to multiple locations a day to give speeches to roaring crowds. Arriving by plane, Hitler strode across the tarmac like the epitome of a strong leader. Although he lost that election, he still capitalized on German admiration for aviation to make the Nazis appear exciting and modern.
Radio was different. It became central to Nazi aims only after Hitler was appointed chancellor in January 1933, but Goebbels quickly exercised power over the medium, because the state already controlled its infrastructure and content. State control over radio had been intended to defend democracy. It unintentionally laid the groundwork for the Nazi propaganda machine.
Radio emerged as a new technology in the early 1920s, and the bureaucrat tasked with developing regulations for it in the Weimar Republic, Hans Bredow, initially had high hopes. He thought that radio could broadcast education and entertainment to bring the German population together after the divisive loss of World War I, and believed that radio should not broadcast political content, fearing it might exacerbate an already febrile environment.
Initially, Bredow allowed private companies to broadcast, and only from the mid-1920s on did stations start to air some news. This seemed dangerous to Bredow and other officials, who worried that news could stoke uprisings or antidemocratic sentiment.
Weimar bureaucrats exerted ever greater supervision over radio content in an effort to depoliticize it. As the republic grew increasingly unstable, Bredow and others pushed through reforms in 1926 and 1932 that placed radio content under direct state supervision. Bredow believed that increased state direction would prevent Weimar democracy from failing.
Ironically, this effort played right into the Nazis’ hands, and meant that the Nazis could seize immediate control over radio content when they came to power. Bredow was imprisoned for trying to stand up for democratic values. (After World War II, he helped to reestablish radio in democratic West Germany. There is now even a media institute in Hamburg named after him.)
The Nazi example, though extreme, reminds us that well-intentioned laws can have tragic unintended consequences. Singapore, for example, has passed the Protection From Online Falsehoods and Manipulation bill, allowing the country’s government to require platforms and private chat apps such as WhatsApp or Telegram to remove what the authorities see as false statements “against the public interest.” The law also enables officials to prosecute people who spread those false statements, although it does not define what counts as a “false statement.” The deputy director of Human Rights Watch’s Asia division told the BBC that the law was “a direct threat to freedom of expression and is something the entire world should be alarmed about.”
German politicians drew their own lessons from history to try to protect democracy. In 2017, Germany passed the Network Enforcement Act (Netzwerkdurchsetzungsgesetz, or NetzDG). Its name a mouthful of a compound noun, the law requires social-media companies with more than 2 million unique users in Germany to remove, within 24 hours, flagged posts that violate any of 22 statutes of German speech law. The statutes range from “incitement to hatred” and “distribution of child pornography” to blasphemy. Violations can draw a fine of up to 50 million euros ($56 million) per post.
Known colloquially as a “hate speech law,” NetzDG was arguably the first and most wide-ranging effort by a democracy to hold social-media companies responsible for speech on their platforms. One poll showed that 87 percent of Germans agreed with the law, but it drew sharp criticism from journalists, civil-society activists, academics, and the tech industry. Many signed a declaration that the law “jeopardizes the core principles of free expression.”
The law illustrated a deeper disagreement on the role of free speech in a democracy. Some West German politicians across the spectrum had in an earlier era argued for a “militant democracy” (wehrhafte Demokratie), where rights such as free speech could be curbed to guard broader democratic norms. During the creation of NetzDG, then Justice Minister (and current Foreign Minister) Heiko Maas built on the tradition of militant democracy to assert that “freedom of speech has boundaries.”
At the same time, many worried that the law would provoke the Streisand effect: the idea that censoring or removing information actually publicizes it. Just after NetzDG came into force in January 2018, a post by Beatrix von Storch, a prominent politician from the far-right Alternative for Germany party (AfD), was removed from Twitter and Facebook. The media devoted much attention to the incident, including the post’s content. The Streisand effect morphed into what one journalist dubbed “the Storch effect.” The AfD has marshaled NetzDG as part of a broader argument that its voice and opinions are being silenced—an argument similar to the one made by Donald Trump regarding his supporters being banned from Twitter and Facebook. Some have even revived Nazi terminology and decried the media as Systempresse (or “system press”), colluding to suppress coverage of refugee violence and other purported problems with immigrants in Germany.
Many other models of regulation are on the table. The United Kingdom is suggesting an approach that puts the onus of “duty of care” on social-media companies to prevent online harms. France has proposed a regulator that requires accountability and transparency by design. Some are suggesting social-media councils that might look like the older model of press and broadcast regulation, while others hope to reform the social-media ecosystem through antitrust legislation or data privacy.
This week, I will be testifying in Ottawa before a coalition of 11 countries working on this issue, ranging from the U.K., Canada, and Estonia to Argentina and Chile. Regulation is never simple. Nor is protecting the press at a time when journalists are under threat. But history can help us avoid the worst pitfalls.
We need to be wary of the long-term consequences of state control over content. The online world of social media has many problems and far more neo-Nazis than we might wish. Action is needed. But the actual history of Weimar and Nazi Germany can help us think more critically about current policy suggestions and move beyond mud-slinging comparisons with the fascist past.
It is time for politicians to take the regulation of social media seriously. In the long run, however, they must be careful not to undermine the freedoms and the political system that they seek to protect.