EU bureaucrats maintain that the Digital Services Act is not a censorship regime, but is meant to save people from misinformation by deleting it from the internet or hiding it from view. Which, in fact, is the very definition of censorship. Welcome to the Cardassian Union.

I tried to record two episodes this week to make up for the fact that I won’t have the time to record one in the coming week. Sadly, my workload and the amount of research that went into this episode made that impossible. I’ll be back with another episode as soon as I can.

Something Right out of Orwell

The EU’s Digital Services Act (EU Regulation 2022/2065) has recently come into effect. It requires internet platforms to remove what the EU calls “illegal content” proactively or immediately after it has been flagged, or face dire financial consequences (up to being outright banned within EU member states). This is, in fact, the largest and most far-reaching online censorship program ever implemented. And make no mistake, this is, without doubt, censorship.

Censorship is the suppression of speech, public communication, or other information. This may be done on the basis that such material is considered objectionable, harmful, sensitive, or “inconvenient”. Censorship can be conducted by governments, private institutions and other controlling bodies.

The European Conservative called it “the EU’s Orwellian internet censorship regime”.

Among many other things, the DSA obliges large online platforms to swiftly take down illegal content, hate speech, and so-called disinformation – aiming, in the words of European Commission president Ursula von der Leyen, to “ensure that the online environment remains a safe space.” Very large online platforms (VLOPs) with more than 45 million monthly active users must abide by the rules from Friday; smaller platforms have until February to comply. Designated by the Commission back in April, the 19 VLOPs include all the big names—Google, Facebook, Instagram, Twitter / X, YouTube and Amazon—as well as smaller fries like Wikipedia, LinkedIn, and Snapchat.

If VLOPs fail to comply with these dictates, they can be fined up to 6% of their annual global revenue. Or they can be subjected to an investigation by the Commission and potentially even be prevented from operating in the EU altogether.

To achieve this, the EU has created what Orwell would have called Minitrue:

VLOPs will fund a permanent European Commission taskforce on disinformation of some 230 staff, paying an annual ‘supervisory fee’ of up to 0.05% of their revenue.

Smaller platforms are to be regulated by individual EU member states, who must establish national Digital Services Coordinators by February.
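To put those percentages into perspective, here is a quick back-of-the-envelope sketch. The revenue figure is purely hypothetical and only serves as an illustration; the 6% maximum fine and the 0.05% supervisory fee cap are the numbers cited above.

```python
# Illustrative only: the revenue figure below is hypothetical.
# The percentages (6% maximum fine, 0.05% supervisory fee cap)
# are the ones cited from the DSA above.

annual_global_revenue_eur = 100_000_000_000  # hypothetical VLOP with €100 billion in yearly revenue

max_fine = 0.06 * annual_global_revenue_eur               # up to 6% of global annual revenue
max_supervisory_fee = 0.0005 * annual_global_revenue_eur  # up to 0.05% of revenue per year

print(f"Maximum fine:            €{max_fine:,.0f}")             # €6,000,000,000
print(f"Maximum supervisory fee: €{max_supervisory_fee:,.0f}")  # €50,000,000
```

Even the “small” supervisory fee runs into the tens of millions of euros for a platform of that size, which gives some idea of why the VLOPs take these rules seriously.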

Note that, in keeping with recent developments in the censorship-industrial complex, this isn’t the old model of the government passing laws and then enforcing them; this is the new model, where government collaborates with mega-corporations to enforce the law.

Of course, the EU maintains that this isn’t censorship. Because only “illegal” content is to be removed. But who determines what is illegal content? More on this later.

It isn’t actually that important anyway, because in reality it isn’t even true. The rule that only “illegal” content will be removed can be swept aside at a moment’s notice:

On top of this day-to-day censorship, the DSA also has a built in “crisis-management mechanism,” whereby in times of “extraordinary crisis,” the Commission can immediately oblige platforms to remove content. A “crisis” is defined as “an objective risk of serious prejudice to public security or public health in the Union or significant parts thereof.”

Whether this standard has been met is determined not by an independent body, or even by the toothless European Parliament, but by the Commission itself.

In other words: The body that gains the most power from declaring a crisis can simply declare one unilaterally.

Now, a few years ago, one might have found it somewhat reassuring to assume that a crisis can’t simply be manufactured. But after having witnessed the world’s reaction to the SARS-CoV-2 pandemic, we now know how easy it is to manufacture the appearance of a crisis. One just has to get the press afraid and everything else falls into place. We could have known this earlier, of course. It has happened many times before. The WMD episode was a prominent earlier example.

So what kind of speech is the DSA expected to police? Last year’s Strengthened Code of Practice on Disinformation defines disinformation as “false or misleading content that is spread with an intention to deceive or secure economic or political gain and which may cause public harm.” The code has already been put to work during elections and to “respond to crises,” such as COVID and the war in Ukraine.

I would really love for one of these EU bureaucrats to explain to me how a war that is neither being fought on EU territory nor involves any EU member state can be policed under a law that applies to EU citizens. What is the legal justification for this? Does a crisis in China also allow the EU to suddenly police my speech as an EU citizen? How is that supposed to protect the public in the EU? Shouldn’t a crisis actually have to involve the EU to be applicable here?

Of course, what is actually going on here is that the EU wants to enable the people in power in its bureaucratic apparatus – and those of its member states – to silence any information that is inconvenient to them.

Then there is the European Digital Media Observatory (EDMO), an EU-funded fact-checking hub which aims to “identify disinformation, uproot its sources or dilute its impact.” This downright sinister organisation, which naturally claims to be “independent” and “impartial,” is essentially the EU’s answer to Big Brother. Launched by the Commission in June 2020 with a budget of €13.5 million, it compiles reports on internet discourse across the EU. These include regular “fact-checking briefs,” “disinformation reports” for specific countries, and “early warnings” on predicted disinformation trends, the better to “prebunk” them.

“Prebunking,” one EDMO presentation explains, is “the process of exposing lies … before they strike.”

They aren’t simply fighting thoughtcrime. They are fighting thought-precrime. An ingenious way to combine the worst predictions of Orwell and Philip K. Dick and make them reality!

Clearly, what is common to such narratives is not that they represent ‘disinformation’ – that is, “false information intended to mislead.” Rather, these are the expression of political opinions dissenting against the EU establishment. They represent opposition by the European public to unpopular policies favoured by European elites – in this case, mass migration, transgender ideology, and Net Zero eco-austerity. This startling document reveals how the technocratic crusade against so-called disinformation is in fact nakedly political and anti-democratic. What is labelled ‘disinformation’ is really just any political narrative that the globalist EU establishment dislikes (indeed, even the term “globalists” is branded as wrongthink).

They can also ban websites, by the way – something the French government has already threatened to do.

And the worst thing about all of this? It started after the Islamic terrorist attacks in Paris and Brussels. Just as with the West’s reaction to 9/11, the terrorists have won in the end. They are getting us to destroy our own core values out of fear.

After terrorist attacks in Paris and Brussels in 2015, European regulators threatened the platforms with extensive regulations unless the platforms undertook meaningful measures to effectively police and remove hate speech and extremist speech. Faced with this prospect of regulation, four major platforms – Facebook, Microsoft, Twitter, and YouTube – entered into a voluntary agreement with the EU.

The Brussels Effect

This new law will not only affect EU citizens. In reality, it will be enforced all over the world, thanks to something that has been termed the “Brussels Effect”.

The Chicago Journal of International Law, in the paper The Digital Services Act and the Brussels Effect on Platform Content Moderation, explains this effect as follows:

The DSA, like other recent EU regulations of social media platforms, will further instantiate the Brussels Effect, whereby European regulators will continue to strongly influence how social media platforms globally moderate content and will incentivize the platforms to moderate much more (allegedly) harmful content than they have in the past. This extensive regulatory regime will incentivize the platforms to skew their global content moderation policies toward the EU’s (instead of the U.S.’s) approach to balancing the costs and benefits of free speech – especially given the DSA’s huge financial penalties for violating its provisions.

The CJIL author predicts that the Digital Services Act will create immense friction with current US laws when it comes to free speech. One big factor is that the DSA does a complete one-eighty on established jurisprudence where indemnity of online platforms is concerned.

Up until the DSA, the EU had safe harbour provisions similar to Section 230 of the US Communications Decency Act, meaning internet platforms were not liable for the content their users put up, provided that they weren’t editorialising this content (as a publisher, who is liable, would). But now, the DSA mandates that platform providers are liable if they don’t editorialise the content in question. Since Section 230 is still on the books in the US, providers will be entering an immense minefield of possible lawsuits that has been sown into the legal landscape between these two contradictory laws.

The DSA generally provides that platforms are not liable for the third-party content they host, provided they act expeditiously upon notice of such allegedly illegal content.

The DSA contemplates a regime in which individuals, country-level authorities, and “trusted flaggers” – which are private, non-governmental entities or public entities with expertise of some type – can identify content that they believe to be illegal under EU country-specific laws.

The “Notice and Action” provisions of the DSA stand in sharp contrast to the comparable U.S. regime under § 230(c) of the Communications Decency Act (CDA). Section 230(c), which is the main piece of legislation applicable to general platform liability in the U.S., immunizes platforms from many forms of liability for hosting third party content. In contrast to the DSA, § 230(c) imposes no conditions on platforms to receive immunity from liability. Since the CDA’s passage in 1996, § 230(c) has been consistently interpreted by U.S. courts to provide broad immunity to platforms for hosting and facilitating a wide range of illegal content – from defamatory speech to hate speech to terrorist and extremist content. Notice of illegal content is irrelevant to such immunity. Thus, even if a platform like YouTube is repeatedly and clearly notified that it is hosting harmful content (such as ISIS propaganda videos), the platform remains immune from liability for hosting such harmful content.

The other elephant in the room is, of course, the First Amendment to the US Constitution.

Many EU countries have speech regimes under which speech is deemed illegal according to widely different (and compared to the U.S., vastly less protective) standards. Several categories of speech are illegal under European law but would be protected in the U.S. under the First Amendment – some for better, some for worse. For example, several EU countries restrict Holocaust denial and minimization as well as glorification of Nazi ideology. In Germany and other EU member states, Holocaust denial and glorification are illegal.

Yet, illegal content in European countries also includes categories of content that would be deemed valuable and are protected under the U.S. free speech regime. These include French laws prohibiting criticism and parody of the president, such as by depicting him as Hitler (which was recently held to violate French insult and public defamation laws); Austrian and Finnish laws that criminalize blasphemy; Hungarian laws that prohibit a range of pro-LGBTQ+ content accessible to minors.

The DSA’s Notice and Action regime, which allows entities in the EU to flag content that is illegal under their country’s laws and requires the platforms to expeditiously remove such content, will likely incentivize platforms to remove a vast amount of content that would be deemed protected – and indeed valuable – under other countries’ speech laws, including the U.S.’s, such as political criticism, satire, parody, and pro-LGBTQ+ content.

The DSA will also clash with recent legislation on the books in Texas (the DISCOURSE Act) and the proposed CASE-IT federal law.

What Is “Illegal Content” Anyway?

There is, I feel, a major issue with the fundamental approach the EU has taken with this law, and it is woefully under-reported. Proponents of the law keep talking about “illegal content”. The law itself stipulates that “trusted flaggers” can report content to providers that they think is illegal under laws in various EU member states:

upon obtaining such knowledge or awareness, [the provider] acts expeditiously to remove or to disable access to the illegal content

But what constitutes “illegal content” anyway?

In a democratic state under the rule of law, there is but one institution that decides whether an act – be it by hand, by mouth or by keyboard – is illegal: the courts. In our current understanding of justice, the police investigate acts that are thought to be illegal, state attorneys indict the suspect and the courts decide whether the act actually violated the law. Any other process would jeopardise the rule of law and be actively hostile to a democratic society.

If the rule of law still holds, how can a “trusted flagger” report someone for “illegal content”? If we still adhere to the presumption of innocence, would not a court have to decide if the act of speech that produced that content was illegal in the first place? Either the content they are talking about is in fact not illegal at all, or they are trying to eliminate the presumption of innocence, a vital pillar of our justice system – and indeed a central justification for our mode of government – that goes back to the times of Ancient Rome.

What is this, the Cardassian Union?

[Image: Gul Dukat smiling]

In most, if not all, EU countries, the police are legally required to investigate any online content that is thought to be illegal. Otherwise, how could the legal system function? Now, if providers remove content that’s actually illegal before anyone can inform the police, wouldn’t that be a substantial case of obstruction of justice (which is itself punishable under criminal law, in many cases)?

Did the people who drafted this legislation not think of this? It took me about a day of research and thinking about these issues to come up with this, and I’m not even a legal expert. Are these people really that dumb? Or are they just misguided? Or maybe corrupt? How the fuck can anything this insane actually be passed as law?

Annotated Documents:

  1. The Digital Services Act and the Brussels Effect on Platform Content Moderation, Chicago Journal of International Law
  2. The EU’s Orwellian Internet Censorship Regime, The European Conservative
  3. Civil society statement: Commissioner Breton needs to clarify comments about the DSA allowing for platform blocking (via Mozilla)

Credits

First and foremost, I would like to thank everybody who provided feedback on this or previous episodes. You are very important to the continued success of this podcast!

This podcast is provided free of charge and free of obligations under the value-for-value model. However, as a freelance journalist volunteering my time to produce this show, I need your support. If you like my work and want to make sure The Private Citizen keeps going, please consider joining my Patreon.


Showrunners

  • Sir Galteran

Executive Producers

  • Butterbeans
  • Jaroslav Lichtblau
  • Rizele
  • Sandman616

Supervising Producers

avis, Bennett Piater, Dave, ikn, Jackie Plage, Jonathan M. Hethey, krunkle, Michael Mullan-Jensen, Tobias Weber

Producers

Andrew Davidson, astralc, Barry Williams, Cam, Captain Egghead, Dirk Dede, Fadi Mansour, Florian Pigorsch, Joe Poser, MrAmish, RJ Tracey, Robert Forster

Associate Producers

D, Jonathan, Juhan Sonin, Kai Siers, RikyM, Steve Hoos, Vlad


Thanks to Bytemark, who are providing the hosting and bandwidth for this episode’s audio file.

The show’s theme song is Acoustic Routes by Raúl Cabezalí, licensed via Jamendo Music. This episode’s ending song is Night Stalker by Wave Saver, licensed via Epidemic Sound.

Podcast cover art photo by GegenWind.