Jacob Mchangama, author of Free Speech: A History from Socrates to Social Media (2022), is warning us about the EU’s well-intended Digital Services Act, enacted in November 2022. The stated purpose of the Act is to require social media platforms to
evaluate and remove illegal content, such as “hate speech,” as fast as possible. It also mandates that the largest social networks assess and mitigate “systemic risks,” which may include the nebulous concept of “disinformation.”
Mchangama is concerned that the EU is ignoring the likely consequences of the Act:
The European law, by contrast, may sound like a godsend to those Americans concerned about social media’s weaponization against democracy, tolerance and truth after the 2020 election and the Jan. 6 insurrection. Former Secretary of State Hillary Clinton enthusiastically supported the European clampdown on Big Tech’s amplification of what she considers “disinformation and extremism.” One columnist in the New Yorker hailed the Digital Services Act as a “road map” for “putting the onus on social-media companies to monitor and remove harmful content, and hit them with big fines if they don’t.”
But when it comes to regulating speech, good intentions do not necessarily result in desirable outcomes. In fact, there are strong reasons to believe that the law is a cure worse than the disease, likely to result in serious collateral damage to free expression across the EU and anywhere else legislators try to emulate it.
Removing illegal content sounds innocent enough. It’s not. “Illegal content” is defined very differently across Europe. In France, protesters have been fined for depicting President Macron as Hitler, and illegal hate speech may encompass offensive humor. Austria and Finland criminalize blasphemy, and in Viktor Orban’s Hungary, certain forms of “LGBT propaganda” are banned.
The Digital Services Act will essentially oblige Big Tech to act as a privatized censor on behalf of governments — censors who will enjoy wide discretion under vague and subjective standards. Add to this the EU’s own laws banning Russian propaganda and plans to toughen EU-wide hate speech laws, and you have a wide-ranging, incoherent, multilevel censorship regime operating at scale.
The obligation to assess and mitigate risks relates not only to illegal content, though. Lawful content could also come under review if it has “any actual or foreseeable negative effect” on a number of competing interests, including “fundamental rights,” “the protection of public health and minors” or “civic discourse, the electoral processes and public security.”
The DSA appears to be a blank check written to powerful actors, inviting them to vigorously assume the role of nannies for others, making sure that people in the EU all talk properly to each other, as determined and enforced by governments. It is an invitation to unrestrained government-enforced censorship. What could possibly go wrong?
Mchangama warns of the spill-over effect. On its face, the Act only applies to the EU, which is bad enough, but it could affect people all over the world, including in the U.S.
The European policies do not apply in the U.S., but given the size of the European market and the risk of legal liability, it will be tempting and financially wise for U.S.-based tech companies to skew their global content moderation policies even more toward a European approach to protect their bottom lines and streamline their global standards. . . . The result could subject American social media users to moderation policies imposed by another government, constrained by far weaker free speech guarantees than the 1st Amendment.