Safety or Freedom?

Remember Section 230, the U.S. law that shields internet platforms from responsibility for the content their users post, sometimes to disastrous effect? We dropped an episode about it earlier this month, so if you haven’t heard it yet, do it now. I’ll wait.

Section 230 is credited with allowing the internet to be what it is. Cases were brought against it, and everyone worried that the U.S. Supreme Court would strike it down; this, experts argued, would break the world wide web as we know it. There was a collective sigh of relief when the Justices left it in place.

Case closed, right? Wrong.

Unlike U.S. law, the world wide web is… worldwide. Sites we use in the U.S. serve billions of people across the globe. Guess who else uses Facebook, Twitter (sorry, X), TikTok, Grindr, Amazon: Europeans. And Europe really, really cares about keeping its people safe.

The European Union, a group of 27 countries, is a pioneer of internet regulation — a phrase that, to U.S. ears, can sound like an oxymoron. But following its rules can force such huge changes that the vast majority of sites just implement them globally. You know how you have to approve which cookies can spy on you when you get on a new site? That’s the EU’s General Data Protection Regulation (GDPR). By giving Europeans more control over their privacy, the EU gave it to the rest of the world, too.

And here comes another round of EU Laws That Protect Humans On The Internet: the Digital Services Act (DSA) rollout began last week, with full implementation by 2024. Fun fact: it’s trying to address many of the issues raised by Section 230. And like GDPR, it’s already affecting U.S. users.

The DSA holds internet platforms accountable for the content users post on them. The EU believes that all “intermediaries” — sites that provide goods, content and services — should foster an environment that respects the rights of users. This law has created an obligation to moderate. Break the rules and companies will face fines of up to six percent of their global revenue, or even EU-wide bans.

Platforms must facilitate reports of harmful content, and act on them. They must let users, not the algorithms, rule their feeds (Facebook brought back its chronological feed last month, for everyone, to get a head start). By the way, they’ll also have to share those algorithms with EU authorities. Kids will no longer be served suggested content based on profiling of their activity. Once a year, companies will assess the risks on their platforms and share how effective their safety tools are. They’re also obligated to let researchers and internet watchdogs access their data — I’m eager to see all the science this will yield.

The law is going after the usual suspects: fake news, child pornography, racist attacks, harassment. Sites will need to crack down on content that gets in the way of public health, electoral processes, and fundamental rights. The EU promises that they won’t be gentle if platforms mess up.

Europe’s approach suggests that they don’t see freedom and safety as opposing forces. They insist that the law will target what is already illegal, so the internet won’t be less free than real life. Also, if you disagree about a content removal decision, you’ll be able to challenge it more easily.

Giants like Google, TikTok, and Meta are complying. Amazon is suing the EU. Elon Musk’s Twitter — sorry, X — was procrastinating, but the EU Internal Market Commissioner Thierry Breton warned that they’d better get in line before Europe’s election season begins, or else. Musk swore they’d be ready.

If these announcements are anything to go by, adapting to the new rules represents a large enough operational shift that the companies might apply them wholesale rather than by location. Long term, many believe the DSA will be a model for other countries to follow and fine-tune. Who knows. I, for one, can’t wait to see how it’s going to play out for Section 230.

Next: the EU takes on artificial intelligence. Stay tuned.
