This article is from The Technocrat, MIT Technology Review’s weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.
If you use Google, Instagram, Wikipedia, or YouTube, you’re going to start noticing changes to content moderation, transparency, and safety features on those sites over the next six months.
Why? It’s down to some major tech legislation that was passed in the EU last year but hasn’t received enough attention (IMO), especially in the US. I’m referring to a pair of laws called the Digital Services Act (DSA) and the Digital Markets Act (DMA), and this is your sign, as they say, to get familiar.
The acts are actually quite revolutionary, setting a global gold standard for tech regulation when it comes to user-generated content. The DSA deals with digital safety and transparency from tech companies, while the DMA addresses antitrust and competition in the industry. Let me explain.
A couple of weeks ago, the DSA reached a major milestone. By February 17, 2023, all major tech platforms in Europe were required to self-report their size, which was used to group the companies in different tiers. The largest companies, with over 45 million active monthly users in the EU (or roughly 10% of the EU population), are creatively called “Very Large Online Platforms” (or VLOPs) or “Very Large Online Search Engines” (or VLOSEs) and will be held to the strictest standards of transparency and regulation. The smaller online platforms have far fewer obligations, which was part of a policy designed to encourage competition and innovation while still holding Big Tech to account.
“If you ask [small companies], for example, to hire 30,000 moderators, you will kill the small companies,” Henri Verdier, the French ambassador for digital affairs, told me last year.
So what will the DSA actually do? So far, at least 18 companies have declared that they qualify as VLOPs and VLOSEs, including most of the well-known players like YouTube, TikTok, Instagram, Pinterest, Google, and Snapchat. (If you want a whole list, London School of Economics law professor Martin Husovec has a great Google doc that shows where all the major players shake out, and he has written an accompanying explainer.)
The DSA will require these companies to assess risks on their platforms, such as the likelihood of illegal content or election manipulation, and make plans to mitigate those risks with independent auditors to verify safety. Smaller companies (those with under 45 million users) will also have to meet new content moderation standards that include “expeditiously” removing illegal content once flagged, notifying users of that removal, and increasing enforcement of existing company policies.
Proponents of the legislation say the bill will help bring an end to the era of tech companies’ self-regulating. “I don’t want the companies to decide what is and what isn’t forbidden without any separation of power, without any accountability, without any reporting, without any possibility of contesting,” Verdier says. “It’s very dangerous.”
That said, the bill makes it clear that platforms aren’t liable for illegal user-generated content, unless they are aware of the content and fail to remove it.
Perhaps most important, the DSA requires that companies significantly increase transparency, through reporting obligations for “terms of service” notices and regular, audited reports on content moderation. Regulators hope this will have widespread impacts on public conversations around the societal risks of big tech platforms, like hate speech, misinformation, and violence.
What will you notice? You will be able to participate in content moderation decisions that companies make and formally contest them. The DSA will effectively outlaw shadow banning (the practice of deprioritizing content without notice), curb cyberviolence against women, and ban targeted advertising for users under 18. There will also be a lot more public data about how recommendation algorithms, advertisements, content, and account management work on the platforms, shedding new light on how the biggest tech companies operate. Historically, tech companies have been very hesitant to share platform data with the public, or even with academic researchers.
What’s next? Now the European Commission (EC) will review the reported user numbers, and it has time to challenge or request more information from tech companies. One noteworthy issue is that porn sites were omitted from the “Very Large” category, which Husovec called “shocking.” He told me he thinks their reported user numbers should be challenged by the EC.
Once the size groupings are confirmed, the largest companies will have until September 1, 2023, to comply with the regulations, while smaller companies will have until February 17, 2024. Many experts anticipate that companies will roll out some of the changes to all users, not just those living in the EU. With Section 230 reform looking unlikely in the US, many US users will benefit from a safer internet mandated abroad.
What else I’m reading
More chaos, and layoffs, at Twitter.
- Elon Musk has once again had a big news week after he laid off another 200 people, or 10% of Twitter’s remaining staff, over the weekend. These employees were presumably part of the “hard core” cohort who had agreed to abide by Musk’s aggressive working conditions.
- NetBlocks has reported four major outages of the site since the beginning of February.
Everyone is trying to make sense of the generative AI hoopla.
- The FTC released a statement warning companies not to lie about the capabilities of their AIs. I also recommend reading this helpful piece from my colleague Melissa Heikkilä about how to use generative AI responsibly, and this explainer about 10 legal and business risks of generative AI by Matthew Ferraro from Tech Policy Press.
- The dangers of the tech are already making news. This reporter broke into his bank account using an AI-generated voice.
There were more internet shutdowns than ever in 2022, continuing the trend of authoritarian censorship.
- This week, Access Now published its annual report that tracks shutdowns around the world. India again led the list with the most shutdowns.
- Last year, I spoke with Dan Keyserling, who worked on the 2021 report, to learn more about how shutdowns are weaponized. During our interview, he told me, “Internet shutdowns are becoming more frequent. More governments are experimenting with curtailing internet access as a tool for affecting the behavior of citizens. The costs of internet shutdowns are arguably increasing both because governments are becoming more sophisticated about how they approach this, but also, we’re living more of our lives online.”
What I learned this week
Data brokers are selling mental-health data online, according to a new report from the Duke Cyber Policy Program. The researcher asked 37 data brokers for mental-health information, and 11 responded willingly. The report details how these data brokers offered to sell information on depression, ADHD, and insomnia with little restriction. Some of the data was tied to people’s names and addresses.
In an interview with PBS, project lead Justin Sherman explained, “There are a range of companies who are not covered by the narrow health privacy regulations we have. And so they are free legally to collect and even share and sell this kind of health data, which enables a range of companies who can’t get at this normally—advertising firms, Big Pharma, even health insurance companies—to buy up this data and to do things like run ads, profile consumers, make decisions about health plan pricing. And the data brokers enable these companies to get around health regulations.”
On March 3, the FTC announced a ban preventing the online mental-health company BetterHelp from sharing people’s data with other companies.