
Social media regulation in the US: Lessons from Europe
The de-platforming of former President Donald Trump following the storming of the US Capitol on 6 January 2021 has brought renewed attention to the debate on the regulation of social media. As highlighted in a recent event hosted by Chatham House with Professor Jack Snyder of Columbia University and Suzanne Nossel of PEN America, concerns about the spread of misinformation online, as well as about the potential negative side effects of over-regulation, are complex and manifold.
As polarization reaches new heights in the US, more attention is being paid to the role that platforms like Facebook and Twitter may play in deepening those divisions through the spread of misinformation. This was precisely the focus of the most recent congressional hearing with the Big Tech CEOs on 26 March 2021. While government regulation may not be the ‘silver bullet’ for these challenges, given the potential negative consequences of misguided policies, lawmakers in DC appear set on rolling out new regulation for internet companies.
New regulation on the horizon: ongoing debate and tensions
The regulatory debate in the US is heavily centered on Section 230 of the Communications Decency Act, the piece of legislation that grants social media companies immunity from liability for the content users publish on their platforms. What is less well known about Section 230 is that it also enables platforms to moderate speech. This includes not only illegal speech, but also legal -- yet potentially harmful -- content. This is precisely what has been at the heart of the debate over the recent de-platforming of former President Trump: how social media platforms use their discretion to choose what legal content they take down.
As seen in the recent congressional hearing, there is bipartisan agreement – albeit for different reasons – to reform Section 230, with some proposals even calling for the repeal of intermediary liability protections as a whole. This is also shown by the 29 bills on Section 230 introduced by both sides of the aisle since May 2020, some combination of which seems bound to move forward in 2021.
Democrats argue that platforms should take greater responsibility for the content they allow on their services. Republicans, on the other hand, complain of censorship of conservative voices and call for less aggressive content oversight. Both sides agree on the need for greater transparency around content removal decisions and practices.
Section 230 has famously been praised for safeguarding innovation on the internet, and any modifications need to be carefully crafted. On the one hand, removing platforms’ liability protections could pose a serious threat to small and medium-sized content carriers that could not bear potential litigation costs. There is also evidence that the risk of being penalized makes platforms more prone to over-removal of content, with knock-on consequences for freedom of speech. Conversely, platforms’ content moderation practices are protected by the First Amendment, and no regulation could effectively oblige them to carry conservative speech.
In a perhaps unsurprising move, Facebook -- which had openly called for updated regulation -- hinted at ways to modify Section 230 in its written testimony submitted ahead of the 26 March hearing. Facebook’s proposal is to require large companies to ‘demonstrate that they have systems in place for identifying unlawful content and removing it’ in order to qualify for immunity. The move has been criticized as an attempt to make Facebook’s current standards the norm.
The debate has not stopped at Section 230. The recent congressional hearing saw multiple House representatives calling for the reform of antitrust regulation to rein in social media.
Beyond questioning the size and power of big tech companies, proponents of new competition safeguards call for a greater number of platforms, with different moderation policies to choose from. What antitrust regulation applied to the tech sector would look like remains to be seen.
Unlike in the EU regulatory debate, privacy and data protection are not considered immediate areas for regulatory reform, despite being at the core of social media business models and of how content is tailored to users.
Self-regulation efforts
Companies such as Facebook and Twitter are also grappling with the legitimacy of their speech removal practices and have taken steps to improve how they undertake content moderation.
Twitter’s ‘Birdwatch’, launched in 2021, is intended to be a ‘community-driven approach’ to combating misinformation on the platform. Twitter has also committed to publishing the algorithms behind Birdwatch once the feature is live, allowing outside observers to comment on weak spots and suggest improvements to the application. In a more ambitious effort, during the 26 March congressional hearing Jack Dorsey repeatedly alluded to the company’s development of BlueSky, a decentralized standard for social media that would allow recommendation algorithms to become open and transparent.
Facebook, meanwhile, has launched its own oversight board, made up of individuals from various industries based around the world, with the intended purpose of debating controversial content and advising on what to keep on the platform and what to remove. The board began releasing decisions in January 2021 and is soon due to rule on the suspension of Trump’s accounts.
The question remains how updated tech regulation in the US will coexist with social media companies’ self-regulatory practices. Given the nature of content moderation, crafting an updated regulatory framework will require collaboration between private platforms and regulators.
Transatlantic lessons
Europe may offer some useful insights on how to tackle social media regulation in the US. The European Democracy Action Plan as well as forthcoming legislative proposals -- including the UK’s Online Safety Bill and the EU’s Digital Services Act and Digital Markets Act -- seek to tackle manipulation, harmful content, misinformation and competition.
Focus on process, not content. One useful approach shared by the UK and the EU is the so-called ‘systems-based approach’ to regulation, which shifts the focus away from the veracity and legality of specific content and centres instead on the rules and processes platforms establish to moderate content. This would prove particularly useful in the US context, where debates are heavily influenced by First Amendment concerns and the question of how to handle legal but harmful content.
Guidelines and monitoring, with flexibility for platforms. The focus on processes rather than content also enables co-regulation, in which companies have the flexibility to design their own policies under clear guidelines set by legislators, with active oversight. A recent piece by Harriet Moynihan, Senior Research Fellow at Chatham House, discusses how the co-regulatory model underway in Europe helps prevent platforms from ‘playing God on content moderation decisions without reference to any regulatory framework.’
Avoiding one-stop, one-size-fits-all regulation. Daphne Keller, Director of the Platform Regulation program at Stanford’s Cyber Policy Center, touches on aspects of EU regulation that the US could potentially replicate. In addition to following the EU’s example of creating separate laws to deal with concerns about privacy, speech and competition, rather than attempting to tackle them together, Keller recommends keeping company size in mind when developing regulation. Any updated US regulation needs to account for differences in company resources, holding larger technology platforms to a different set of rules so that smaller companies are not overwhelmed by legislation intended for bigger ones.
Despite the complexities, the US is coming to terms with the need to reform tech regulation so that technology companies are not the sole arbiters of which content is deemed suitable and which is not. Lessons from Europe can help the US frame the discussion and contribute to a consistent transatlantic approach to content regulation.