Featured Expert Contributor, First Amendment
Jeremy J. Broggi is a partner with Wiley Rein LLP in Washington, DC.
Last week, the Eleventh Circuit held that Florida likely violated the First Amendment when it sought to restrain perceived “leftist bias” in the content moderation policies of “Big Tech” social-media companies. The decision, NetChoice, LLC v. Attorney General of Florida, joins others highlighted in these pages that view social-media companies as private entities protected by the First Amendment rather than public or quasi-public entities bound to respect users’ First Amendment rights.
Florida enacted the statute at issue in 2021. The statute’s enacted findings declare that social-media platforms “have unfairly censored, shadow banned, deplatformed, and applied post-prioritization algorithms to Floridians” and that restriction of these practices is necessary to “preserv[e] first amendment protections for all Floridians.” Toward that end, the statute contains numerous substantive provisions that, as summarized by the Eleventh Circuit, would “prohibit certain social-media companies from ‘deplatforming’ political candidates under any circumstances, prioritizing or deprioritizing any post or message ‘by or about’ a candidate, and, more broadly, removing anything posted by a ‘journalistic enterprise’ based on its content.”
The Eleventh Circuit began its analysis by finding it “substantially likely that social-media companies—even the biggest ones—are ‘private actors’ whose rights the First Amendment protects.” Next, the court concluded that “so-called ‘content-moderation’ decisions” in fact “constitute protected exercises of editorial judgment” akin to the editorial judgment traditionally exercised by newspapers, cable television operators, and others. From there, it was a small step for the court to conclude that “the provisions of the new Florida law that restrict large platforms’ ability to engage in content moderation unconstitutionally burden” that editorial judgment. (The court did not find unconstitutional certain disclosure provisions also contained in the Florida law.)
Perhaps the most interesting aspect of the Eleventh Circuit’s decision is its rejection of Florida’s classification of the largest social-media companies as common carriers, notwithstanding their status as private actors. As Justice Thomas explained last year, “our legal system and its British predecessor have long subjected certain businesses, known as common carriers, to special regulations, including a general requirement to serve all comers.” According to Justice Thomas, that historical tradition may open the door to requiring social-media companies to treat all speech equally because, “[i]n many ways, digital platforms that hold themselves out to the public resemble traditional common carriers,” and because, in the context of First Amendment analysis, “regulations that might affect speech are valid if they would have been permissible at the time of the founding.”
The Eleventh Circuit rejected the common carrier argument for two reasons. First, the court found that “social-media platforms don’t serve the public indiscriminately but, rather, exercise editorial judgment to curate the content that they display and disseminate.” Second, the court found that because social-media platforms exercise editorial judgment, Florida could not remove that power through “legislative designation … as common carriers.”
The Eleventh Circuit did not explain how these conclusions square with the federal statute commonly referred to as “Section 230.” Section 230 provides that social-media and other internet companies are not to be treated as the publisher or speaker of information provided by others, and it thereby shields those entities from potential liability for distributing defamatory or obscene materials. This provision of federal law appears to sit somewhat uncomfortably with the Eleventh Circuit’s reasoning. If, as the Eleventh Circuit found, social-media companies cannot be treated as common carriers because their distribution decisions are speech, then, some argue, it makes little sense to exempt those companies from potential liability for the consequences of those same distribution decisions on the ground that they are not speech.
These questions are unlikely to disappear anytime soon, and that is not surprising. The philosophical commitment underlying the First Amendment—the belief that, as Justice Holmes memorably put it, “the best test of truth is the power of the thought to get itself accepted in the competition of the market”—is broader than the narrow confines of the state action doctrine. Thus, the ideal solution to difficult questions about private censorship of unpopular or disfavored viewpoints is not legislative or judicial action but a robust societal commitment to the proposition that truth emerges from free debate in which even offensive and wrong views receive a fair hearing. In the absence of that widely shared societal commitment, questions about private moderation of speech—on social-media platforms and elsewhere—are likely to persist.