Florida Social Media Censorship Law ENJOINED–NetChoice v. Moody
Highlights of the opinion: the judge enjoined the entire law (other than the legislative findings and the antitrust provisions) as a content-based restriction that does not survive strict scrutiny. The judge added that the law doesn’t survive intermediate scrutiny either, which limits the flexibility of Florida and other legislatures to try different regulatory approaches to reach the same outcomes. The judge also says that the law’s restrictions on good faith content moderation conflict with Section 230.
Here’s the court’s conclusion:
The legislation now at issue was an effort to rein in social-media providers deemed too large and too liberal. Balancing the exchange of ideas among private speakers is not a legitimate governmental interest. And even aside from the actual motivation for this legislation, it is plainly content-based and subject to strict scrutiny. It is also subject to strict scrutiny because it discriminates on its face among otherwise-identical speakers: between social-media providers that do or do not meet the legislation’s size requirements and are or are not under common ownership with a theme park. The legislation does not survive strict scrutiny. Parts also are expressly preempted by federal law.
The rest of this post takes a closer look at the opinion:
First Amendment
The court says that Internet services “manage” user content, and “[i]n the absence [of] curation, a social-media site would soon become unacceptable—and indeed useless—to most users.” These content moderation decisions require editorial judgment, and the court says that’s exactly what the Florida government wanted to stop. The state’s intent has always been transparently censorial, and the judge calls them out for it:
the State has asserted it is on the side of the First Amendment; the plaintiffs are not. It is perhaps a nice sound bite. But the assertion is wholly at odds with accepted constitutional principles….
In other words, people who try to cloak censorial regulations as pro-free speech are actually trampling on the Constitution.
The judge starts by rejecting three arguments for reduced First Amendment scrutiny.
First, the judge dismisses any state action argument. “[W]hatever else may be said of the providers’ actions, they do not violate the First Amendment.” The judge didn’t cite any cases here, but I’ve blogged many cases rejecting state action arguments (see this post and the many linked cases at the bottom).
Second, the judge says “the First Amendment applies to speech over the internet, just as it applies to more traditional forms of communication” (cite to Reno v. ACLU’s classic line that the First Amendment cases “provide no basis for qualifying the level of First Amendment scrutiny that should be applied” to the Internet).
Third, citing Miami Herald v. Tornillo, the judge says “the concentration of market power among large social-media providers does not change the governing First Amendment principles.” The judge then follows up with this powerful factual gem:
Whatever might be said of the largest providers’ monopolistic conduct, the internet provides a greater opportunity for individuals to publish their views—and for candidates to communicate directly with voters—than existed before the internet arrived
This point deserves repetition. We have more speech freedom today than we had before the Internet, and we have more speech freedom today than we’ll have after regulators “fix” the Internet. Digital natives may not appreciate just how much speech freedom we enjoy right now despite the scary-seeming editorial powers of Internet giants, but this boomer judge knows firsthand.
Citing Tornillo, Hurley, and PG&E v. PUC, the judge summarizes: “a private party that creates or uses its editorial judgment to select content for publication cannot be required by the government to also publish other content in the same manner—in each of these instances, content with which the party disagreed.” However, the judge says social media platforms exercise their editorial judgment differently because the “content on their sites is, to a large extent, invisible to the provider.” This concept of “invisibility” is confusing. I think what the judge is trying to say is that some Internet services rely on automated content screening or removal rather than human screening or removal. For example, the judge says “Something well north of 99% of the content that makes it onto a social media site never gets reviewed further”–this factual finding might apply to Facebook, YouTube, or Google Search, but the law applies to services that do more pre- or post-publication review of user content than the big guys; and the law applies to Internet services even if they prescreen 100% of their content (in which case none of the content would be “invisible,” to use the judge’s rhetoric). I didn’t understand why the judge made this factual assumption, what he meant by “invisibility,” or why invisibility would change the First Amendment analysis.
Nevertheless, the judge says:
the targets of the statutes at issue are the editorial judgments themselves. The State’s announced purpose of balancing the discussion—reining in the ideology of the large social-media providers—is precisely the kind of state action held unconstitutional in Tornillo, Hurley, and PG&E.
Florida cited Rumsfeld and Pruneyard. The court responds that the law:
explicitly forbid social media platforms from appending their own statements to posts by some users. And the statutes compel the platforms to change their own speech in other respects, including, for example, by dictating how the platforms may arrange speech on their sites. This is a far greater burden on the platforms’ own speech than was involved in FAIR or PruneYard.
It’s odd to refer to speech “arrangement,” but the point is clear: usurping Internet services’ editorial discretion is nothing like the speech incursions in Rumsfeld or Pruneyard. Indeed, I wish the folks advocating for censorship of Internet services would stop invoking real property analogies. They don’t fit the Internet scenario at all, and courts aren’t buying them.
The court concludes that strict scrutiny applies:
it cannot be said that a social media platform, to whom most content is invisible to a substantial extent, is indistinguishable for First Amendment purposes from a newspaper or other traditional medium. But neither can it be said that a platform engages only in conduct, not speech.
Not surprisingly, the law fails strict scrutiny because it is content-based (indeed, the state’s brief didn’t argue the law could survive strict scrutiny). “The Florida statutes at issue are about as content-based as it gets,” including the special treatment for political candidates, material “about” a candidate, and content from journalistic enterprises. Indeed, the law was viewpoint-based because the record had many statements showing partisan bias animating the law, such as when the Florida Lt. Governor said the law fights the “radical leftist narrative” of Internet services. The judge appropriately mocks the exclusion for theme parks.
The judge is also puzzled why the law distinguishes between Internet services based on their size, saying “discrimination between speakers is often a tell for content discrimination.” Indeed, the judge suggests all size-based distinctions among Internet services should trigger strict scrutiny (“the application of these requirements to only a small subset of social-media entities would be sufficient, standing alone, to subject these statutes to strict scrutiny”). (As the judge notes, there is also a problematic size-based distinction in the definition of “journalistic enterprise”). This is a potentially huge ruling, because it could apply to every single regulatory proposal targeting “Big Tech” for harsher regulation than other Internet services. For more on the merits and mechanics of making size-based distinctions in Internet regulations, see my article with Jess Miers, Regulating Internet Services by Size.
Because Florida didn’t argue that the law survived strict scrutiny, the judge’s analysis is brief. The judge questions the state’s interest: “leveling the playing field—promoting speech on one side of an issue or restricting speech on the other—is not a legitimate state interest.” This gets to the core of every effort to treat Internet services as common carriers, which by their nature are designed to treat all content equally. It’s HUGE that the court says such leveling mandates don’t advance a legitimate state interest. It could mean the laws wouldn’t survive even rational basis scrutiny.
The judge also questions if the law’s tailoring is narrow: “this is an instance of burning the house to roast a pig” (invoking the common meme of porcine incineration and residential arson that runs throughout First Amendment jurisprudence).
As for the statute’s severability clause, which the state repeatedly invoked when its arguments sucked, the judge says simply: “There is nothing that could be severed and survive.”
As a welcome bonus, the judge says the law would not survive intermediate scrutiny for lack of tailoring. Indeed, “some of the disclosure provisions seem designed not to achieve any governmental interest but to impose the maximum available burden on the social media platforms.”
The judge does not rely on vagueness to strike down the law, but does note the fines in the law “seem more punitive than compensatory,” the requirement of “consistent” content moderation conflicts with parts of the law that expressly require inconsistent treatment for some content, and the protection for content “about a candidate” is “incomprehensible.”
In further support of the injunction, “the plaintiffs’ members will sometimes be compelled to speak and will sometimes be forbidden from speaking, all in violation of their editorial judgment and the First Amendment. This is irreparable injury.”
Section 230
The judge says the ban on candidate deplatforming conflicts with Section 230(c)(2)(A) because it takes away the Internet services’ power to respond to content from the candidate that the service, in good faith, considers objectionable. (The court adds that federal law, not state law, defines what “good faith” means in Section 230(c)(2)(A), and that mistaken content moderation decisions can still be in good faith. On the latter point, the court could have cited e360Insight, LLC v. Comcast Corp., 546 F. Supp. 2d 605 (N.D. Ill. 2008).)
Other conflicts with Section 230: the private right of action, the requirement for consistent content moderation (the court cites Domen v. Vimeo), and the requirement to give notice before moderating content. The latter point could be huge because this suggests that other states’ efforts to mandate digital due process elements, such as notice and appeal, may be preempted by Section 230.
Antitrust-Based Blocklist
The court says that provision “raises issues under both state and federal law, but it poses no threat of immediate, irreparable harm to social media platforms.” The judge doesn’t explain more, but the law requires the state to publish its first blocklist on January 1, 2022, giving the plaintiffs more time to enjoin it.
Other Implications
There are many parts of this ruling that, if they withstand appeal, cast serious doubts on many other legislative efforts being considered in Congress and around the country. At its core, this opinion might stand for the propositions that (1) common carriage-like regulation can’t apply to Internet services, and (2) social media exceptionalist laws can’t survive intermediate scrutiny. If those legislators cared about making good policy, this opinion would prompt them to redirect their energies to higher-value efforts for constituents.
Meanwhile, if I were a Florida resident, I would be embarrassed that my state legislature passed such a garbage law and then spent government funds defending it so futilely. Florida residents, you deserve better governance than you’re getting. Remember that when you vote, but you might also phone your legislators–especially those who voted in favor of this garbage–to tell them to DO BETTER.
Case citation: NetChoice, LLC v. Moody, 2021 WL 2690876 (N.D. Fla. June 30, 2021)
Case library (see also NetChoice’s library)
- Florida’s answer to the complaint.
- District court June 30, 2021 opinion enjoining most of the law. Blog post.
- Amicus Brief opposing the preliminary injunction from Leonid Goldstein. Blog post.
- Blog post recapping the Preliminary Injunction Hearing Against Florida’s Social Media Censorship Law
- Plaintiffs’ Reply Brief to Florida’s Opposition. Blog post.
- Florida Opposition to Preliminary Injunction. Blog post.
- Amicus briefs in support of the preliminary injunction
- Blog post on the amicus briefs
- Preliminary injunction brief (if you get an error message downloading one of the files below, hit refresh)
- Blog post on preliminary injunction brief
- NetChoice v. Moody complaint.
- Text of SB 7072. Blog post on the statute.