31 Bogus Passages from Florida’s Defense of Its Censorship Law–NetChoice v. Moody

Florida filed its opposition brief to the NetChoice/CCIA request to preliminarily enjoin SB 7072, the Florida censorship law. This post critiques some of the brief’s worst parts.

As I’ve said before, writing blog posts like this isn’t fun for me. Instead, I get agitated reading statements where the drafters either are misrepresenting the law’s scope or don’t understand what the law actually says and does. Some readers might enjoy my snarky rejoinders, but don’t let the superficial atmospherics obscure the real issue. The Florida law poses an existential threat to the Internet, and the brief supporting the law (intentionally or negligently) cuts dozens of intellectual corners to advance that goal. I don’t find any humor in that at all.

Overview

Everything about the brief is terrible. What stands out most to me is how many weak arguments made it into the brief instead of being edited out. Florida’s smarter move would have been to concede the unconstitutionality of some pieces so that it could more vigorously defend the rest. By trying to get the whole enchilada, the brief made some mockable arguments that will heighten the judge’s skepticism. Florida also repeatedly conceded that some pieces of the law might need to be severed, but drawing those nuanced lines is probably more than the judge has time to do given the ridiculously compressed schedule dictated by Florida’s unreasonable July 1 deadline. By tacitly admitting the law’s weaknesses without removing them from the case, I think the state gave the judge more reasons to put everything on hold until the law can be more carefully scrutinized.

Substantively, the brief analyzed Section 230 before the First Amendment, even though the plaintiffs spent more of their energy on the First Amendment arguments. I guess Florida thought it should put its “strongest” arguments first, but that won’t fool the judge. On the First Amendment, the state argued that the law should be subject to intermediate scrutiny. It did not try to defend against strict scrutiny. If the judge decides that strict scrutiny applies–which I think it should–the state apparently conceded the bill can’t survive it.

The brief also doesn’t systematically defend every piece of the bill. As just one example, the brief never defends the bill’s antitrust blocklist. I wonder if the judge will interpret the brief’s silence on those topics as concessions that they can’t be defended.

A Note About the Bill’s Scope. One of the great unanswered issues about the bill is who it actually covers. The defined term “social media platform” is misleading because it makes you think the bill is focused on Facebook and similar services. The law covers entities far beyond that.

“Social media platform” is defined as “any information service, system, Internet search engine, or access software provider.” Anyone who understands the Internet would not consider a “search engine” to be a “social media platform,” but the bill does. Even more vexingly, the bill includes “access software providers.” This term isn’t defined in the bill, but it is defined in 47 USC 230 to include anti-threat software vendors, such as anti-spam/anti-spyware vendors as well as parental control software. These software vendors use blocklists to protect their users, so applying the same rules to them and Facebook creates many corner cases and interpretation conundrums.
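
To illustrate the stakes of that inclusion: blocklist filtering is these vendors’ entire product. Below is a minimal sketch, in Python, of the kind of identity-based blocking an anti-spam tool performs (all names and list entries are hypothetical, not any vendor’s actual code):

```python
# Minimal sketch of blocklist filtering, the core function of anti-spam
# and parental-control software. All names and entries are hypothetical.

BLOCKLIST = {"spam-sender.example", "malware-host.example"}

def should_deliver(sender_domain: str) -> bool:
    """Deliver a message unless its sender appears on the blocklist."""
    # The decision turns entirely on the sender's identity, which is
    # arguably the kind of identity-based blocking the Florida law
    # restricts when the blocked party is a covered user.
    return sender_domain not in BLOCKLIST

print(should_deliver("spam-sender.example"))  # False: blocked by design
print(should_deliver("friend.example"))       # True: delivered
```

If a blocked spammer counts as a “deplatformed” user under the law, the product’s core feature becomes a potential violation.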

To qualify as a social media platform, an entity must have either $100M in annual revenue or “100 million monthly individual platform participants globally.” The latter standard is nonsensical (see this article for an explanation of why), and it probably doesn’t mean “monthly active users” (MAUs) because the statute uses that term elsewhere.
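
For concreteness, the size test is a simple either/or. Here is a sketch of the two prongs as this post paraphrases them (the function and variable names are mine, and the second prong can’t actually be computed because the statute never defines “platform participants”):

```python
def meets_size_threshold(annual_gross_revenue: float,
                         monthly_platform_participants: int) -> bool:
    """Sketch of SB 7072's size test: an entity qualifies as a 'social
    media platform' if it satisfies EITHER prong. The second prong has
    no agreed-upon measurement, since the statute never defines
    'monthly individual platform participants.'
    """
    return (annual_gross_revenue >= 100_000_000
            or monthly_platform_participants >= 100_000_000)
```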

So who is actually covered by the law? To speed up the conversation, using 100M MAUs as a proxy (because I don’t know how else to measure things), here is Wikipedia’s “List of social platforms with at least 100 million active users”:

Facebook, YouTube, WhatsApp, Facebook Messenger, Instagram, WeChat, TikTok, Douyin, QQ, Telegram, Snapchat, Weibo, Qzone, Kuaishou, Pinterest, Reddit, Twitter, Quora, Skype, Tieba, Viber, LinkedIn, imo, Line, Picsart, Likee, Discord, Zoom, Teams, Google Meet, Apple iMessage, FaceTime.

Some additional suspects are Tumblr, Vimeo and Twitch (source).

I haven’t tried to identify non-social entities. Likely suspects are major search engines (like Google and Bing), online marketplaces (like Amazon, eBay, Etsy, and Shopify), any major media outlets with any UGC functionality (such as the NY Times), and many others (two that come immediately to mind are Yahoo and Craigslist). The total list of regulated entities should be in the hundreds.

How many of the services on Wikipedia’s list have you NOT heard of? Some of them are international, and they may not do business in Florida. Still, consider US services like Quora, Picsart, and Discord. As you read some of the ridiculous and deceptive ways the Florida brief stereotypes the nature of “social media platforms,” keep these less-obvious suspects in mind.

The Lowlights

The rest of this blog post explains why 31 specific statements from the Florida brief are wrong. I could have deconstructed pretty much every sentence in the 61-page brief, but this post already required more time than I had available.

From the intro:

1. “the social media behemoths’ power to silence both on their platforms and throughout society has given rise to a troubling trend where a handful of corporations control a critical chokepoint for the expression of ideas. Such unprecedented power of censorship is especially concerning today…”

To be clear, “a handful of [media] corporations” have “controlled critical chokepoints” basically forever. Pick your media niche and an oligopoly runs it (except for local newspapers, which were a de facto monopoly for decades). So this argument is not unique to social media, but the law subjects Internet services to exceptional treatment.

Also, as explained above, the law does not regulate only “a handful of companies.” More like hundreds.

2. “the Act does not suppress, but rather promotes, speech”

This is the statute’s Big Lie. Government compulsion to publish unwanted content is CENSORSHIP.

3. “It leaves users free to speak, to share, or to block any content they do not wish to see”

This is intellectually dishonest given that the statute expressly reaches anti-threat software programs that users rely upon to block content they don’t wish to see.

From the Section 230 discussion:

4. The brief says Section 230(c)(1) only applies to leave-up decisions, not removal decisions, because any other reading would make Section 230(c)(2)(A) superfluous.

Not this argument again. The Ninth Circuit has explained that Section 230(c)(2)(A) covers the situation where the defendant is an “information content provider,” while Section 230(c)(1) does not apply in that circumstance. See Barnes v. Yahoo and Fyk v. Facebook. In support of its argument, Florida cites the e-ventures case, but numerous other courts have criticized that decision. For example, Murphy v. Twitter called it “unpersuasive.”

5. The brief notes that Zeran says Section 230 applies to “altering” content, but argues that altering content would make the service an ICP.

This sounds like a gotcha, but it’s a stupid point that should have been cut. Services can “alter” content and still easily qualify for Section 230, such as by making non-substantive formatting changes. Also, the Roommates.com opinion doesn’t support Florida’s position. That case treated changes that flip a statement’s meaning (omitting the “NOT” in “Joe is NOT a thief”) as disqualifying, but less substantive alterations to third-party content still qualify for Section 230.

6. “A mandate to disclose information to better inform and protect consumers does nothing to impede the ‘traditional editorial functions’ of a publisher.”

This is wrong, as I will explain in my next paper on validating transparency disclosures. See the preview in this 10-minute video.

7. “The Act’s mandate that social media platforms apply their standards for deplatforming, censorship, post-prioritization, and shadow-banning in a consistent manner limits how a social media platform decides ‘whether to publish, withdraw, postpone or alter content,’ but the ultimate content standards—and thus the exercise of any ‘traditional editorial functions’—remain entirely up to the platform.”

This is one of the many places where the brief makes an intellectually dishonest omission. The law literally says that Internet services have no editorial discretion to deplatform political candidates or moderate the content of journalistic enterprises, which the brief ignores by addressing a different piece of the law.

8. With respect to Section 230(c)(2)(A), “courts might ultimately conclude that a social media platform’s failure to follow its own content moderation standards or failure to explain removal of content is relevant to whether the platform acted in ‘good faith.'”

No, courts won’t. See, e.g.:

  • Domen v. Vimeo, Inc., 991 F.3d 66 (2d Cir. 2021): “the mere fact that Appellants’ account was deleted while other videos and accounts discussing sexual orientation remain available does not mean that Vimeo’s actions were not taken in good faith. It is unclear from only the titles that these videos or their creators promoted SOCE…Given the massive amount of user-generated content available on interactive platforms, imperfect exercise of content-policing discretion does not, without more, suggest that enforcement of content policies was not done in good faith.”
  • Holomaxx Technologies v. Microsoft Corp., 783 F. Supp. 2d 1097, 1105 (N.D. Cal. 2011): “Nor does Holomaxx cite any legal authority for its claim that Microsoft has a duty to discuss in detail its reasons for blocking Holomaxx’s communications or to provide a remedy for such blocking. Indeed, imposing such a duty would be inconsistent with the intent of Congress to ‘remove disincentives for the development and utilization of blocking and filtering technologies.’”

9. “Section 230(c)(2)(A) is best read to provide immunity only from damages and other monetary remedies—not from other remedies such as declaratory and injunctive relief.”

Another argument that should have been cut. Dozens of courts, including Hassell v. Bird, have applied Section 230 to requests for injunctive relief. See, e.g., Ian C. Ballon, 4 E-Commerce and Internet Law 37.05[8] (2020 update) (“the CDA’s application to injunctive relief should also not be controversial given the plain text of the statute and the manner in which it has been construed by courts….since a finding of liability is a precondition for final injunctive relief, subpart (c)(2) preempts both damage claims and injunctive relief”).

10. “if Plaintiffs were correct that Section 230 creates a broad law-free zone in which internet companies can censor however they like, even in bad faith, then serious questions would arise about whether their censorship constitutes state action.”

How can so much garbage fit into a single sentence?

  • Section 230 doesn’t create “a law-free zone.” Section 230(e) expressly excludes several categories of laws from its scope. As I recently explained, anyone who uses the “law-free” characterization proves they haven’t read the statute, don’t understand it, or are making intellectually dishonest points.
  • Private publishers–including Internet services publishing third-party content–don’t “censor,” they exercise editorial discretion. Only state actors “censor.”
  • Even if that were true, statutory immunities don’t magically convert their private-actor beneficiaries into state actors. State action isn’t a switch that flips that easily.

11. “there is nevertheless state action to whatever extent Section 230 preempts Florida law”

It’s a huge lift to get a judge to accept such a radical concept with wide-ranging implications, so there’s no chance this will sway the judge on a PI motion. This argument should have been cut.

12. “Serious questions under the First Amendment would arise if, in this pre-enforcement facial challenge, the Court were to construe Section 230 to preempt Florida’s effort to promote the freedom of speech.”

As mentioned in point 2, this is doublespeak. Restricting the editorial discretion of Internet services does not “promote the freedom of speech.” It’s the opposite, a/k/a “CENSORSHIP.”

From the First Amendment discussion:

13. “In the main, the Act regulates the conduct of social media platforms with respect to their users’ engagement with their sites.”

Another example of intellectually dishonest omissions about the law’s scope. Also, regulating the “conduct” of how private publishers publish content to their audiences isn’t really a conduct restriction. It’s a speech restriction. For example, if a government actor tells a publisher that they can’t engage in the “conduct” of running their printing presses, it’s disingenuous to characterize that as just a conduct regulation.

14. “a reasonable user of a typical regulated social media platform would not identify the views expressed on the platform as those of the platform itself”

The brief tries to fit the law into the doctrine of compelled speech, using Facebook as its archetype. Compelled speech does not feel like the right metaphor because the law restricts the service’s editorial discretion. For example, if a topically limited service can’t restrict off-topic posts, compelled speech is an unhelpful way to frame the problem.

Or consider imposing must-carry obligations on anti-threat software. Users of the software may not interpret the blocklists as the editorial “views of the platform,” but they will still blame the software for failing to do its job.

15. “The Act leaves social media platforms free to speak on their own behalf and make clear their own views….social media platforms remain free to speak with their own voice on any issue…”

This is a lie. The law defines impermissible “censorship” to include “post[ing] an addendum to any content or material posted by a user.” Then it says: “A social media platform may not take any action to censor…a journalistic enterprise based on the content of its publication or broadcast.” Please explain how this leaves Internet services free to “speak on their own behalf” and “make clear their own views.”

16. “Whatever the specifics of the content moderation strategy of a particular social media platform, such policies generally lack the highly selective nature of newspaper editing or even a parade.”

Facebook fits this statement most closely, but even Facebook has hundreds of elaborately crafted editorial policies enforced by tens of thousands of editors. Other services within the bill’s definition, such as search engines and anti-threat software, absolutely deploy “highly selective” content moderation strategies. See, e.g., my search engine bias paper.

17. “There is no common theme in a user’s newsfeed on a social media platform—much less the material on a social media platform that a user can view by seeking it out.”

The reference to “newsfeed” betrays that the drafters are only thinking about Facebook, not the hundreds of other regulated entities.

18. “Social media platforms may rank and order user posts to generate more clicks and engagement, but the aggregate of this content does not reflect a real selection to create a theme or message.”

Let’s make a complete list of all of the regulated entities, and then we can test the accuracy of this factual claim. Also, this ignores how many services curate user content into topical classifications, which sound like “themes” to me.

19. “The social media platforms covered by the Act have significant market power within their domains.”

The brief “supports” its factual claims with cites to Justice Thomas’ statement in Knight v. Trump, which itself was riddled with factual and legal errors. Looking at the list of likely regulated entities I set forth above, many of them have no market power at all.

Also, a reminder that newspapers’ local monopolies didn’t justify suspending the First Amendment in the Miami Herald v. Tornillo case.

20. “Section 230 further reinforces the reasonableness of treating social media platforms as common carriers….The recipients of this publicly conferred benefit can justifiably be required to serve all comers.”

This statement turns Section 230’s justifications on their head. Section 230 was designed to ensure that Internet services could moderate content rather than act like “common carriers,” and the Reno v. ACLU opinion held that Internet services cannot simply be regulated like telecom.

21. “An independent ground to uphold the Act is the Supreme Court’s decision in Red Lion Broad. Co. v. FCC, which held that more intrusive government regulation of broadcast carriers was constitutionally permissible”

Red Lion? Seriously? See Reno v. ACLU. Another point that should have been cut.

22. “Post-prioritization and shadow banning are not protected speech at all”

The law defines post-prioritization as an “action by a social media platform to place, feature, or prioritize certain content or material ahead of, below, or in a more or less prominent position than others in a newsfeed, a feed, a view, or in search results.” Now, would the following constitute First Amendment-protected speech? An “action by a newspaper to place, feature, or prioritize certain content or material ahead of, below, or in a more or less prominent position than others in the newspaper.” Per the state’s “logic,” the Constitution allows governments to dictate which newspaper stories belong on the front page and which should be relegated to page D14.

Similarly with respect to the “shadow ban” definition, as reinterpreted for newspapers: an “action by a newspaper, through any means, whether the action is determined by a natural person or an algorithm, to limit or eliminate the exposure of an author or content or material provided by an author.” Florida apparently believes a newspaper’s decision of which authors to publish or reject isn’t protected speech…?

23. “the Act’s ‘regulations are broad based, applying to almost all [social media platforms] in the country'”…”a social media platform has a ‘monopolistic opportunity to shut out some speakers'”

The state has gotten itself into a logic pretzel, trying to show simultaneously that the law regulates a broad field of entities and that each of those entities is monopolistic. And as I explained above, few if any of the regulated entities have market power or are monopolies.

24. The brief gives examples where social media services publicly admitted that they made content moderation errors, such as Facebook and Twitter blocking Babylon Bee satirical stories.

Mistakes happen to all publishers, so it’s not a gotcha to show that Internet services also made mistakes. Indeed, content moderation cannot be done “perfectly,” and penalizing Internet services for those mistakes won’t prevent future mistakes from happening.

25. “the Act on its face is both content and viewpoint neutral”

See my blog post laying out several ways the bill is not content-neutral.

26. “The Act does not prohibit content moderation by social media platforms”

Another lie. See, e.g., points 7 and 15. Another example: the bill only allows Internet services to update their content moderation standards once every 30 days; any moderation not authorized by the existing standards is effectively prohibited during that window.

27. “social media platforms can still take appropriate measures if candidates violate the platforms’ content guidelines”

Citation please. (None will be forthcoming because the law doesn’t actually say this).

28. “it is logical to require [social media providers] to treat Floridians consistently with other users”

The law doesn’t actually require this. It requires the services to treat Floridians consistently with each other, but services are not required to treat Floridians the same as non-Floridians. Indeed, Internet services will likely treat Floridians exceptionally because they won’t want to extend the stupid Florida rules to other users. The inevitability that services will treat Floridians differently than non-Floridians highlights the dormant Commerce Clause problem, one the plaintiffs inexplicably chose not to pursue.

29. “The Act’s exception for entities that operate theme parks only applies to a handful of entities, none of which operates a social media platform of significant size.  The narrow exception survives intermediate scrutiny, and in any event should be severed from the rest of the Act if the Court deems it unconstitutional.”

The theme park exception is the most mockable example of the law’s unjustified discrimination between speakers. The state should have just conceded that it was unconstitutional. Trying to defend it undermines the drafters’ credibility.

From the Preliminary Injunction Standards discussion:

30. “The Act does not categorically outlaw content moderation, contrary to the declarants’ repeated assertions. It outlaws only inconsistent content moderation.”

Every content moderation expert cringed when reading these two sentences. Given that content moderation will ALWAYS be inconsistent, the distinction collapses, and the law does categorically outlaw content moderation.

31. “many of the Act’s requirements closely resemble principles of digital due process long championed by internet and free speech academics and activists” [cite to the EFF brief]

First, thanks to the EFF for giving this freebie to the state. Not helpful. Second, the EFF brief makes clear that many aspects of digital due process are laudable only when voluntary–and unconstitutional when state-mandated.
