Florida Hits a New Censorial Low in Internet Regulation (Comments on SB 7072)
This blog post reviews Florida’s Transparency in Technology Act, SB 7072. Like other recent efforts to censor the Internet (such as Trump’s anti-230 EO), this law is performative garbage. It was never a serious attempt at crafting good policy. Florida enacted it anyway. Now, given a pending complaint from NetChoice and CCIA challenging the law, we’ll find out what happens when the barking dog catches the car it’s been chasing.
What the Bill Does
- Section 1 contains legislative “findings” (mostly #MAGA nonsense).
- Section 2 restricts Internet services from “deplatforming” political candidates.
- Section 3 restricts state purchases from Internet services that a government enforcer has accused of antitrust violations (as well as any “affiliated” businesses owned by various VIPs associated with those Internet services). Any other business that actually violates antitrust law can still freely transact with the state, but an Internet service can be blacklisted when it’s merely accused of violating antitrust law; and an “affiliated” business that never violated antitrust law at all still gets blacklisted if it has the requisite ownership relationship.
- Section 4 contains 10+ restrictions on Internet services’ editorial practices. Examples: a requirement that services moderate content “consistently,” a restriction on amending the service’s editorial standards more than 1x every 30 days, and a categorical ban on blocking content from “journalistic enterprises.” Some of these requirements are backed by a private right of action with statutory damages.
A lot of the media coverage has focused only on Section 2. While Section 2 is obviously unconstitutional, Section 4 is the law’s real payload because it regulates the heart of Internet services’ editorial operations.
[Note: The remainder of this post will trash the law. Doing so brings me no joy. The law was never designed for scrutiny outside the #MAGA community.]
The Law’s Many Discriminatory Classifications
Laws that restrict speech have to tread cautiously with any regulatory distinctions between groups or entities. Otherwise, the distinctions may provide evidence that the law engages in impermissible content-, speaker-, or viewpoint-based discrimination. This law is riddled with such distinctions.
The law doesn’t expressly engage in viewpoint discrimination, but key legislators and Gov. DeSantis did not attempt to hide their viewpoint discrimination, repeatedly expressing antipathy for the “leftist media” (the NetChoice/CCIA complaint provides ample evidence of this). Every judge should account for this undeniable partisan animus in the constitutionality analysis.
Some of the content- and speaker-based distinctions in the law:
Social Media Platforms vs. Other Media Enterprises. The law deceptively defines “social media platforms” as:
any information service, system, Internet search engine, or access software provider that: 1. Provides or enables computer access by multiple users to a computer server, including an Internet platform or a social media site
This partially borrows language from Section 230, but none of these terms are further defined in this law. What is a “system”? What is an “Internet platform”? What is an “access software provider”? (Section 230 defines that last term, roughly, as a provider of software or enabling tools that filter, screen, or organize content).
Counterintuitively, the law treats search engines and “access software providers” the same as “social media sites.” So this law isn’t just about sticking it to Facebook or Twitter. To remind you of the law’s sweeping nature, I’ll use the term “Internet services” instead of the misnomer “social media providers.”
The law’s legislative “findings” attempt to explain why it treats Internet services differently than other media. A sample “finding”: “Social media platforms hold a unique place in preserving first amendment protections for all Floridians and should be treated similarly to common carriers,” as if invoking the amorphous phrase “common carrier” provides a get-out-of-the-First-Amendment-free card. The other findings are also mockable. Plus, the 1997 Supreme Court ruling in Reno v. ACLU says that, unlike broadcast and telephony, the precedent cases “provide no basis for qualifying the level of First Amendment scrutiny that should be applied to the Internet.” Florida’s attempt to regulate Internet services like telephony is clearly unconstitutional.
Big vs. Small Services. Like many other bills to regulate the Internet, the law distinguishes between big and small Internet services. I will soon post an essay by Jess Miers and me discussing how to draft these statutory distinctions. Currently, legislatures are doing a terrible job drafting them, and Florida’s law is no exception.
The law applies to Internet services that do business in Florida and have (1) annual revenues of $100M+ or (2) 100M+ “monthly individual platform participants globally.” I didn’t see any legislative findings justifying the $100M cutoff. The term “monthly individual platform participants globally” is nonsensical gibberish. The term does not mean “accounts” or “MAUs” because both terms are used elsewhere… so what is an individual platform participant? No clue.
Using two alternative measurement standards (revenues OR users) is a disfavored practice. It means the law governs non-profits with minimal revenues who cannot afford the law’s compliance costs, such as Wikipedia or Internet Archive. It also reaches companies with trivial connections to Florida, such as a company with $100M in revenues globally where only $1 comes from Florida; or a service that has 100M users but only 1 in Florida.
By not connecting the quantitative metrics to Florida activity, the legislative “findings” look silly. For example, a large Internet service with minimal Floridian connections does not “hold a unique place in preserving first amendment protections for all Floridians.” Due to the shoddy drafting of the quantitative thresholds, I don’t think the Florida legislature can defend its size-based classifications. (Note: many laws contain size-based distinctions that are constitutional, but speech restrictions get more stringent judicial review).
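To make the overbreadth concrete, the law’s two-pronged size test can be sketched as a simple predicate. The thresholds come from the law itself; the scenario numbers below are hypothetical illustrations, not real companies:

```python
def covered_by_sb7072(annual_revenue_usd: int, monthly_global_participants: int) -> bool:
    """Sketch of SB 7072's size test: revenue OR user count, with no
    requirement that either metric have any connection to Florida."""
    return (annual_revenue_usd >= 100_000_000
            or monthly_global_participants >= 100_000_000)

# A low-revenue nonprofit with a large global audience is swept in:
assert covered_by_sb7072(5_000_000, 150_000_000)
# A $100M-revenue company with almost no Florida users is swept in:
assert covered_by_sb7072(100_000_000, 1_000)
# Only a service small on BOTH metrics escapes:
assert not covered_by_sb7072(5_000_000, 1_000)
```

Because the two prongs are joined by OR and neither is tied to Florida activity, the only way out is to be small on both metrics simultaneously.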
Preferential Treatment for Theme Park Owners. The law excludes theme park owners from the definition of Internet services. Yes, the exclusion is as ridiculous as it sounds. No, it is not constitutionally defensible.
Preferential Treatment for Journalistic Enterprises. The law defines journalistic enterprises based on content volume or audience size. It doesn’t matter if they actually publish journalism. For example, journalistic enterprises include anyone who does business in Florida and publishes 100k+ words online [note: I probably blog over 100k words/year; and virtually every law professor has published over 100k words because most law review articles are 20k+ words] and has either 50k+ paid subscribers or 100k+ MAUs. Given the ridiculously low word count threshold, which picks up any company of any size at all, the law effectively says EVERY website with 100k+ MAUs qualifies as a “journalistic enterprise.” (In the Goldman/Miers essay, we’ll also explain how “MAUs” is ambiguous because it lacks a uniform definition). This thoughtless drafting creates a massive number of false positives.
The law says that Internet services cannot “censor, deplatform, or shadow ban a journalistic enterprise based on the content of its publication or broadcast.” It doesn’t matter if the content is defamatory, harmful to readers, or creates other tortious liability–it must stay. This also means that any decent-sized corporation with the requisite word count/MAUs can freely spam, and Internet services can’t do a damn thing about it.
Even if the law were properly drafted–and it clearly is not–so that it applied only to media enterprises (see my recent blog post on this topic), there is no constitutional basis to privilege their content over the content from other presumptively trustworthy sources, such as other state governments. And to the extent that it privileges Floridian journalistic enterprises over non-Floridian journalistic enterprises, it violates several constitutional provisions.
Plus, as a must-carry law, the law essentially requires one media (Internet services) to carry the content of other, possibly rival, media entities. Outside of the essential facilities doctrine, can you think of other analogous statutory obligations for media entities?
Because a legislature can’t impose must-carry rules on Internet services to favor other media entities, and because the drafting is so botched that the definition privileges far more than just media entities, this distinction cannot survive constitutional scrutiny.
Preferential Treatment for Political Candidates. The law privileges political candidates in several ways.
First, the law says Internet services can’t “willfully” deplatform political candidates, and the services must create a mechanism for political candidates to self-identify. Noncompliance leads to stiff fines.
There’s no doubt the anti-deplatforming rule is unconstitutional. In Miami Herald v. Tornillo, the Supreme Court struck down a Florida law granting political candidates a right to reply in newspapers that criticized them. This isn’t a close call.
Second, the law says Internet services may “not apply or use post-prioritization or shadow banning algorithms for content and material posted by or about a user who is known by the social media platform to be a candidate” (but ads are OK). As the NetChoice/CCIA complaint makes clear, this provision doesn’t just privilege content FROM political candidates, it privileges content ABOUT political candidates. That reshapes political discourse in ways I suspect the legislature didn’t properly contemplate. For example, it may render Internet services powerless to clean up outright lies about political candidates (from opponents or malefactors). The resulting widespread and malicious misinformation could hugely damage our democracy.
There’s also the technical impossibility of Internet services recognizing every time users are talking about a political candidate. Dumb word filters won’t be enough. Candidates can be referenced in memes/GIFs, using euphemisms/nicknames, and many other ways that will thwart automated implementation.
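A toy keyword filter illustrates the detection problem. The candidate name and posts below are hypothetical; the point is that exact-match filtering, the only approach that scales, misses most of the ways candidates actually get referenced:

```python
# Hypothetical registry of self-identified candidates
CANDIDATE_NAMES = {"jane roe"}

def mentions_candidate(post: str) -> bool:
    """Naive substring filter -- the 'dumb word filter' that can't satisfy the law."""
    text = post.lower()
    return any(name in text for name in CANDIDATE_NAMES)

# Direct mention: caught.
assert mentions_candidate("Jane Roe is unfit for office")
# Initials, nicknames, euphemisms: missed.
assert not mentions_candidate("J.R. is unfit for office")
# Memes, GIFs, and images contain no matchable text at all: missed.
assert not mentions_candidate("[image macro mocking the incumbent]")
```

Any filter loose enough to catch nicknames and memes would flag enormous amounts of unrelated speech; any filter tight enough to avoid that misses the references the law purports to cover.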
Third, the law says any Internet service “that willfully provides free advertising for a candidate must inform the candidate of such in-kind contribution.” I don’t understand what prompted this provision. Are any services currently handing out free ads to candidates, or is this kiboshing some paranoid dystopian hypothetical scenario that isn’t part of our current reality? Further, the law says it’s not free ads when the services organically present candidates’ content, but only if that content is shown “in the same or similar way as other users’ posts, content, material, and comments.” So the law treats organic content as “free ads” if it’s not shown “the same or a similar way”? How will that be measured?
Fundamentally, equal treatment for political candidates imposes false equivalencies. Political parties routinely espouse ideas that are not credible or legitimate. You never hear about these parties because they are fringe by nature and outside the traditional media spotlight. By treating all political candidates equally, fringe or not, the law would vastly elevate and normalize pernicious and non-credible ideas that don’t deserve such treatment.
Some other problems with the law’s political candidate privileges:
- Political candidates aggressively hustle for votes and contributions. The law makes it difficult or impossible for Internet services to put the brakes on this.
- Other than the old Fairness Doctrine, are there other constitutionally permitted laws that require businesses to authenticate political candidates and give them preferential treatment compared to other citizens? FWIW, there are likely constitutional limits on states requiring Internet services to authenticate users. See Backpage v. McKenna.
- If an “access software provider” assesses a political candidate’s content or website as a security threat, does the law require them to ignore that? If so, imagine how malefactors could weaponize a guaranteed free pass around anti-virus/anti-malware software.
- If an Internet service wants to create a politics-free zone–say, it’s a service for knitters where political debates will crack the community into two, or a service that channels political discourse into designated spaces–the law says “tough shit.”
- If a political candidate’s content breaks the law or the service’s house rules, the law apparently says “tough shit.”
Preferential Treatment of Advertising. The “journalistic enterprises” restrictions and the restrictions on post-prioritization/shadowbanning political candidates both exclude advertising from their regulatory scope–in other words, those provisions privilege advertising over editorial content. I understand the goal was to allow Internet services to keep selling ads, but from a constitutional standpoint, the distinction is backwards. Normally, advertising gets a lower level of constitutional protection than editorial content. Worse, these distinctions say it’s fine for Internet services to be paid to shape public opinion, but it’s not OK to shape public opinion gratis when it’s the Internet services’ own editorial voice. That seems to undermine the legislature’s concerns about how public opinion is shaped; the law says shaping isn’t a problem–just make sure someone’s paying you to do it.
Differential Treatment for Obscene Content. The law requires Internet services to notify users in the event their content is “censored” or “shadowbanned,” unless the content is “obscene,” in which case no notice is required. Huh? Why is obscene content treated differently than other categories of constitutionally unprotected, illegal, or harmful content? ¯\_(ツ)_/¯
More Comments on Section 4
Section 4 is jam-packed with more terrible and unconstitutional policy ideas, including:
- Internet services must “publish the standards, including detailed definitions, it uses or has used for determining how to censor, deplatform, and shadow ban.” How much disclosure will suffice? Every Internet service already has a TOS with behavioral restrictions. Exactly how would a search engine or “access software provider” satisfy this requirement?
- Internet services must “apply censorship, deplatforming, and shadow banning standards in a consistent manner among its users on the platform.” Internet services strive for consistency, but perfect consistency isn’t achievable. Content moderation is hard, especially at scale. Consistency is even more illogical for search engines, especially when search engines provide personalized search results that, by definition, don’t seek to provide the same results to searchers. The law provides a private right of action for inconsistent content moderation that includes statutory damages of up to $100k. I don’t think the court system is ready for eager Floridians telling Internet services to show them the money for not treating their content “consistently.”
- Internet services must “inform each user about any changes to its user rules, terms, and agreements before implementing the changes and may not make changes more than once every 30 days.” Can you imagine a legislature telling a newspaper or a book publisher that it can change its editorial standards only once a month? In fact, sometimes Internet services must make rules on the fly to deal with exigencies, such as unexpected domestic insurrections or life-or-death COVID19 developments.
- Internet services may “not censor or shadow ban a user’s content or material or deplatform a user from the social media platform…[w]ithout notifying the user who posted or attempted to post the content or material.” The notice must include “a thorough rationale explaining the reason that the social media platform censored the user [and] a precise and thorough explanation of how the social media platform became aware of the censored content or material, including a thorough explanation of the algorithms used, if any, to identify or flag the user’s content or material as objectionable.” However, as mentioned above, no notice is required when the material is “obscene.” The requirement of notice and explanation sounds attractive to many people who might oppose other parts of this law, but it’s flatly inconsistent with operations at scale. How exactly is this notice and explanation process supposed to work with the billions of items a day that are squashed as spam? The notice requirement is also coupled with a private right of action, so high-volume spammers who don’t get proper notice will view the damages from Internet services as their primary revenue source.
- Internet services must “[p]rovide a mechanism that allows a user to request the number of other individual platform participants who were provided or shown the user’s content or posts [and provide], upon request, a user with the number of other individual platform participants who were provided or shown content or posts.” This is basically a mandatory analytics provision, but services might reasonably want to charge for providing analytics. Furthermore, many regulators have demanded that Internet services make themselves less “addictive” (such as efforts to deemphasize or ban “likes”). Mandatory readership stats may counterproductively increase service addictiveness.
- Internet services must “[c]ategorize algorithms used for post-prioritization and shadow banning [and allow] a user to opt out of post-prioritization and shadow banning algorithm categories to allow sequential or chronological posts and content.” They also have to tell users every year about algorithm usage and users’ ability to opt out. I have no idea what it means to “categorize algorithms.” I have strong objections to legislatures telling Internet services how to order or sort their content, even just as a user option. This would be like a legislature telling newspapers that they have to print an edition of the paper that displays stories in the order the reporters finished them. Not only would such an edition be costly to develop and maintain, but it would be useless to readers. The sequential/chronological version of any service will be a lot like ChatRoulette or Omegle. You would expect to find dick pics more frequently than any content you actually want. This provision is flatly unconstitutional.
- Internet services must “allow a user who has been deplatformed to access or retrieve all of the user’s information, content, material, and data for at least 60 days.” What is the user’s content, and how does it intersect with the content of other users? What are the privacy implications? If the content is illegal (say, copyright infringing or CSAM), does the law punish Internet services for not returning it to the user?
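The chronological opt-out demanded in the algorithm-categorization bullet above amounts to a one-line change in feed logic, which is what makes its consequences easy to see. A minimal sketch, assuming a toy post structure (the posts and scores are invented):

```python
from datetime import datetime

# Hypothetical feed items: a relevance score assigned by ranking, plus a timestamp
posts = [
    {"text": "useful post from a trusted source", "score": 0.9, "ts": datetime(2021, 5, 1)},
    {"text": "low-quality post published minutes ago", "score": 0.1, "ts": datetime(2021, 5, 2)},
]

def feed(posts, chronological: bool):
    """Opting out of 'post-prioritization' flips relevance ranking to
    reverse-chronological order -- recency becomes the only signal."""
    key = (lambda p: p["ts"]) if chronological else (lambda p: p["score"])
    return sorted(posts, key=key, reverse=True)
```

With `chronological=True`, the low-quality-but-recent post jumps to the top of the feed; whoever posts last wins, which is exactly the ChatRoulette/Omegle dynamic described above.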
A reminder that many Internet services are currently free to use. With that in mind, which of these Section 4 obligations can Internet services charge users to receive? Does it all have to be free? If not, it’s easy to imagine some countermoves that Internet services could make to the disadvantage of Florida users.
On behalf of their members, NetChoice and CCIA filed a widely anticipated challenge to the law. The complaint emphasizes the law’s constitutional problems: “The Act is so rife with fundamental infirmities that it appears to have been enacted without any regard for the Constitution.” The complaint also says:
Rather than preventing what it calls ‘censorship,’ the Act does the exact opposite: it empowers government officials in Florida to police the protected editorial judgment of online businesses that the State disfavors and whose perceived political viewpoints it wishes to punish.
The complaint also raises Section 230 and dormant commerce clause concerns.
The complaint has a remarkable 16 lawyers on its caption. The message is clear: the Internet services will spare no expense.
Florida voters, if your elected officials voted yes on this law, you deserve better. FIX THAT in the next election.
Many of the Section 4 restrictions resemble the Santa Clara Principles. If you’ve been championing “digital due process,” as part of your vision of “platform governance,” congratulations! The Florida legislature heard your pleas. I assume you’re now taking a victory lap… Or, perhaps you are feeling a little queasy because legislative compulsion of “digital due process” actually looks really censorial? A perhaps obvious point: when mandated by the government, “platform governance” is pure censorship; and if you think the best way to fix Internet service “censorship” is to give the ***government*** more power to control speech, you are doing it very, very wrong.
This law is another reminder of why I have categorically opposed all state-level efforts to regulate the Internet. Because the Internet lacks geographic boundaries, states are the wrong level of government to set the rules for a borderless electronic network. More importantly, state legislatures have systematic defects when it comes to Internet policy-making, such as the lack of technical expertise, parochialism towards non-constituents, zero fucks about the constitutionality of their laws, and as evidenced here, a willingness to champion #MAGA fealty at the expense of the best interests of their state residents. This law signals the kind of regulatory garbage that will be ubiquitous when Section 230 (after Congressional gutting) no longer holds back state legislatures. If that doesn’t horrify you, then you and I probably don’t have a shared vision about what makes America great.