Plaintiffs Request Preliminary Injunction Against Florida’s Censorship Law (SB 7072)–NetChoice v. Moody
The Brief. The brief summarizes the constitutional reasons why the law should fail:
Disguised as an attack on “censorship” and “unfairness,” the Act in fact mounts a frontal attack on the targeted companies’ core First Amendment right to engage in “editorial control and judgment.” Miami Herald Publ’g Co. v. Tornillo, 418 U.S. 241, 258 (1974). The Act imposes a slew of content-, speaker-, and viewpoint-based requirements that significantly limit those companies’ right and ability to make content-moderation choices that protect the services and their users—and that make their services useful. The State has no legitimate interest, much less a compelling one, in bringing about this unprecedented result. Moreover, the law is anything but narrowly tailored: its blunderbuss restrictions do nothing to protect consumers or prevent deceptive practices, but instead throw open the door to fraudsters, spammers, and other bad actors who flood online services with abusive material. In short, the Act runs afoul of the basic First Amendment rule prohibiting the government from “[c]ompelling editors or publishers to publish that which reason tells them should not be published.”
Surprisingly, the plaintiffs don’t advance a Dormant Commerce Clause argument. The complaint, and several of the declarations, tee that issue up, but the brief itself ignores it. The DCC has helped wipe out numerous state regulations of the Internet, including the baby CDA laws of the late 1990s (see the flagship case, ALA v. Pataki), the anti-Backpage state laws, and a CA law mandating some privacy opt-outs. I don’t understand why this issue ended up on the cutting room floor.
Schruers’ Declaration. This declaration focuses on how content moderation is operationalized and the consequences of limiting Internet services’ editorial discretion. The declaration says: “content moderation efforts serve at least three distinct vital functions”:
- “moderation is an important way that some online services express themselves and effectuate their community standards, thereby delivering on commitments that they have made to their communities”
- “moderating content is often a matter of ensuring online safety”
- “moderation facilitates the organization of content, rendering an online service more useful”
As I’ve repeatedly said, content moderation can’t be done perfectly. The declaration emphasizes this: “For certain pieces of content, there is simply no right answer as to whether and how to moderate, and any decision holds significant consequences for the service’s online environment, its user community, and the public at large.”
Szabo Declaration. This declaration focuses on how restricted editorial discretion hurts Internet services, especially with respect to advertisers.
Veitch Declaration (YouTube). Some statistics:
16. In the first quarter of 2021, YouTube removed 9,569,641 videos that violated the Community Guidelines. The vast majority (9,091,315, or 95% of the total removals) were automatically flagged for moderation by YouTube’s algorithms and removed based on human confirmation of a violation. Less than 5% (478,326 videos) were removed based on initial flags by a user or other human flagger. This removal system is highly efficient: the majority of removed videos were removed before accumulating more than 10 views. In Q1 2021, 53% of the videos removed were due to child safety issues.
17. YouTube also removed over 1 billion comments in the first quarter of 2021, 99.4% of which were flagged for moderation by YouTube’s automated systems. In Q1 2021, 55.4% of those removed comments were due to spam.
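Those figures are internally consistent. As a quick sanity check (my own back-of-the-envelope arithmetic, not part of the declaration), a few lines of Python reproduce the quoted percentages:

```python
# Back-of-the-envelope check of the Q1 2021 video-removal figures quoted in the
# Veitch declaration. The raw counts come from the declaration; the arithmetic is mine.

total_removed = 9_569_641   # videos removed in Q1 2021
auto_flagged = 9_091_315    # removals initiated by automated flagging
human_flagged = 478_326     # removals initiated by a user or other human flagger

# The two flagging paths account for every removal.
assert auto_flagged + human_flagged == total_removed

print(f"automated share:  {auto_flagged / total_removed:.1%}")   # ~95.0%
print(f"human-flag share: {human_flagged / total_removed:.1%}")  # just under 5%
```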
Also, the law’s ban on services revising their editorial policies more frequently than 1x/30 days is stupid and pernicious, as the declaration explains:
21. S.B. 7072’s prohibition on changing rules more than once every 30 days would significantly limit YouTube’s ability to respond in real-time to new and unforeseen trends in dangerous material being uploaded by users, or new legal or regulatory developments. The harms of user-generated content are ever-evolving, and YouTube’s content moderation policies have necessarily had to evolve to address the same. YouTube must be able to react quickly to promote the safety of its users in changing and emerging contexts. In 2020, YouTube updated its policies related to medical misinformation alone more than ten times, which is in line with historical trends. In 2019, YouTube made over 30 updates to its content moderation policies generally (on average, once every 12 days). The same was true in 2018. Limiting YouTube’s ability to update policies, as S.B. 7072 mandates, means that YouTube would be forced to host unanticipated, dangerous, or objectionable content during those windows where the law prohibits YouTube from making any changes to its content policies.
The declaration also explains that “consistency” is impossible, especially during a pandemic:
In response to Covid-19, YouTube took steps to protect the health and safety of our extended workforce and reduced in-office staffing. As a result of reduced human review capacity, YouTube had to choose between limiting enforcement while maintaining a high degree of accuracy, or using automated systems to cast a wider net to remove potentially harmful content quickly but with less accuracy. YouTube chose the latter, despite the risks that automation would lead to over-enforcement–in other words, removing more content that may not violate our policies for the sake of removing more violative content overall. For certain sensitive policy areas, such as violent extremism and child safety, we accepted a lower level of accuracy to ensure the removal of as many pieces of violative content as possible. This also meant that, in these areas specifically, a higher amount of non-violative content was removed. The decision to over-enforce in these policy areas–out of an abundance of caution–led to a more than 3x increase in removals of content that our systems suspected was tied to violent extremism or potentially harmful to children. These included dares, challenges, or other posted content that may endanger minors.
Potts Declaration (Facebook):
if the Act’s restrictions go into effect, it will, among other things, force Facebook to display, arrange, and prioritize content it would otherwise remove, restrict, or arrange differently; it will chill Facebook’s own speech; it will lead some users and advertisers to use Facebook less or stop use entirely; it will force Facebook to substantially modify the design and operation of its products; it will force Facebook to disclose highly sensitive, business confidential information; and it will impose excessive burdens on Facebook to notify users every time their content is removed, restricted, or labeled.
Rumenap Declaration (Stop Child Predators). More on the stupidity of the 30-day restriction on editorial changes:
This restriction all but guarantees that the online platforms will be hamstringed in responding to new threats to children’s online safety and to new methods of distributing or soliciting photos and videos of child sexual abuse. It will also hinder their ability to adapt to predators’ schemes. As history and experience have shown, predators continue to find a way around existing safeguards, requiring us, the platforms, and the public to remain ever vigilant.
Pavlovic Declaration (Etsy). This declaration focuses on the problems Etsy would face if it were required to host the content of Nazis or other hate groups.
Case library
- Preliminary injunction brief (if you get an error message downloading one of the files below, hit refresh)
- NetChoice v. Moody complaint.
- Text of SB 7072. Blog post on the statute.