Quick Links from the Past Year, Part 8 (Editorial Transparency)

* NY Assembly Bill A7865A. A dangerous new mandatory editorial transparency law to supplement the Florida and Texas laws.

The law defines “hateful conduct” as “the use of a social media network to vilify, humiliate, or incite violence against a group or a class of persons on the basis of race, color, religion, ethnicity, national origin, disability, sex, sexual orientation, gender identity or gender expression.” The “incite violence” piece might survive the Brandenburg standard. The rest is constitutionally protected “hate speech.” For example, “vilify” usually means to defame, and “group defamation” isn’t really a thing in the US.

The law doesn’t explicitly state that social media services must reduce hateful conduct, but the bill’s intent is obvious. This is reinforced by the statements of bill co-sponsor Gina Sillitti:

“Hate has no place in New York State, whether on our streets or online,” Sillitti said. “It’s unconscionable that a white supremacist livestreamed his terrorist attack on the Buffalo community, and that the clips were viewed millions of times. We’ve seen how such heinous footage can embolden other extremists and traumatize unsuspecting viewers. I helped pass legislation to enhance accountability across social networks and ensure users can easily report hateful content. I’d like to thank Senator Kaplan for championing this legislation in the Senate and working collaboratively to stamp out hate wherever it is found in New York.”

To the extent the law pretextually claims to be just about disclosure but is in fact intentionally designed to suppress constitutionally protected speech, it should be an easy call to declare the law unconstitutional.

The law defines “social media” as “internet platforms that are designed to enable users to share any content with other users or to make such content available to the public.” In other words, all UGC services.

Social media networks must “provide and maintain a clear and easily accessible mechanism for individual users to report incidents of hateful conduct.” That’s easy enough to comply with: set up an email alias that goes to a black hole. However, the law then ambiguously says the mechanism “shall allow the social media network to provide a direct response to any individual reporting hateful conduct informing them of how the matter is being handled.” Does this mean the services must provide an explanation to reporting users? Or simply have the ability to do so? The savings clause suggests a response is mandatory, but remember that the 11th Circuit ruling in NetChoice v. Florida Attorney General struck down the explanations requirement.

Social media networks must also publish a policy explaining “how such social media network will respond and address the reports of incidents of hateful conduct on their platform.”

The law’s ambiguous savings clause essentially admits the whole law is unconstitutional: “Nothing in this section shall be construed (a) as an obligation imposed on a social media network that adversely affects the rights or freedoms of any persons, such as exercising the right of free speech pursuant to the first amendment to the United States Constitution, or (b) to add to or increase liability of a social media network for anything other than the failure to provide a mechanism for a user to report to the social media network any incidents of hateful conduct on their platform and to receive a response on such report.”

If services choose to comply with this law, they could apparently do so by (1) adding a provision to their TOS saying “we exercise our editorial discretion in deciding if and how to deal with hateful conduct,” (2) creating a black-hole email alias “ReportHatefulConduct@service.com,” and (3) setting up an auto-reply to every incoming report that says “we might do something about your report, we might not. Who knows, really? That’s what editorial discretion means.” Obviously all of this is worthless to everyone, but even forcing this minimal compliance should be more than the state can constitutionally impose. If the law requires services to do anything more than this token response, it should be clearly unconstitutional.
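For the literal-minded, step (3) of that token-compliance recipe is trivial to build. Below is a purely illustrative sketch of the auto-responder; the alias, reporter address, and local SMTP relay are all hypothetical placeholders, not anything a real service runs:

```python
# Purely illustrative sketch of the token-compliance auto-reply described
# above. The alias, recipient, and SMTP relay are hypothetical placeholders.
import smtplib
from email.message import EmailMessage

REPORT_ALIAS = "ReportHatefulConduct@service.com"  # the black-hole alias
AUTO_REPLY_BODY = (
    "We might do something about your report, we might not. "
    "Who knows, really? That's what editorial discretion means."
)

def send_auto_reply(reporter_address: str) -> None:
    """Send the fixed, non-committal acknowledgment to a reporting user."""
    msg = EmailMessage()
    msg["From"] = REPORT_ALIAS
    msg["To"] = reporter_address
    msg["Subject"] = "Re: Your hateful conduct report"
    msg.set_content(AUTO_REPLY_BODY)
    with smtplib.SMTP("localhost") as smtp:  # assumes a local SMTP relay
        smtp.send_message(msg)

if __name__ == "__main__":
    send_auto_reply("user@example.com")  # hypothetical reporting user
```

The point isn’t the code; it’s that the statute’s “direct response” requirement can be satisfied with a few lines of boilerplate, which is why even this minimal mandate accomplishes nothing.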

* Twitter, Inc. v. Paxton, 2022 WL 610352 (9th Cir. March 2, 2022). I covered this case (critically) in my Constitutionality of Mandatory Editorial Transparency piece. On the plus side, it’s correct that investigatory targets usually should not be able to judicially preempt administrative investigations before they happen. On the minus side, the ruling is tone-deaf: Paxton’s investigation in this situation is self-admittedly censorial, and the court leaves Twitter with few good options if Paxton chooses to leave the investigation hanging as a censorial millstone around Twitter’s neck. When an investigation itself has censorial consequences regardless of what happens next, it should be redressable in court.

Some of the key quotes from the opinion:

“Even if content moderation is protected speech, making misrepresentations about content moderation policies is not….misrepresentations are exactly what are prohibited by Texas’s unfair and deceptive trade practices law; this is the very thing that Paxton claims OAG is trying to investigate. And at this stage, OAG hasn’t even alleged that there is a violation; OAG is just trying to look into it….Finding this case ripe would require federal courts in California to determine the constitutionality of Texas’s unfair trade practices law in a hypothetical situation, before Texas has even decided whether its law applies.”

“because Twitter need not comply with the CID, OAG has taken no action that requires immediate compliance. Moreover, any hardship to Twitter is minimized because Twitter may still raise its First Amendment claims before OAG brings an unfair trade practices suit. If OAG moves to enforce the CID, Twitter can raise its First Amendment claims at that time, before any duty to comply applies, and without facing any charges under the underlying Texas unfair business practices statute. Twitter also could have challenged the CID in Texas state court.”

“Twitter has made statements about balance…Twitter’s statements can be investigated as misleading just like the statements of any other business….a reasonable person could think that Twitter’s statements about content moderation were true.” Editorial “balance,” or any other statement of editorial consistency or neutrality, is obviously puffery, or at least not a provable statement of fact, for the many reasons I’ve raised over the years.

“OAG’s investigation is not a system of informal censorship.” The panel makes this naked factual assertion without citation, but why does the panel believe this? Paxton brazenly announced publicly that he would retaliate for Trump’s deplatforming, so the panel’s factual determination seems obviously wrong.

The net effect of this ruling is that an AG can issue a CID to order disclosures about editorial practices and just leave it hanging over the service’s head forever. The obvious course of action for Twitter or any CID recipient is to simply ignore the CID and force the AG into court to enforce it. Setting up legal incentives to encourage noncompliance with investigatory demands is bad policy, but it’s still better than censorship-via-CID.

Twitter has filed a petition for rehearing en banc.

* DC v. Meta, Case No. 2021 CA 004450 2 (D.C. Superior Ct. March 9, 2022). The court sets up the dispute:

OAG is investigating whether Meta made any false or misleading public statements about its efforts to enforce its “content moderation policies” prohibiting misinformation about COVID-19 vaccines in Facebook posts. OAG issued an investigative subpoena to Meta that seeks, among other things, the identities of Facebook users that Meta determined violated its content moderation policies for vaccine misinformation through public posts.

Given that vaccine misinformation is often constitutionally protected, this sounds like the path towards censorship, but the court says “great!” Some standout quotes from the opinion:

“Nor is the District trying to ‘unmask’ Facebook users – it is simply seeking information about Meta’s enforcement actions against users who were never masked because they publicly posted content about vaccines using the identities that the District seeks to obtain.” Huh? The subpoena seeks the posters’ identities.

“If either party can be said to be regulating consumer speech, it is Meta through enforcement of its content moderation policies.” Seriously? The court doesn’t understand the difference between government regulation and private editorial activities?

“The District is only at the information-gathering stage of its investigation, and compliance with the subpoena would have no effect whatsoever on Meta’s content moderation policies or how it applies and enforces them. The District does not claim any right to dictate to Meta what content should remain on, or what content should be removed from, Facebook. The District represents that it is investigating whether Meta’s public statements concerning enforcements of its content moderation policies comply with the CPPA, not whether these policies are too weak or too strict, and Meta offers no reason to question this representation. Nor would enforcement of the subpoena require Meta to disseminate the District’s preferred message. Meta does not claim a right under the First Amendment or otherwise to disseminate false or misleading information about whether and how it enforces its content moderation policies.” In a footnote, the court adds “The District’s investigation is not targeted at Meta’s exercise of any First Amendment right and in any event, Meta does not suggest that compliance with the subpoena would in fact inhibit it from exercising its right to control its content moderation policies.” This is all demonstrably false. The investigation absolutely sends the message that Facebook needs to change and reprioritize its editorial decisions with respect to constitutionally protected speech.

The court also rejects any users’ First Amendment interests, despite applying exacting scrutiny! “OAG’s subpoena to Meta does not violate the First Amendment rights of Facebook users to express themselves in the on-going public debate on COVID-19 vaccines.”

  • (1) “the District has a compelling interest in investigating a company [that] has made false and misleading statements that violate the CPPA.”
  • (2) “OAG is not seeking information about the identity of all Facebook users who have posted any information about COVID-19 vaccines; OAG is seeking information only about Facebook users who Meta has determined violated its content moderation policies with respect to vaccines (and not its content moderation policies concerning other matters, such as hate speech).” OAG claims it needs the identities of the posters because it believes there are a small number of misinformation posters and it wants to see how Facebook is handling repeat offenders.
  • (3) “these users chose to publicly post the content with their identities, and the District is seeking only the identities that these users themselves employed in their public posts.”
  • (4) “nothing in the record suggests that providing this user-specific information to the District will result in any reprisals against Facebook users who violated Meta’s vaccine-related content moderation policies when they publicly posted the content along with their identities.”

Given millennia of history of governments claiming not to weaponize information against enemies, the court’s acquiescence makes total sense (ha ha). And that makes this statement equally credible: “Meta’s concerns about a chilling effect on Facebook users who want to post content that is negative or positive about COVID-19 vaccines is speculative.” There is a lot of speculation here, but it’s not coming from Meta.

* Zuru, Inc. v. Glassdoor, Inc., 2022 WL 2712549 (N.D. Cal. July 11, 2022).

Glassdoor “argues that it shouldn’t be required to identify how many people have seen the [allegedly defamatory] reviews, because that information isn’t relevant, is commercially sensitive, and would be impracticable to produce. Glassdoor’s relevancy and commercial-sensitivity arguments aren’t persuasive, but its burden argument is…

Glassdoor’s interest in protecting that information from widespread disclosure can be safeguarded by a protective order, which the parties can stipulate to and which would prevent Zuru from using Glassdoor’s confidential data outside this litigation and the New Zealand defamation case….

The Court won’t require Glassdoor to produce data the company doesn’t have. But Glassdoor presumably does keep some statistics about how many people use its website and view the reviews posted there; and it is possible that some of those statistics could be relevant to Zuru’s claim.”

* Missouri v. Biden, 2022 WL 2825846 (W.D. La. July 12, 2022).

Missouri and Louisiana claim that “Government Defendants have colluded with and/or coerced social media companies to suppress disfavored speakers, viewpoints, and content on social media platforms by labeling the content ‘disinformation,’ ‘misinformation,’ and ‘malinformation.’ Plaintiff States allege the suppression of disfavored speakers, viewpoints, and contents constitutes government action and therefore violates Plaintiff States’ freedom of speech in violation of the First Amendment to the United States Constitution.” Examples of the alleged government censorship: Hunter Biden’s laptop, the lab-leak theory of COVID-19’s origins, the efficacy of masking and lockdowns, and election integrity/voting-by-mail. The states seek to “prohibit Government Defendants from taking steps to demand, urge, encourage, pressure, or otherwise induce any social-media company or platform to censor, suppress, remove, deplatform, suspend, shadow-ban, de-boost, restrict access to content, or take any other adverse action against any speaker, content, or viewpoint expressed on social media.”

The court says the states have standing, distinguishing Hart v. Facebook, AAPS v. Schiff, and Changizi because the states’ “eighty-four-page Complaint sets forth much more detailed allegations and evidence against federal agencies and officials than the cases cited by Government Defendants. If Missouri and Louisiana do not have standing under the facts alleged, when would anyone ever have standing to address these claims?”

The court authorizes discovery that includes: “third-party subpoenas on up to five major social-media platforms seeking the identity of federal officials who have been and are communicating with social-media platforms about disinformation, misinformation, malinformation, and/or any censorship or suppression of speech on social media, including the nature and content of those communications.”