Politician-Operated Social Media Accounts Raise Many Thorny Legal Issues

In February, Justice Kagan joked that the Supreme Court justices “are not the nine greatest experts on the Internet.” That is certainly true–for example, the justices cannot publicly engage in ordinary social media interactions–yet they are getting a crash course on the Internet whether they want it or not. Their docket this term included:

  • Gonzalez v. Google about Section 230
  • Twitter v. Taamneh about the Anti-Terrorism Act
  • Counterman v. Colorado about the definition of true threats online
  • 303 Creative LLC v. Elenis about whether web designers can freely reject prospective customers

I expect the Supreme Court will ultimately take the 5th and 11th Circuit appeals in the Texas (NetChoice v. Paxton) and Florida (NetChoice v. Moody) social media censorship cases as well, though the arguments will roll over to next term.

On top of that docket, the Supreme Court recently took two more cases, Garnier v. O’Connor-Ratcliff (from the 9th Circuit) and Lindke v. Freed (from the 6th Circuit). Both cases involve government officials who blocked constituents at their social media accounts, but the circuits reached opposite results: the 9th Circuit found impermissible censorship, while the 6th Circuit did not. To me, the 6th Circuit opinion needs correction by the Supreme Court. In the interim, it has paralyzed lower courts, e.g., Fox v. Faison, 2023 WL 2763130 (M.D. Tenn. April 4, 2023).

Even if the 6th Circuit mistake gets fixed, these cases–like all of the Internet Law cases–have a non-trivial risk of going sideways in hugely problematic ways. In particular, the Supreme Court will be invited to opine on what constitutes state action online, and this could cross over to (unrelated, IMO) questions about whether social media services are or become state actors. Anything the Supreme Court says on that topic, other than a categorical rejection of the principle, will ignite litigation and regulation like we’ve never seen before.

There is a bit of irony to the Supreme Court granting certiorari in these two cases, because the Supreme Court had previously accepted a case on the same topic, Knight First Amendment Institute v. Trump. The Second Circuit’s opinion found that Pres. Trump engaged in unconstitutional censorship by capriciously blocking users at his Twitter account. It was a powerful ruling, but the Supreme Court vacated it and ordered the case dismissed as moot because Trump was no longer president, leaving a vacuum. I have lost track of the number of cases I’ve seen involving social media blocks by government officials, but the cases are voluminous. The Supreme Court’s opinion will affect dozens or hundreds of pending lawsuits.

It’s great that the Supreme Court granted cert in two companion cases because this will give the Court more relevant facts to inform its holdings. In theory, this could lead to clean and persuasive rulings that provide a lot of guidance to the lower courts. In practice, the opinions are unlikely to resolve all of the issues in play because there is wide factual variation among the cases, and two Supreme Court opinions cannot address the full range of facts.

In February, I spoke at a municipal law conference where I outlined some of the factual complexities that make it hard to compare cases. Here are some of the ways cases can be taxonomized based on how the accountholder uses the social media account (none of these taxonomies is meant to be complete):

  • completely personal usage. In general, these accounts should not be treated as state action. However, posts to these accounts may still have legal consequences for government employees, such as when the posts betray a personal bias that’s inconsistent with the job. The most obvious example is law enforcement officers who post racist content to their social media accounts and thereby lose their ability to testify credibly at trial.
    • A recent case in this genre is Marlak v. Conn. Dept. of Corrections, 2023 WL 1474622 (D. Conn. Feb. 2, 2023). Marlak worked as a correctional officer. He allegedly posted a meme to his private Facebook account depicting 5 men being hanged, labeled “Islamic Wind Chimes.” His employer terminated his employment because his “personal use of social media has undermined the public’s confidence in your ability to function in your position. The type of speech posted threatens the safety of staff and inmates who are Muslim.” His wrongful termination lawsuit partially survived a motion to dismiss.
    • See also In the Matter of Wayne Pearson, Bayside State Prison, Dept. of Corrections, 2023 WL 3311886 (N.J. App. Div. May 9, 2023). The court upheld the firing of a correctional officer who posted seemingly racist comments on his personal social media page.
  • usage only for political campaigning purposes. Courts have been inclined to treat these accounts as not state action, in part due to the constitutional deference given to political campaign content.
  • an account that existed at the time the accountholder became a government employee that mixes professional and personal content.
  • an account that the government employee newly creates in connection with the official role.
  • an account set up by a government organization.

[Note: Colorado just passed a law, HB 23-1306, which declares that an elected official is running “private social media” unless the account is supported by government resources or required by law. I suspect the constitutionality of that law will be in play with the Garnier and Lindke cases, even if the statute itself isn’t on the docket. The rule appears overinclusive because it lets government officials create accounts that look official and contribute to their overall public profiles, yet claim they manage the accounts on personal time and thereby get away with rampant censorship.]

Cases can also be taxonomized based on the type of restriction deployed by the accountholder (the technical capabilities vary by service, and services can change their functionality over time):

  • ban user
  • remove content
  • deploy keyword filters (the code sketch after this list illustrates how this kind of filtering sweeps in innocent speech).
    • See, e.g., PETA v. Tabak, 1:21-cv-02380-BAH (D.D.C. March 31, 2023). The judge allowed NIH to moderate “off-topic” Facebook comments using keyword filters that included the following terms: PETA, PETALatino, Suomi, Harlow, Animal(s), animales, animalitos, Cats, gatos, Chimpanzee(s), chimp(s), Hamster(s), Marmoset(s), Monkey(s), “monkies”, Mouse, mice, Primate(s), Sex experiments, Cruel, cruelty, Revolting, Torment(ing), Torture(s), torturing. To me, it seems beyond debate that NIH adopted these blocked keywords to target PETA content, and that such targeting has significant collateral damage (every instance of the word “cats” is blocked???). The Instagram blocks were even worse: “NIH’s custom keyword filter on Instagram contains fewer than thirty blocked keywords, nearly all related to animal testing” (including the word “stop”–really?).
      NIH should lose this case, but the court says “The comment threads at issue are limited public fora: virtual spaces opened by the government to the public for the purpose of the discussion of only certain subjects.” Every government actor can claim that they intend the online conversation to reach limited topics, i.e., the ones they like. So if this is the standard–if government filtering of the words “cats” and “stop” triggers only limited public forum analysis–what can’t the government do to censor speech in virtual spaces? (I’m putting aside the obvious problem that there may be circumstances where “PETA” is on-topic, but those comments will be blocked too.)
      The court is right about one thing: “if the standard is perfectly consistent enforcement, it is hard to imagine any social media commenting policy that would survive the test of reasonableness without severely throttling the public’s ex ante access to the forum.” This is why we will end up with broadcast-only social media accounts; but the alternative–selectively censored virtual forums–is a far worse outcome IMO.
  • services independently deploy their standard content moderation to government-operated accounts. This is the issue no one wants to address because it could be interpreted as the government delegating authority to the private actor, which raises problematic issues for the state action doctrine.
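
Keyword filtering of this sort is just substring matching, which makes the overblocking mechanical rather than incidental. Here is a minimal sketch in Python–my illustration, not NIH’s actual implementation, using a handful of terms from the blocklist quoted in the opinion–of how a naive filter cannot distinguish targeted speech from innocent speech:

```python
# A toy substring-based comment filter. The terms are a subset of the
# blocklist quoted in PETA v. Tabak; the matching logic is assumed for
# illustration and is not drawn from NIH's actual system.
BLOCKED_TERMS = ["peta", "cats", "monkey", "cruel", "torture", "stop"]

def is_hidden(comment: str) -> bool:
    """Hide the comment if any blocked term appears anywhere in it."""
    text = comment.lower()
    return any(term in text for term in BLOCKED_TERMS)

print(is_hidden("PETA opposes these experiments."))              # True (targeted)
print(is_hidden("Stop by the NIH open house this Friday!"))      # True ("stop")
print(is_hidden("My cats are healthy thanks to NIH research."))  # True ("cats")
print(is_hidden("Thank you for funding this work."))             # False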

These different restrictions can have a variety of effects, including:

  • preventing a user from making particular public posts
  • preventing a user from making any future public posts
  • preventing a user from sending private messages to the government, such as petitioning activity or feedback
  • preventing a user from reading official government announcements
  • preventing a user from reading peers’ comments

[For other ideas, see my Content Moderation Remedies paper.]

Finally, cases can be taxonomized based on the accountholder’s content moderation policies, including:

  • accountholder is acting retaliatorily
  • accountholder has no policy and makes ad hoc decisions
  • accountholder has a written policy that’s obviously constitutionally problematic
  • accountholder has a written policy that is facially neutral but is misapplying the policy for non-retaliatory reasons
  • accountholder has a written policy that is facially neutral and being applied neutrally

Putting all of these options together into a three-dimensional matrix, it’s clear that there are so many possible configurations that the Supreme Court cannot possibly anticipate or address them all. That ensures that this genre of cases will keep showing up at the Supreme Court.
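
To put a rough number on the combinatorics: treating the three taxonomies above as dimensions (and remembering that each list is incomplete), even this simplified matrix yields 100 distinct fact patterns before layering on the list of effects. A back-of-the-envelope enumeration, with labels that are just my shorthand for the lists above:

```python
from itertools import product

# Shorthand labels for the three (incomplete) taxonomies sketched above.
usage = ["personal", "campaign-only", "preexisting mixed", "new official", "org-created"]
restriction = ["ban user", "remove content", "keyword filter", "service's own moderation"]
policy = ["retaliatory", "no policy", "facially problematic",
          "neutral but misapplied", "neutral, applied neutrally"]

configurations = list(product(usage, restriction, policy))
print(len(configurations))  # 100 = 5 * 4 * 5
```

And each configuration can produce different effects (blocked posting, blocked messaging, blocked reading), so the real space of litigable fact patterns is larger still.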

Some additional implications:

* any rules have to assume that political figures will respond to public criticism with thin skin and retaliatory intent. We see this OVER and OVER again. See, e.g., Biedermann v. Ehrhart, 2023 WL 2394557 (N.D. Ga. March 7, 2023) (state representative blocked over 60 constituents–so many that the constituents created a club, #BlockedByGinny); Faskin v. Merrill, 2023 WL 149048 (M.D. Ala. Jan. 10, 2023) (“Defendant stipulates…that he blocked Plaintiffs from the @JohnJMerrill Twitter account because they posted tweets that were directed at him and that concerned election law, criticized him, or included comments with which he disagrees.”).

* any rules must ensure that government officials cannot broadcast propaganda free from constituent fact-checking unless the account clearly signals that it’s broadcast-only. Government officials can’t be allowed to selectively permit “fact-checking” only when it suits their interests.

* As I’ve mentioned repeatedly, if governments deploy any content moderation efforts, each and every decision has the potential to trigger constitutional litigation for intervening too much or not enough. That is not a sustainable option. This litigation risk pushes government accountholders to treat social media as broadcast-only and disregard the “social” features of “social” media. See Cooper-Keel v. State, 2023 WL 3991842 (W.D. Mich. June 14, 2023) (court system turned off comments on social media due to the content moderation challenges).
