Comments on the Oversight Board’s Decision Regarding Trump’s Facebook Account

Today, the Oversight Board issued its decision reviewing Facebook's suspension of Trump's account over two posts Trump made during the January 6 insurrection. The decision covers a lot of ground (it's nearly 12,000 words), and I'll only cover part of it. It makes three main points:

  • Trump violated Facebook’s rules.
  • Facebook was justified in suspending Trump's account, but not in suspending it for an indefinite period.
  • Facebook should publicly clarify several policies.

The “holding” is that Facebook must come back to the Board within 6 months and better justify its remedy. (Why did the Board give Facebook so much time?) The Board explains:

In applying a vague, standardless penalty and then referring this case to the Board to resolve, Facebook seeks to avoid its responsibilities. The Board declines Facebook’s request and insists that Facebook apply and justify a defined penalty.

I expect Facebook will address the Board's critique by converting the indefinite suspension into a permanent termination. The Board gave Facebook a roadmap for doing so, noting that Facebook had flagged many of Trump's prior posts but never issued the associated strikes. Facebook can regrade those posts, issue the strikes, and then terminate Trump's account for accumulating too many of them.
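To make the mechanics concrete, here is a minimal sketch of how that regrade-and-strike process might work. Everything in it is my own illustration: the severity weights, the five-strike threshold, and the function names are hypothetical, since Facebook has not published how its strike system actually operates.

```python
from dataclasses import dataclass

# Hypothetical severity weights and termination threshold. Facebook's
# real strike system isn't public, so these values are illustrative only.
STRIKE_VALUES = {"low": 1, "severe": 2}
TERMINATION_THRESHOLD = 5

@dataclass
class Account:
    name: str
    strikes: int = 0
    terminated: bool = False

def regrade_and_strike(account: Account, flagged_severities: list) -> None:
    """Re-review previously flagged posts, issue the strikes that were
    never recorded, and terminate once the threshold is crossed."""
    for severity in flagged_severities:
        account.strikes += STRIKE_VALUES[severity]
        if account.strikes >= TERMINATION_THRESHOLD:
            account.terminated = True
            break

acct = Account("example-account")
regrade_and_strike(acct, ["severe", "low", "severe"])
print(acct.strikes, acct.terminated)  # -> 5 True
```

The point of the sketch is that once previously flagged posts are regraded and counted, termination follows from a preexisting, defined rule rather than an ad hoc penalty, which is exactly the kind of justification the Board is demanding.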

I expect most media coverage will focus on the Board's validation that Trump violated Facebook's rules, but I personally find the Board's discussion of the remedial consequences more noteworthy. The decision says some interesting things about how Facebook (and other Internet services) should respond to violations of their rules. I did a deep dive on that issue in my Content Moderation Remedies paper.

Overview of the Decision

Trump Violated Facebook’s Rules. The Board, apparently unanimously, agreed with Facebook that Trump’s posts during the January 6 insurrection violated Facebook’s community standards:

Facebook’s Community Standard on Dangerous Individuals and Organizations says that users should not post content “expressing support or praise for groups, leaders, or individuals involved in” violating events. Facebook designated the storming of the Capitol as a “violating event” and noted that it interprets violating events to include designated “violent” events.

At the time the posts were made, the violence at the Capitol was underway. Both posts praised or supported people who were engaged in violence. The words “We love you. You’re very special” in the first post and “great patriots” and “remember this day forever” in the second post amounted to praise or support of the individuals involved in the violence and the events at the Capitol that day…

The Board finds that the two posts severely violated Facebook policies and concludes that Facebook was justified in restricting the account and page on January 6 and 7….Given the circumstances, restricting Mr. Trump’s access to Facebook and Instagram past January 6 and 7 struck an appropriate balance in light of the continuing risk of violence and disruption.

The Board noted that Trump also may have violated the Community Standard on Violence and Incitement, but Facebook didn't rely on that standard and neither does the Board. However, a minority of the Board would have applied that standard and believes Trump violated Facebook's "Dignity" value as well.

Facebook’s “Indefinite Suspension” Remedy Isn’t OK. The Board indicates that Facebook’s short-term suspension of Trump’s account was justified in light of the harm Trump caused at the insurrection and the risks of further harm after the insurrection. However, the Board takes issue with Facebook’s use of an indefinite suspension:

Facebook’s imposition of an “indefinite” restriction is vague and uncertain. “Indefinite” restrictions are not described in the Community Standards and it is unclear what standards would trigger this penalty or what standards will be employed to maintain or remove it. Facebook provided no information of any prior imposition of indefinite suspensions in any other cases. The Board recognizes the necessity of some discretion on Facebook’s part to suspend accounts in urgent situations like that of January, but users cannot be left in a state of uncertainty for an indefinite time….The Board finds that it is not permissible for Facebook to keep a user off the platform for an undefined period, with no criteria for when or whether the account will be restored.

I think the Board is right to call out Facebook for using an "indefinite" suspension, though Facebook could take the position that the suspension was pending the Board's review. (It may be easier for Facebook to restore suspended accounts than terminated ones.) Facebook can address the Board's concerns by converting the indefinite suspension into a permanent termination, if it can justify that outcome. That's probably what Facebook should have done in the first place.

The Board gives Facebook some guidance about how to craft remedies for user violations:

Facebook should use less restrictive measures to address potentially harmful speech and protect the rights of others before resorting to content removal and account restriction….This penalty must be based on the gravity of the violation and the prospect of future harm. It must also be consistent with Facebook’s rules for severe violations, which must, in turn, be clear, necessary and proportionate.

In my Content Moderation Remedies paper, I agree with both of these points.
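To illustrate what a "necessary and proportionate" penalty ladder could look like in practice, here is a minimal sketch. The remedy rungs, the 1-5 scoring of gravity and future harm, and the cutoffs are all my own hypothetical constructions, not anything the Board or Facebook has specified; the only idea taken from the decision is that remedies should escalate with the gravity of the violation and the prospect of future harm, starting with less restrictive measures.

```python
from enum import IntEnum

class Remedy(IntEnum):
    # Ordered least to most restrictive, per the Board's guidance to try
    # "less restrictive measures" before removal and account restriction.
    # These rungs and the cutoffs below are illustrative assumptions.
    WARNING_LABEL = 1
    CONTENT_REMOVAL = 2
    TIMED_SUSPENSION = 3
    TERMINATION = 4

def choose_remedy(gravity: int, future_harm_risk: int) -> Remedy:
    """Pick the least restrictive remedy proportionate to the violation's
    gravity and the prospect of future harm (each scored 1-5 here)."""
    score = gravity + future_harm_risk
    if score >= 9:
        return Remedy.TERMINATION
    if score >= 7:
        return Remedy.TIMED_SUSPENSION
    if score >= 5:
        return Remedy.CONTENT_REMOVAL
    return Remedy.WARNING_LABEL

print(choose_remedy(gravity=5, future_harm_risk=4).name)  # TERMINATION
```

Note the design choice: a defined, graduated scale means any penalty, including termination, can be traced to stated criteria, which addresses the Board's objection to "vague, standardless" sanctions.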

Recommendations to Facebook. The Board made several additional policy recommendations to Facebook, including:

  • “Facebook should publicly explain the rules that it uses when it imposes account-level sanctions against influential users….If a head of state or high government official has repeatedly posted messages that pose a risk of harm under international human rights norms, Facebook should suspend the account for a period sufficient to protect against imminent harm. Suspension periods should be long enough to deter misconduct and may, in appropriate cases, include account or page deletion.”
  • “Undertake a comprehensive review of Facebook’s potential contribution to the narrative of electoral fraud and the exacerbated tensions that culminated in the violence in the United States on January 6. This should be an open reflection on the design and policy choices that Facebook has made that may allow its platform to be abused.”

Implications

Content moderation decisions can't please everyone. As I've repeatedly discussed, content moderation is a zero-sum game: someone gets the desired outcome, and someone else doesn't. That's especially true of Trump's social media accounts, which have been polarizing (and a nonstop source of blog fodder). Indeed, Trump's Twitter account spurred diametrically opposed lawsuits against Twitter: one sued Twitter for not removing the account, the other sued Twitter for removing it. Isn't this the quintessential no-win situation? Every decision will make some people unhappy.

Is Facebook getting good ROI from the Oversight Board? In an attempt to overcome the no-win nature of content moderation decisions, Facebook invested $100M+ in the Oversight Board, hoping that the Board, instead of Facebook, would take the heat from the folks unhappy with the zero-sum outcome. Today's decision, involving a high-profile account where people are guaranteed to complain about the results, is exactly the scenario that Facebook thought was worth all that money. If you don't like that Trump's Facebook account is gone, Facebook wants you to yell at the Oversight Board, not at Facebook.

I've never understood Facebook's choice because I don't think it can achieve its desired result. The haters will still hate Facebook. Worse, if the Oversight Board isn't viewed as credible, then Facebook's honoring of its decisions hurts Facebook's reputation. Cognizant of those credibility concerns, the Oversight Board has taken many steps to demonstrate its independence from Facebook, including lobbing many criticisms at Facebook (in this decision and elsewhere). But when the Oversight Board criticizes Facebook, a necessary precondition to the Board building its own credibility, it gives more fuel to the Facebook haters. For example, the Board unambiguously criticized Facebook for making up an off-the-books, sui generis remedy and applying it to Trump's account. For those who think Facebook was out to get Trump, this decision stokes their anger.

Thus, I don't see how Facebook solves any of its problems via the Oversight Board. What Facebook really wants is a credible and completely independent body that has only fawning praise for Facebook's decisions. Since that's not possible, Facebook has either bought a $100M+ shill or empowered an independent body to publicly highlight its mistakes. So Facebook got a win here in having the Board endorse its conclusion that Trump broke the rules, but I think the decision is ultimately a net loss for Facebook.

Because this issue isn't resolved, it will flare back up. Facebook has to come back to the Board within the next 6 months to justify its remedy. That ensures another round of global media coverage of these issues and will spur the haters to once again publicly declare why they hate Facebook. To me, that sounds like a bad outcome, but maybe Facebook and the Oversight Board see any press coverage as a net win.

Don't overlook remedies. It's not sexy for the Oversight Board to focus on remedies, but remedies are critical to the legitimacy of any governance system, and all too often they are undertheorized compared to the substantive rules. I hope Facebook will follow the Oversight Board's instruction to think more systemically about how remedies fit into its overall content moderation scheme. I liked the Board's guidance, but there's more to this story, as I explain in my big paper.

Case citation: Case decision 2021-001-FB-FBR, Oversight Board, May 5, 2021