Second Circuit Issues Powerful Section 230 Win to Facebook in “Material Support for Terrorists” Case–Force v. Facebook

In a 101-page set of opinions, the Second Circuit ruled emphatically for Facebook in one of the multitudinous lawsuits alleging that it provided material support to terrorists (in this case, Hamas). The majority relied exclusively on Section 230, in the process absolutely destroying some of the commonly-advanced arguments seeking to eviscerate Section 230. The result is a powerful win for Facebook and for Section 230. It should end any remaining hope for lawsuits against social media providers for materially supporting terrorists.

The majority opinion’s rousing endorsement of Section 230 is diminished a bit by a lengthy and detailed dissent railing against Facebook and Section 230. The majority clearly and persuasively rebutted the dissent’s argument, so it’s possible the dissent will quietly fade into oblivion. However, I’m sure Facebook- and Section 230-haters will find plenty to embrace in the dissent.

Key Points from the Majority Opinion

Section 230 Should Be Read Broadly. “the Circuits are in general agreement that the text of Section 230(c)(1) should be construed broadly in favor of immunity.” Cites to FTC v. LeadClick, Marshall’s Locksmith v. Google, Doe v. Backpage, Jones v. Dirty World, Doe v. MySpace, Almeida v. Amazon, Carafano v. Metrosplash, and Zeran v. AOL.

Publisher Claims. “it is well established that Section 230(c)(1) applies not only to defamation claims, where publication is an explicit element, but also to claims where ‘the duty that the plaintiff alleges the defendant violated derives from the defendant’s status or conduct as a publisher or speaker.'”

In this case, “Plaintiffs seek to hold Facebook liable for ‘giving Hamas a forum with which to communicate and for actively bringing Hamas’ message to interested parties.’ But that alleged conduct by Facebook falls within the heartland of what it means to be the “publisher” of information under Section 230(c)(1). So, too, does Facebook’s alleged failure to delete content from Hamas members’ Facebook pages.”

Algorithmic Content/Ad Presentation Is Irrelevant.

The majority and dissent disagree about the implications of Facebook’s algorithmic activities. A couple of years ago, a few folks claimed that algorithmic activity by social media providers was materially different than the conditions prevailing in 1995 when Congress adopted Section 230, and therefore Section 230 should not be available for these new technological developments. This argument was nonsensical from the outset, and it has fared poorly in courts (see, e.g., the Dyroff ruling), but it does make a sale here with the dissenting judge. Fortunately, the majority absolutely, unequivocally, and persuasively destroys the argument:

We disagree with plaintiffs’ contention that Facebook’s use of algorithms renders it a non‐publisher. First, we find no basis in the ordinary meaning of ‘publisher,’ the other text of Section 230, or decisions interpreting Section 230, for concluding that an interactive computer service is not the ‘publisher’ of third‐party information when it uses tools such as algorithms that are designed to match that information with a consumer’s interests. Indeed, arranging and distributing third‐party information inherently forms “connections” and “matches” among speakers, content, and viewers of content, whether in interactive internet forums or in more traditional media. That is an essential result of publishing. Accepting plaintiffs’ argument would eviscerate Section 230(c)(1); a defendant interactive computer service would be ineligible for Section 230(c)(1) immunity by virtue of simply organizing and displaying content exclusively provided by third parties.

Plaintiffs’ “matchmaking” argument would also deny immunity for the editorial decisions regarding third‐party content that interactive computer services have made since the early days of the Internet. The services have always decided, for example, where on their sites (or other digital property) particular third‐party content should reside and to whom it should be shown. Placing certain third‐party content on a homepage, for example, tends to recommend that content to users more than if it were located elsewhere on a website. Internet services have also long been able to target the third‐party content displayed to users based on, among other things, users’ geolocation, language of choice, and registration information. And, of course, the services must also decide what type and format of third‐party content they will display, whether that be a chat forum for classic car lovers, a platform for blogging, a feed of recent articles from news sources frequently visited by the user, a map or directory of local businesses, or a dating service to find romantic partners. All of these decisions, like the decision to host third‐party content in the first place, result in “connections” or “matches” of information and individuals, which would have not occurred but for the internet services’ particular editorial choices regarding the display of third‐ party content….

plaintiffs’ argument that Facebook’s algorithms uniquely form “connections” or “matchmake” is wrong. That, again, has been a fundamental result of publishing third‐party content on the Internet since its beginning….it would turn Section 230(c)(1) upside down to hold that Congress intended that when publishers of third‐party content become especially adept at performing the functions of publishers, they are no longer immunized from civil liability….

plaintiffs argue, in effect, that Facebook’s use of algorithms is outside the scope of publishing because the algorithms automate Facebook’s editorial decision‐making. That argument, too, fails because ‘so long as a third party willingly provides the essential published content, the interactive service provider receives full immunity regardless of the specific edit[orial] or selection process.’

I’m hoping this case heralds the permanent end of the ridiculous algorithms-defeat-230 argument. The majority rebuts every facet of the argument, emphasizing–as is obvious to everyone but Section 230-haters–that deciding what to publish is an editorial function, whether it’s done by people or machines. If you still think the algorithms-defeat-230 argument is credible enough to make in writing–especially in a court filing–I’ll be eager to see how you get around the passages quoted above. Bonne chance!

Facebook Isn’t the Information Content Provider. What does it mean to “develop” content? That issue tortured the Roommates.com case, but this court simplifies things: “we have recognized that a defendant will not be considered to have developed third‐party content unless the defendant directly and ‘materially’ contributed to what made the content itself ‘unlawful.'” Facebook didn’t do anything like this:

Facebook does not edit (or suggest edits) for the content that its users—including Hamas—publish…

Facebook’s algorithms are content ‘neutral’ in the sense that the D.C. Circuit used that term in Marshall’s Locksmith: The algorithms take the information provided by Facebook users and “match” it to other users—again, materially unaltered—based on objective factors applicable to any content, whether it concerns soccer, Picasso, or plumbers. Merely arranging and displaying others’ content to users of Facebook through such algorithms—even if the content is not actively sought by those users—is not enough to hold Facebook responsible as the ‘develop[er]’ or ‘creat[or]’ of that content…

Plaintiffs also argue that Facebook develops Hamas’s content because Facebook’s algorithms make that content more ‘visible,’ ‘available,’ and ‘usable.’ But making information more available is, again, an essential part of traditional publishing; it does not amount to “developing” that information within the meaning of Section 230…

plaintiffs assert that Facebook’s algorithms suggest third‐party content to users ‘based on what Facebook believes will cause the user to use Facebook as much as possible’ and that Facebook intends to ‘influence’ consumers’ responses to that content. This does not describe anything more than Facebook vigorously fulfilling its role as a publisher. Plaintiffs’ suggestion that publishers must have no role in organizing or distributing third‐party content in order to avoid ‘develop[ing]’ that content is both ungrounded in the text of Section 230 and contrary to its purpose….

plaintiffs also argue that Facebook should not be afforded Section 230 immunity because Facebook has chosen to undertake efforts to eliminate objectionable and dangerous content but has not been effective or consistent in those efforts. However, again, one of the purposes of Section 230 was to ensure that interactive computer services should not incur liability as developers or creators of third‐party content merely because they undertake such efforts—even if they are not completely effective

In a footnote, the court adds: “We do not mean that Section 230 requires algorithms to treat all types of content the same. To the contrary, Section 230 would plainly allow Facebook’s algorithms to, for example, de‐promote or block content it deemed objectionable.” Suck it, Sen. Cruz!

The court also rejects a bizarre argument by the plaintiff: “Nor does Facebook’s acquiring certain information from users render it a developer for the purposes of Section 230. Facebook requires users to provide only basic identifying information: their names, telephone numbers, and email addresses. In so doing, Facebook acts as a “neutral intermediary.” Moreover, plaintiffs concede in the pleadings that Facebook does not publish that information, and so such content plainly has no bearing on plaintiffs’ claims.”

Section 230 Preempts Civil Claims Based on Federal Crimes. The court agrees with Doe v. Backpage on this point. The dissent also grudgingly agreed, but wrote an odd footnote saying it was a close call.

JASTA Didn’t Repeal Section 230. “JASTA merely expanded Section 2333’s cause of action to secondary liability; it provides no obstacle—explicit or implicit—to applying Section 230.”

Section 230 Can Apply to Extraterritorial Conduct. “The regulated conduct—the litigation of civil claims in federal courts—occurs entirely domestically in its application here.”

Section 230 Supports Motion to Dismiss. “the application of Section 230(c)(1) is appropriate at the pleading stage when, as here, the ‘statute’s barrier to suit is evident from the face of’ plaintiffs’ proposed complaint.” Cites to Ricci and Marshall’s Locksmith.

The Dissent

The dissenting judge does not agree with the majority’s Section 230 conclusions. To illustrate why, he uses an analogy involving telephone conversations and says it doesn’t make sense to characterize the conversationalists as “publishers.” Huh? First, the capacious definition of “publish” in common law defamation does, in fact, apply to phone calls. More importantly, as the majority points out, telephone calls aren’t covered by Section 230 because they aren’t on the Internet. So by invoking an offline analogy, I assume the dissenting judge is normatively resisting Section 230’s exceptionalism.

The dissent reviews Section 230’s legislative history at some length. He concludes that Section 230 was only meant to shield kids from indecent material online. He says, “we today extend a provision that was designed to encourage computer service providers to shield minors from obscene material so that it now immunizes those same providers for allegedly connecting terrorists to one another….The text and legislative history of the statute shout to the rafters Congress’s focus on reducing children’s access to adult material.”

With that inaccurately narrow characterization of Congress’ goals, the dissent denigrates Section 230: “Congress grabbed a bazooka to swat the Stratton‐Oakmont fly.” Whoa.

(This is also historically wrong–the Stratton Oakmont ruling wasn’t a fly; it actually was a metaphorical bazooka that threatened to blow up the Internet).

On the “merits” of Facebook’s Section 230 defense, the dissent says: “When a plaintiff brings a claim that is based not on the content of the information shown but rather on the connections Facebook’s algorithms make between individuals, the CDA does not and should not bar relief.” Huh? What exactly are those “connections” doing, other than publishing content between users? Though the dissent doesn’t use the terms, implicitly the dissent buys into the platform vs. publisher distinction. To the dissent, when Facebook is enabling community engagement, it’s apparently functioning as a platform, not a publisher. “Facebook is telling users—perhaps implicitly, but clearly—that they would like these people, groups, or events….Facebook may be immune under the CDA from plaintiffs’ challenge to its allowance of Hamas accounts, since Facebook acts solely as the publisher of the Hamas users’ content. That does not mean, though, that it is also immune when it conducts statistical analyses of that information and delivers a message based on those analyses…the creation of social networks goes far beyond the traditional editorial functions that the CDA immunizes.”

The platform v. publisher distinction is irrelevant to Section 230. As the majority clearly and emphatically points out, every function denigrated by the dissent is an intrinsic part of Facebook’s publication process. It’s like how a newspaper puts some articles on the front page with huge headlines and buries other content in the last section under a small headline. The newspaper is implicitly saying “you’ll like some stories more than the others” based on its editorial decision-making. So is Facebook.

The dissent ends with about a dozen pages recapping random gripes about Facebook’s sins. Your honor, do you Facebook much?

I did agree with the dissent on one key point. He says: “Section 230(c)(1) limits liability based on the function the defendant performs, not its identity. Accordingly, our precedent does not grant publishers CDA immunity for the full range of activities in which they might engage.” I agree 100%, but that’s rebutting a strawman argument, no? The judge seems to be saying that Section 230 doesn’t create a lawless no-man’s land, which no one ever actually claims. The real point, as made by the majority opinion in this case and many other rulings, is that Section 230 applies to the publication function regardless of how plaintiffs try to obfuscate the facts by referring to publication functions by some other name.

Some Implications

Will There Be En Banc Review? A deep panel split like this can lay the foundation for en banc review. But in light of the emphatic Herrick v. Grindr pro-Section 230 ruling earlier this year, I wonder if any other Second Circuit judges would rally behind Judge Katzmann’s dissent? I think that’s unlikely.

No Reference to Causation. The opinion relies solely on Section 230. It doesn’t reference the causation issue at all. That makes this case an outlier. A couple other rulings in the “material support for terrorists” line of cases have relied on Section 230, but most have disposed of the claims on the statutory prima facie elements or for lack of proximate causation. In fact, the other two appellate rulings in this line of cases both relied exclusively on the proximate causation doctrine to dismiss the cases. So this ruling supplements those rulings by highlighting that Section 230 can be a separate and independent ground for dismissal.

“Material Support for Terrorists” Lawsuits Are Doomed. I now count 11 different courts that have rejected claims that social media providers materially support terrorists, and that now includes three appellate courts (also Fields v. Twitter in the 9th Circuit and Crosby v. Twitter in the 6th Circuit). I have no idea how many more cases are pending in the system, and it wouldn’t surprise me if plaintiffs’ lawyers keep filing more suits against social media providers despite this impressive record of futility. Nevertheless, all of these cases are doomed, and the growing wall of precedent seems insurmountable now. Will a 12th judge really decide to reject the conclusions of the prior 11 judges? Will a fourth appellate court really decide to go against the conclusions of the prior three? Anything is possible, and as the dissenting judge illustrated, these cases can provoke passionate responses. Still, I think the precedent is so large, and so uniform, that the odds of future plaintiff wins have become vanishingly small.

Section 230 Keeps Rolling in Courts. It’s been a bad year for Section 230 cases and online marketplaces, including HomeAway v. Santa Monica, Oberdorf v. Amazon, and State Farm v. Amazon. In all other respects, 2019 has been very kind to Section 230 defendants. A string of appellate court rulings in 2019 (including Herrick v. Grindr, Marshall’s Locksmith v. Google, Daniel v. Armslist, and this case) have used broad and sweeping pro-defense language, and many lower court opinions have been similarly favorable. Of course, all of that is likely to be squashed with Congress’ next amendment to Section 230.

Will This Ruling Contribute to Pressure to Amend Section 230? Congress remains concerned about terrorist content online, even though the social media providers have taken major–and anti-free-speech–steps to reduce that content, including the deployment of GIFCT. So there is already plenty of evidence that the social media providers are working to curb terrorist content despite the providers’ lack of legal liability. I think we’d all agree that the services have more work to do–especially to unradicalize the domestic terrorists who are murdering children after being emboldened by the racist/anti-immigrant/anti-Muslim/anti-Semitic rhetoric emanating from our own government.

Still, Congress could be concerned about ensuring financial recompense to terrorist victims. So the issue of terrorist content online creates another friction point with Section 230. However, let me point out the obvious: amending Section 230 won’t change the result in these cases because, as I indicated, the cases usually fail on statutory or proximate causation grounds, not Section 230. Congress would need to fix all of those issues–if it can–while avoiding the obvious First Amendment problems before it could find a way to turn social media providers into financial insurers for terrorist victims. In other words, though Section 230 may look like it curbs lawsuits by terrorist victims, it really isn’t the barrier.

Case citation: Force v. Facebook, Inc., 2019 WL 3432818 (2d Cir. July 31, 2019)

Prior Blog Posts:

* Eleventh Lawsuit Against Social Media Providers for “Materially Supporting Terrorists” Fails–Palmucci v. Twitter
* Another Appellate Court Rejects “Material Support for Terrorist” Claims Against Social Media Platforms–Crosby v. Twitter
* Tenth Lawsuit Against Social Media Providers for “Materially Supporting Terrorists” Fails–Sinclair v. Twitter
* Ninth Lawsuit Against Social Media Providers for “Materially Supporting Terrorists” Fails–Clayborn v. Twitter
* Eighth Lawsuit Against Social Media Providers for “Materially Supporting Terrorists” Fails–Copeland v. Twitter
* Seventh Different Lawsuit Against Social Media Providers for “Material Support to Terrorists” Fails–Taamneh v. Twitter
* Another Social Media “Material Support to Terrorists” Lawsuit Fails–Cain v. Twitter
* “Material Support for Terrorists” Lawsuit Against YouTube Fails Again–Gonzalez v. Google
* Fifth Court Rejects ‘Material Support for Terrorism’ Claims Against Social Media Sites–Crosby v. Twitter
* Twitter Didn’t Cause ISIS-Inspired Terrorism–Fields v. Twitter
* Section 230 Again Preempts Suit Against Facebook for Supporting Terrorists–Force v. Facebook
* Fourth Judge Says Social Media Sites Aren’t Liable for Supporting Terrorists–Pennie v. Twitter
* Another Court Rejects ‘Material Support To Terrorists’ Claims Against Social Media Sites–Gonzalez v. Google
* Facebook Defeats Lawsuit Over Material Support for Terrorists–Cohen v. Facebook
* Twitter Defeats ISIS “Material Support” Lawsuit Again–Fields v. Twitter
* Section 230 Immunizes Twitter From Liability For ISIS’s Terrorist Activities–Fields v. Twitter