Social Networking Site Isn’t Liable for User’s Overdose of Drugs He Bought Via the Site–Dyroff v. Ultimate Software

[It’s impossible to blog about Section 230 without reminding you that it remains highly imperiled.]

This opinion is a contender for the most interesting Section 230 ruling of 2017. It deals with the troubling situation of user-to-user online drug sales; it discusses the thorny language of what it means to “develop in part” content; it decisively rejects the latest anti-Section 230 theory that data mining and targeted content recommendations somehow foreclose the immunity; it emphatically rejects a “failure to warn” workaround to Section 230; and the judge embraces Internet exceptionalism. My apologies for the length of this blog post, but there is a lot to unpack in this opinion. If you’re a Section 230 enthusiast, this case deserves your careful attention.


This case relates to a now-defunct service called “Experience Project,” run by Ultimate Software. The service allowed users to share their first-hand experiences anonymously “with the least amount of inhibition possible. The greater the anonymity, the more ‘honest’ the post….” The experiences were topically threaded, so a user could create an “I love heroin” thread and other users could post messages in the thread. As of May 2016, it had over 67 million “experiences shared.”

The victim, Wesley Greer, became addicted to opioids after suffering a knee injury. (These facts are recited by the court based on the complaint). He went to rehab 5 times. In August 2015, he searched Google to find heroin, and the search results led him to the Experience Project. He paid money to buy the right to post on the site, then posted to a group “where can i score heroin in jacksonville, fl.”

A dealer, under the username “Potheadjuice,” allegedly repeatedly offered to sell heroin in Experience Project groups such as “I love Heroin” and “heroin in Orlando.” Law enforcement knew about the dealer’s Experience Project activities. They arrested him twice in stings via the site. The dealer replied to Greer’s post, and the Experience Project emailed Greer a notification of the reply. Through communications on the site, Greer got the dealer’s phone number. They met in person, and Greer bought fentanyl-laced heroin. Greer died the next day due to fentanyl toxicity.

The plaintiff is Greer’s mom. The complaint alleges that the Experience Project:

(1) allowed its Experience Project users to anonymously traffic in illegal deadly narcotics;

(2) allowed users to create groups dedicated to the sale and use of such illegal narcotics;

(3) steered users to “additional” groups dedicated to the sale of such narcotics (through the use of its advanced data-mining algorithms to manipulate and funnel vulnerable individual users to harmful drug trafficking groups on Experience Project’s website);

(4) sent users emails and other push notifications of new posts in those groups related to the sale of deadly narcotics;

(5) allowed Experience Project users to remain active account holders despite (a) the users’ open drug trafficking on Experience Project’s website, (b) Ultimate Software’s knowledge of this (including knowledge acquired through its proprietary datamining technology, which allowed it to analyze and understand its users’ drug-trafficking posts) and (c) multiple law-enforcement actions against users related to their drug dealing on the Experience Project website;

(6) exhibited general and explicit antipathy towards law enforcement’s efforts to curb illegal activity on Experience Project’s website; and

(7) received numerous information requests, subpoenas, and warrants from law enforcement and should have known about drug trafficking on its site by its users, including — by the time of her son’s death — [the dealer’s] sales of fentanyl-laced heroin.

The Experience Project moved to dismiss the complaint on Section 230 grounds, except for the failure-to-warn claim, which it moved to dismiss on its elements. The court grants the entire motion to dismiss.

Section 230 Immunity

The court runs through the standard three-element test for Section 230:

Provider/User of an Interactive Computer Service. This was undisputed.

Publisher/Speaker Claims. The court’s discussion here is a little confusing, but the court says “courts have rejected plaintiffs’ attempts to plead around immunity by basing liability on a website’s tools” (cites to Gonzalez and Fields) before summarily concluding “Ms. Dyroff’s claims at their core seek liability for publishing third-party content.”

Based on Third Party Content. The plaintiff alleged the Experience Project developed the dealer’s content because “(1) its tools, design, and functionality abetted the content, at least in part, by recommending heroin-related discussions and steering Mr. Greer to [the dealer’s] posts; and (2) Ultimate Software is not merely a passive conduit for its users’ posts because it knew that Experience Project was an online market for drug dealers and users, and it shielded the bad actors through its anonymity policies and antipathy to law enforcement.”

The plaintiff continued that development occurs when the site “materially manipulates that content, including by passively directing its creation or by improperly using the content, after the fact.” [Huh? What does it mean to “passively direct” content creation, and how is that a “material manipulation” of the content? Doesn’t every UGC website do this? What does it mean to “improperly” use content? I’m not sure any of these allegations are grammatical, but I am sure none of it makes any sense.]

The plaintiff further explained that the Experience Project:

used “data mining” techniques and “machine learning” algorithms and tools to collect, analyze, and “learn[ ] the meaning and intent behind posts” in order to “recommend” and “steer” vulnerable users, like her son, to forums frequented by drug users and dealers. By identifying interested users and using its “recommendation functionality” to steer them to drug-related “groups” or “online communities,” Ultimate Software kept the users “engaged on the site” for Ultimate Software’s financial gain (through online ad revenues, gathering more valuable user data, and other means). This system — combined with Experience Project’s anonymous registration and its email-notification functionality that alerted users when groups received a new post or reply — “created an environment where vulnerable addicts were subjected to a feedback loop of continual entreaties to connect with drug dealers.”

None of these arguments work. The court responds flatly: “Ultimate Software is not an ‘information content provider’ merely because its content-neutral tools (such as its algorithms and push notifications) steer users to unlawful content.”

The court explains: “making recommendations to website users and alerting them to posts are ordinary, neutral functions of social-network websites” (citing, Gonzalez, Cohen and Fields). Furthermore, “it is the users’ voluntary inputs that create the content on Experience Project,” not the proprietary algorithms. So “even if a tool facilitates the expression of [harmful or unlawful] information, it is considered neutral so long as users ultimately determine what content to post, such that the tool merely provides a framework that could be utilized for proper or improper purposes” (cites to Goddard, Carafano, Klayman v. Zuckerberg). This “result holds even when a website collects information about users and classifies user characteristics.”
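To make the court’s abstraction concrete, here is a toy sketch of the kind of “content-neutral” recommendation tool it describes: the tool only matches a user’s own words against existing group names and never authors or alters any content itself. The group names and query are hypothetical illustrations, not anything drawn from the actual Experience Project.

```python
# Hypothetical group names for illustration only.
GROUPS = ["I love hiking", "Quitting smoking", "Jacksonville foodies"]

def recommend(post, groups):
    """Rank groups by how many words they share with the user's own post.

    The site contributes no content of its own: the user's voluntary
    input entirely determines which groups surface, which is the sense
    in which courts call such a tool "neutral."
    """
    post_words = set(post.lower().split())
    scored = [(len(post_words & set(g.lower().split())), g) for g in groups]
    return [g for score, g in sorted(scored, reverse=True) if score > 0]

recommend("anyone love hiking near jacksonville?", GROUPS)  # → ["I love hiking"]
```

The point of the sketch is that the same framework steers users toward “I love hiking” or toward “I love Heroin” with equal indifference; the steering follows entirely from what the user typed.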

The court summarizes: “the Experience Project website’s alleged functionalities — including its user anonymity, algorithmic recommendations of related groups, and the ‘push’ email notification of posts and responses — are content-neutral tools.” Therefore, Section 230 protects the Experience Project from all of the plaintiff’s claims except the failure-to-warn claim.

Section 230 also applies despite allegations that: the Experience Project knew/should have known that users were selling drugs onsite; it shielded drug dealers from law enforcement (an allegation partially belied by the dealer’s repeated arrests due to onsite stings); and it had an onsite statement about the interaction between anonymity and law enforcement. The court says that the Experience Project’s “policy about anonymity may have allowed illegal conduct, and the neutral tools facilitated user communications, but these website functionalities do not ‘create’ or ‘develop’ information, even in part.”

Failure to Warn

Two relatively recent Ninth Circuit rulings—Doe No. 14 v. Internet Brands (the ModelMayhem case) and Beckman v. Match.com—held that Section 230 didn’t immunize failure-to-warn claims. Those rulings were troublesome on multiple fronts. First, the “risk” that allegedly triggered an obligation to warn was third party content or actions—exactly what Section 230 should immunize. Second, it seemed unlikely that online services have any positive duty to warn their users of potential harms, so the failure-to-warn workaround meant plaintiffs would still lose, just after higher litigation costs for everyone. Third, it’s unclear how online services would satisfy any obligation to warn. Either they would use broad, general, and unenlightening warnings, or they may have to issue warnings every time they hear from law enforcement or a private citizen that a member has engaged in malfeasance online or off, which for a site like Facebook would result in many, many warnings per hour (and lots of potential defamation claims by the warned-about users).

Fortunately (?), between this ruling and the denouements of the ModelMayhem and Beckman cases, it’s becoming clear that failure-to-warn claims against UGC sites in fact aren’t tenable, so the workaround simply imposes extra litigation costs for no extra benefit.

Lack of Special Relationship. The court summarizes the legal standard: The Experience Project “can be responsible for its nonfeasance (its failure to act) if (1) it had a special relationship with a third-party actor and thus had a duty to control that actor, or (2) it had a special relationship with Mr. Greer and thus owed him a duty to protect him. The plaintiff argues that like any business, Ultimate Software has a ‘special relationship’ with its customers that creates a duty to warn them of known risks.”

The court considers the ModelMayhem and Beckman cases on remand. I don’t believe I blogged either remand ruling, and my apologies for missing them. In both cases, the district courts on remand held that the online services didn’t have a special relationship with their customers and therefore no duty to warn. The court says the cases “support the conclusion that a website has no ‘special relationship’ with its users.”

The plaintiff argued that online services are the 21st century equivalent of offline businesses with physical premises, which owe a duty to invitees. The court rejects the offline analogy:

If the court followed this approach, it would render all social-network websites potentially liable whenever they connect their members by algorithm, merely because the member is a member. This makes no sense practically. Imposing a duty at best would result in a weak and ineffective general warning to all users. It also “likely [would] have a ‘chilling effect’ on the [I]nternet by opening the floodgates of litigation.” Also, the court is not convinced that a bricks-and-mortar business (such as a bar where people meet more obviously) is a good analogue to a social-network website that fosters connections online. For one, allocating risk is (in part) about foreseeability of harm and the burdens of allocating risk to the defendant or the plaintiff. Risk can be more apparent in the real world than in the virtual social-network world. That seems relevant here, when the claim is that a social-network website ought to perceive risks — through its automatic algorithms and other inputs —about a drug dealer on its site.

Even if Experience Project knew about the dealer’s onsite activities, the court says: “that knowledge does not create a special relationship absent dependency or detrimental reliance by its users, including Mr. Greer,” and the plaintiff didn’t allege that. The plaintiff will likely allege “dependency” and “detrimental reliance” in an amended complaint.

Malfeasance. The prior discussion dealt with the Experience Project’s “nonfeasance.” With respect to the Experience Project’s alleged malfeasance:

use of the neutral tools and functionalities on its website did not create a risk of harm that imposes an ordinary duty of care. A contrary holding would impose liability on a social-network website for using the ordinary tools of recommendations and alerts. The result does not change merely because Experience Project permitted anonymous users.

Assumption of Risk. The assumption of risk defense is unnecessary after the court’s holding that the Experience Project has no duty to warn. The court nevertheless tries to discourage the plaintiff further:

If it were to reach the issue, it would likely hold that the doctrine operates as a complete bar to his claim because Mr. Greer — who initiated the contact with [the dealer] by his posts on Experience Project and then bought drugs from him — assumed the obviously dangerous risk of buying drugs from an anonymous Internet drug dealer.

The court’s dismissal is without prejudice, so the plaintiff will likely try again. However, I don’t see how the plaintiff can allege better facts to overcome Section 230. And even if the plaintiff alleges better facts on the failure-to-warn claim, the court’s skepticism about the assumption of risk suggests a win isn’t likely there either. Maybe the plaintiff will find more receptivity on appeal, but this well-constructed opinion will pose a major hurdle.


What Does It Mean to “Develop in Part” Content?

Section 230 defines an “information content provider” as anyone who “creates” or “develops” content “in whole or in part.” The develop-in-part language is vexing. What does it mean to “develop-in-part” content, especially as something different from “create-in-part”? We don’t know. Read literally, the statutory language could mean that any defendant that “developed” 0.1% (or less!) of allegedly tortious/criminal content should not get Section 230 protection. Because we aren’t sure what it means to “develop” content, this 0.1% standard potentially means that lots of defendants could unexpectedly find themselves without Section 230 immunity. Thus, the “develop-in-part” provision could be the bomb that blows up Section 230.

Since the statute’s passage, courts have had the ability to read “develop-in-part” quite broadly. However, until the case, we saw very little attention paid to this provision. The case raised the visibility of this drafting trap, but it hasn’t changed the results in most cases. That may be in part because the opinion indicates that the defendant loses Section 230 protection only if it “develops-in-part” the alleged illegality of the content. If it just developed-in-part other aspects of the content, but not the illegal part, arguably the opinion steers the case towards a defense ruling.

Sorta consistent with this line of reasoning, this case seems to treat content “neutrality” as a complete defense to allegations that the defendant “developed-in-part” the content. (More on “neutrality” below).

In the statutory debates over SESTA and the Wagner bill, some folks—including people I consider to be Friends of Section 230 (FOS230)—have been encouraging Congress and courts to pay more attention to the develop-in-part provision. Because of the powerful implications of “develop-in-part,” I think this is a dangerous idea. Inviting courts to interpret “develop-in-part” more broadly has a real risk of backfiring. Instead, I think rulings like this get it right.

“Data Mining” Doesn’t Defeat Section 230.

In the last year or so, I’ve seen a new anti-Section 230 argument swirling (e.g., 1-800-LAW-FIRM’s suits over social media sites allegedly providing material support to terrorists and works by Profs. Julie Cohen and Olivier Sylvain). The argument basically goes like this: Section 230 was enacted at a time when websites were dumb. As websites have gotten smarter, including collecting and analyzing user data and deploying personalization algorithms, Section 230 no longer protects them because the services are doing something Congress never expected. If this argument doesn’t make sense to you, it’s perhaps because I did a bad job retelling it; but more likely, it’s because the argument doesn’t make any sense at all.

The 1-800-LAW-FIRM complaints have teed up the data mining issue, but the argument hasn’t made a difference in those cases. Indeed, those opinions didn’t really engage with it. In contrast, this ruling squarely and unambiguously rejects the argument—in such a persuasive way that I expect other courts will follow it. This attempted Section 230 workaround has heated up quickly, but it will quickly and quietly fade away as soon as the next anti-Section 230 meme-fad emerges.

Success of “Neutrality” Defense.

The term “neutral” or a variant appears in the opinion 17 times. However, like the opinion that launched its usage in Section 230 jurisprudence, the court never defines “neutrality.” That omission is unsurprising because the “neutrality” construct is nonsensical. As I’ve explained repeatedly, online services and their tools are never neutral and are always biased. Even an attempt to be “balanced” is a form of bias. That means that any legal test predicated on a tool’s “neutrality” is stacked against defendants.

Not surprisingly, when courts have explored a defendant’s neutrality, it creates the preconditions for bad results. For example, in JS v. Village Voice, the court transmogrified the test to say “Backpage did more than simply maintain neutral policies prohibiting or limiting certain content.” Huh? “Neutral policies” are an oxymoron, so every online service would fail that legal standard.

In contrast, this case is a refreshingly defendant-favorable application of the “neutral tools” principle. Were the Experience Project’s tools “neutral”? Of course not. For example, the site was built on a normative bias in favor of anonymity, and that normative bias laid the foundation for the mischief that contributed to Greer’s death. Still, the court held that its tools were neutral.

Perhaps “neutrality” in this context really means something more like “not biased toward illegal content or actions.” Reframing the “neutrality” doctrine to say that would actually improve the doctrine’s clarity quite a bit.

The “Failure to Warn” Exception to Section 230 Doesn’t Work.

The ModelMayhem case sparked a failure-to-warn boomlet of cases. As this ruling lays out clearly, online services don’t have an inherent “special relationship” with their customers, so they won’t have a duty to warn, and the Section 230 workaround doesn’t work. If the appellate courts affirm these rulings, as I expect they will, the failure-to-warn fad should wane.

Embrace of Internet Exceptionalism.

The court says “Risk can be more apparent in the real world than in the virtual social-network world.” There are many circumstances where this is true. A bartender can easily do a visual, aural and olfactory check of a customer’s sobriety. A retailer’s checkout clerk can easily do a visual inspection of a customer’s age and demand a form of ID in borderline cases.

But are there circumstances where the opposite is true? A drug dealer could peddle drugs in obscure corners of Walmart’s premises where employees and security guards aren’t watching, whereas an online service could set up a dumb word filter for the word “heroin” (or all Schedule I drugs) that would automatically flag any heroin discussions taking place in the virtual premises.

Perhaps the court means that a dumb word filter lacks the context that might be apparent in physical space. The filter could flag the instances of the word “heroin,” but the exchange of messages won’t provide all of the context about whether a drug deal is about to go down (especially if the users exchange phone numbers and take the conversation beyond the online service’s premises). And, of course, dumb word filters are easily circumvented with codewords, new slang and deliberate misspellings, while it’s harder to cloak activity in the physical world.
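The kind of dumb word filter described above takes only a few lines to sketch. The watch list here is illustrative, not anything the Experience Project actually used—and, as the second example shows, maintaining a term list that keeps up with slang and misspellings is exactly the hard part.

```python
import re

# Illustrative watch list; a real deployment would need to track Schedule I
# drug names plus constantly evolving slang and deliberate misspellings.
WATCHED_TERMS = {"heroin", "fentanyl"}

def flag_post(text):
    """Return the watched terms that appear as whole words in a post."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return WATCHED_TERMS & words

flag_post("where can i score heroin in jacksonville, fl")  # → {"heroin"}
flag_post("where can i score h3roin")  # → set(): a trivial misspelling evades it
```

The filter flags every literal mention but, consistent with the point above, supplies no context about whether a deal is actually going down and is defeated by the first codeword.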

So, I think the court is right to embrace Internet exceptionalism by recognizing a lack of duty in virtual premises despite the offline duty to invitees, but this aspect of the opinion could have benefited from more exposition.

What Should the Experience Project Have Done Differently?

This case is extremely similar to an (uncited) case, Witkoff v. Topix, which also involved the online matching of a buyer and seller of illicit drugs that led to a fatal overdose. Section 230 protected Topix as well, so the two opinions reach the same legal result.

Yet, the fact that these tragedies keep occurring makes me wonder what steps, if any, sites like Topix and the Experience Project should take to reduce the number of victims. In particular, any site encouraging site-wide anonymity should know that its “more honest” posts will also come with illegal behavior. Section 230 may allow these services to avoid liability, but it doesn’t eliminate their responsibilities to their community and to society generally. It’s probably time for our community to have a public conversation about what steps online services like Topix and the Experience Project should take regarding the online sale of illegal drugs.

Case citation: Dyroff v. The Ultimate Software Group, Inc., 2017 WL 5665670 (N.D. Cal. Nov. 26, 2017). The complaint.