Court Dismisses School Districts’ Lawsuits Over Social Media “Addiction”–In re Social Media Cases

[Warning: this is a 5,600-word blog post].

There are two critically important cases over “social media addiction” pending in California state court and as an MDL in the federal Northern District of California. It is an all-out brawl in federal court, with no-expense-spared battles over each and every picayune litigation issue. I can’t see what’s happening in state court, but I have no reason to believe that it’s any less contentious.

The cases reached important milestones last Fall, when both the federal and state court judges denied the social media defendants’ Section 230 motions to dismiss. I vigorously disagree with both rulings, and I wonder if they will survive the inevitable appeals.

Despite the importance of those Fall 2023 rulings, I never blogged either one. I had completed a first draft on the state court ruling. However, before I finalized it, the federal court ruling came out. I then pivoted to cover both in the same blog post, but that was too big a project for my time window and it stalled out. As part of catching up today, later in this post I will share my draft on the state court 230 dismissal for the first time. You might enjoy seeing my angst and how the court’s opinion flummoxed me repeatedly, but the post is otherwise out-of-date. My coverage of the federal court non-dismissal will forevermore remain in blog purgatory.

Today’s post focuses on the social media defendants’ efforts to dismiss the parallel lawsuits by the school districts. Dozens or hundreds of school districts have joined the fray, arguing that social media addiction has harmed their campuses, primarily by draining the schools of counseling and other resources. The state court judge skillfully dissects the arguments using a wide range of legal theories, from the economic loss doctrine to Section 230.

Ultimately, I understand why the school districts joined the lawsuit–on the can’t-hurt, might-help theory that maybe they could get a little money for no additional work on their part. But by joining the lawsuit, the school districts had to admit that they were failing to support their students. Seeing a district try to blame social media for its students’ problems, instead of the many overlapping social problems, large and small, that plague our youth, would have angered me if I were a taxpayer in that district. We’ll see if this ruling stands on the inevitable appeal–and how parents react to the failings the districts admitted in court. It’s possible that school districts trying to get something for nothing will find out there was a hidden cost after all.

This opinion reaches the obvious (to me) conclusion that the school districts have no basis to sue here. Their claims were always overreaching and never meritorious in my mind. However, this is not the final word on the matter. The school districts have also joined the federal lawsuit, where the court has not yet ruled on their claims, and of course the opinions in both cases will eventually be appealed to the highest courts. Still, because this result is so intuitive to me, I expect the outcome will stand.

As usual, having gotten the big picture points out of the way, I now dive into the details. If your head is already spinning, you might stop here.

* * *

“The School Districts allege ‘that Defendants have created and promoted their addictive social media platforms to the School Districts’ students, resulting in substantial interference with school district operations and imposing a large burden on school districts, who are often the number one provider of mental health services to youth.’”

“The School Districts list the following “costs and resource expenditures” incurred by the School Districts “to address students’ problematic social media use”: (1) costs associated with addressing or preventing students’ use of Defendants’ platforms in schools; (2) costs associated with having to provide “disciplinary services,” parent notification, “revised teaching plans,” mental health services to address students’ “behavioral issues” and harmful addiction to Defendants’ platforms; (3) property damage caused by students resulting from students’ use of Defendants’ platforms; (4) costs associated with investigating and responding to threats made against schools and students on Defendants’ platforms; and (5) costs associated with updating school policies and handbooks.”

Consider the myriad other reasons why these costs might increase. Some obvious candidates: a pandemic, school shutdowns, the loss of a “third space” for students, a political insurrection, and an ever-spiraling heat index.

The school districts’ claims include negligence and public nuisance. Both fail.

Negligence

The school districts explain they “had to make significant economic expenditures to deal with students’ mental health, to overcome attention deficits that retard the educational mission, to provide additional supervision of students and to purchase sleeves to bar transmissions to student cell phones.” A Florida school district also alleged that a student damaged a sink while acting out.

Other than the sink, the court says these alleged damages trigger the “economic loss” rule, which precludes negligence claims for “damages not caused by personal injury or injury to property.” (This traces back to the Cardozo opinion in Ultramares that you probably read in 1L Torts). The court concludes that all of the school districts’ allegations are either covered by the economic loss rule or Section 230.

The court explains the policy stakes:

the School Districts allege that student conduct attributable to the actions of the Defendants adversely affected the school learning environment and student mental health. But the School Districts do not offer any persuasive test for determining the boundaries for where and when liability for such alleged harm would rationally end. As to temporal indeterminacy, the effects of addiction on students could go on for years even if Defendants were to end the practices challenged by the School Districts. As to the problem of determining who is entitled to relief, there is no reason why private and parochial schools could not make the same claims asserted by the School Districts that have brought suit. Colleges also could state claims based on the continuing effects of the addiction and mental health problems (such as body dysmorphia, self-harm and depression) alleged to result from minors’ exposure to Defendants’ platforms. Employers could allege a claim for the value of work missed by employees suffering from mental health problems caused by their addiction to social media platforms when they were minors. Minors’ health problems caused by social media also could cause increased expenses for public health services. Thus, the theory of recovery and duties alleged in this case present problems of uncabined liability that fall squarely within the rationale for the economic loss rule.

In Rhode Island, the economic loss rule doesn’t apply to consumer transactions. That exception doesn’t apply here because “the School Districts are not individual consumers of Defendants’ platforms who need special protections but are not protected by contract.”

Florida has a possible exception to the economic loss rule for parties in a special relationship. That exception doesn’t apply here because “the allegations in the Complaints can be read to state that the School Districts have a special relationship with their students, but those allegations concern the School Districts’ educational responsibilities to their students, not responsibilities of Defendants to the School Districts.” Furthermore:

Many persons and entities have an interest in, and are adversely affected economically by, the alarming rise in youth mental illness alleged by the School Districts. The persons and entities within the “zone of risk” for economic harm that the School Districts would have this court recognize include youth organizations and their leaders; educational institutions of all kinds (not just public elementary and secondary schools); medical providers and medical facilities that may be required to provide care for mentally ill minors affected by social media without full compensation for the medical services; employers; and siblings or other family members whose social circumstances are negatively affected by a minor relative’s social media addiction and resulting mental illness….

The School Districts cannot articulate a defined class of persons or entities who are uniquely recognized as being within the zone of risk of Defendants’ conduct toward the minors who use their social media platforms.

To get around the economic loss rule, the school districts claimed that they suffered property damage because the losses took place on school property. The court isn’t fooled by this. “The School Districts’ argument improperly confuses breach (i.e., interference with property rights) with the resulting damage (i.e., physical damage to property).”

That leaves the property damage from the broken sink in Florida, allegedly the result of a TikTok challenge video.

The court questions several elements of negligence.

  • Foreseeability. “The School Districts fail to point to any factual allegations from which this court could conclude that the destruction of, say, an elementary school bathroom sink in Florida was the foreseeable result of the mental and emotional harms negligently inflicted on minor users of Defendants’ platforms.”
  • Connection. “there is only a tenuous connection between, on the one hand, Defendants’ design, operation, and promotion of the platforms and, on the other, a minor user’s decision to destroy or deface school property. Between Defendants’ actions and the School Districts’ alleged property damage lie the alleged intentional actions taken by third parties to destroy property. There are no allegations suggesting that every minor who used the platforms was mentally or emotionally harmed in such a way that he or she was preconditioned to destroy or deface school property. To the contrary, the allegations state that the mental and emotional harm suffered by minor users led to a wide variety of different reactions and behaviors on the part of those minor users…By allegedly causing minor users to suffer mental or emotional harms, Defendants may have made those minor users more likely to “act out”; but this court cannot conclude that the way in which the minor students would “act out” was somehow connected to the destruction of school property.” The court also notes that the plaintiffs’ arguments are TikTok-specific, so they may not extend to the other social media defendants.
  • Policy considerations. “Under the School Districts’ theory of liability, any company that causes mental or emotional harm through interactions with a customer would be liable to any individuals or entities that are then harmed by that customer if a trier of fact might conclude that the mental or emotional harm played some role in causing the customer to “act out” and harm those individuals or entities….It is hard to imagine how any business or institution could function-or reasonably insure itself against potential losses-if its liability extends to all those who could reasonably be expected to interact with the individuals that are caused emotional harm by that business or institution. A restauranteur who negligently sold a diner spoiled food would be liable to the person who was later struck by the diner’s car, given that the diner’s poor driving may have resulted from suffering under the effects of food poisoning. An employer who wrongfully terminated its employee would be liable to any third party who was later assaulted by the employee when that employee was “acting out” as a result of his emotional distress caused by the wrongful termination. A parent who negligently raised her child so as to cause the child mental or emotional harm, would potentially be liable to any persons that are likely to come into contact with that child.”

The court distinguishes the JUUL cases because the vaping products left hazardous waste on school property: “by contrast, there is no allegation that the platforms themselves caused the property damage at the School Districts’ schools; rather, the operation of the platforms and the destruction of school property are separated by a far more complicated and attenuated chain of events, including the mental and emotional health of third parties and their harmful and willful acts.”

With respect to Section 230, the court starts:

Defendants’ alleged conduct has been enabled by a federal statute that withdraws the availability of common law remedies for a favored class of publishers of content. This federal statutory policy cannot be an excuse for twisting general common law principles as an end-run to try to fix, so to speak, what Congress has decided to accept. To expand the common law in order to provide a remedy for those who are indirectly affected by the negative consequences of social media for youth would create a broad web of indeterminate liability that the common law has heretofore refused to impose.

Thus, any claims based on TikTok challenge videos are preempted by Section 230: “the relevant negligence claims would be based on the allegation that certain minor users watched the challenge videos and were then encouraged by the challenge videos’ content to engage in “copycat” actions destroying or defacing school property. The negligence claims alleging harm based on the existence of this third-party content on Defendants’ platforms thus seek to hold Defendants liable as publishers of the third-party content.”

The school districts tried to get around Section 230 by arguing that TikTok “actively promoted” the videos. Seriously? The court throws a whole stack of precedent back at this argument, including Wozniak v. YouTube, Dyroff, Force v. Facebook, and In re Facebook. However, the judge gratuitously declares that she would trash Section 230 if she weren’t precedent-bound, because “there have been very thoughtful opinions penned by well-respected judges that criticize the conclusion that an internet service provider is treated as a “publisher” of third-party content when it affirmatively recommends third party content to a social media user.” (I would vigorously dispute just how “thoughtful” those opinions were).

The Lemmon v. Snap workaround also fails because “the Speed Filter itself encouraged users to engage in dangerous driving; here, by contrast, it is the specific third-party content presented in the challenge videos and the recommendation of that content that allegedly encouraged minor users to destroy or deface school property.”

Nuisance

As I’ve mentioned before, the interplay of “nuisance” with Internet law is an interesting topic that was identified in the 1990s, often in conjunction with spam or other alleged trespasses, but has remained mostly theoretical because it’s rarely litigated and even more rarely analyzed in a court opinion. Here, we get a direct hit on the issue, indicating the difficulties of extending nuisance doctrines online.

The court summarizes why the nuisance claims fail:

The School Districts’ reliance on nuisance fails because the right not to be injured by the Defendants’ social media platforms is a right personal to the minors who used Defendants’ platforms, and individual injuries to health have not been recognized by any of the four States in question as a basis for nuisance liability, even when the individual harms are considered collectively….

the School Districts seek to extend tort remedies to reach, not the harm caused by the social media sites themselves, but the harm caused by the minor users of the social media sites…

The School District cannot elide the principles of foreseeability, duty, zone of risk, and proximate cause that govern the outcome of a common law claim of negligence by substituting an assertion that the outcome of Defendants’ conduct has been a nuisance.

The court also raises standing considerations, as if the school districts are bringing a class action lawsuit on behalf of their students.

In a footnote, the court adds “To the extent the Complaints might be read to allege an interference with the right to a public education, that right is personal to students and is not a right personal to the School Districts.”

Personally, I have a hard time believing the plaintiffs’ lawyers ever genuinely expected to win on nuisance, but I assume they pursued it anyway as part of the lawfare.

Case citation: In re Social Media Cases, JCCP 5255 (Cal. Superior Ct. June 7, 2024)

* * *

The October 2023 State Court Opinion Denying the Social Media Defendants’ Motion to Dismiss the Minors’ Claims

[I’m now going to share my “raw” 3,000-word draft of my blog post on this court’s October 2023 refusal to dismiss the teenagers’ cases. The underlying opinion was 89 pages and very dense, so this will extend the mind-numbing discussions. Given that this discussion is old news, you might decide to stop reading here. I wrote the draft in October and made no attempt today to update it to reflect relevant new developments, of which there have been several.

If you decide to read on, I welcome your thoughts about whether the October 2023 ruling is consistent with the June 2024 opinion. Obvious potential inconsistencies include the court’s treatment of Lemmon v. Snap and Dyroff.]

The court’s opinion proudly exhibits its normative bias: “The issue in this case is whether a social media company may maximize its own benefit and advertising revenue at the expense of the health of minor users of that social media company’s applications or websites.” Imagine a reframing: “can for-profit businesses encourage greater customer loyalty using product marketing?” or “can for-profit businesses be liable for some children’s harms when other children are deriving great benefits and the business can’t figure out which users are in which category?” When you see the judge accept the kids vs. profits framing, you know that the plaintiffs won the battle over the narrative.

Negligence Claim

Duty. “foreseeability weighs heavily in favor of finding that Defendants owe a general duty to the users of Defendants’ platforms. The Master Complaint is replete with allegations that Defendants were well aware of the harms that could result to Plaintiffs by their use of Defendants’ platforms…Plaintiffs here allege that the effect of Defendants’ algorithms and operational features on Plaintiffs’ frequency and intensity of use of the social media sites was not only foreseeable, but was in fact intended. And Plaintiffs allege that Defendants were on notice through their own research as well as through independent medical studies that this intended frequency and intensity of use of Defendants’ platforms risked adverse health effects for the minor users.”

Armed with this foreseeability determination, the court distinguishes (unpersuasively IMO) numerous other defense wins, involving gambling addiction, violent TV, and music-induced suicide.

The court summarizes: “there is no basis for deviating from general principles of negligence requiring Defendants to exercise due care in the management of their property for the safety of their customers.” What exactly does “management of their property” mean in this context? It’s almost as if the court is thinking about this case like a premises liability case, where a landlord doesn’t have enough lights or ignores a tripping hazard. Thus, the court’s framing is “off” by treating editorial decisions by a publication as the equivalent of “property management.” (This reminds me of the “forum administrator” framing used by the Hassell v. Bird intermediate court, which got overturned by the California Supreme Court).

Proximate Causation.

The court falls back on the premises liability theory: “there is a close connection between Defendants’ management of their platforms and Plaintiffs’ injuries.” The services rightly pointed out that the allegations suggest some plaintiffs may have been on multiple services, which makes it hard to point the finger at any one service. The court says that’s a fact question that doesn’t need to be resolved on the motion to dismiss.

Section 230 Defense

Summarizing Lemmon v. Snap, the court says: “The Ninth Circuit has held that Section 230 does not bar a claim based on features of a social media site that have an adverse effect on users apart from the content of material published on the site.” This is an odd way of paraphrasing the case. It’s true that 230 doesn’t apply if the claim is based on something other than third-party content, but this “adverse effect on users” language is wholly manufactured by this court. It’s a weird way of framing the issue, too. Someone ALWAYS experiences “adverse effects” from publishing content; by definition, publication makes winners and losers.

It also muddles a key point. Lemmon v. Snap involved a content AUTHORING tool. Most of the plaintiffs’ allegations relate to content CONSUMPTION features. Lemmon v. Snap clearly says that claims based on third-party content remain covered by Section 230, and social media content consumption features uniformly involve third-party content.

By misparaphrasing Section 230, the court says: “Plaintiffs’ claims based on the interactive operational features of Defendants’ platforms do not seek to require that Defendants publish or depublish third-party content that is posted on those platforms.” This is obviously wrong as a legal and factual matter.

As a legal matter, Section 230 does more than just govern the publish/don’t-publish decision. It governs any liability based on third-party content. So if the harms plaintiffs experienced are due to third-party content, Section 230 applies.

As a factual matter, that’s exactly what’s happening in this case. Social media services derive value from, you know, enabling social engagement BETWEEN USERS. If users are “addicted” to anything, it’s the engagement with their peers. If users are “harmed” by trying to keep up with the Joneses, or by feeling socially ostracized from their community, it’s because of the THIRD-PARTY CONVERSATIONS that create those motivations.

The plaintiffs readily admit this. For example, the complaint alleges: “Defendants’ apps addict young users by preying on their already-heightened need for social comparison and interpersonal feedback-seeking.” Hold on–“social comparison and interpersonal feedback-seeking” sounds an awful lot like third-party content. What is the source of the “comparison” and “feedback” other than third-party content?

Take, for example, the Snap Streak. The streak measures users TALKING TO EACH OTHER. It’s metadata about the conversation. In other words, no user content, no streak. If the Snap Streak promotes “addiction,” it is purely to third-party content.
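To make that concrete, here is a minimal sketch of how a streak counter might work (in TypeScript, with entirely hypothetical names; Snap’s actual implementation obviously isn’t public). Notice that the logic reads only the timestamps of the messages users send each other, never the message bodies:

```typescript
// A minimal streak-counter sketch. All names are hypothetical; this is
// not Snap's actual code. The point: the streak is computed entirely
// from metadata about users' messages to each other. The content of
// those messages is never consulted.

interface MessageEvent {
  fromUserId: string;
  toUserId: string;
  sentAt: Date; // the only field the streak logic ever reads
}

const MS_PER_DAY = 24 * 60 * 60 * 1000;

// Count the consecutive days (ending today) on which each user sent
// at least one message to the other.
function streakDays(events: MessageEvent[], a: string, b: string, today: Date): number {
  const dayKey = (d: Date) => Math.floor(d.getTime() / MS_PER_DAY);
  const aDays = new Set<number>();
  const bDays = new Set<number>();
  for (const e of events) {
    if (e.fromUserId === a && e.toUserId === b) aDays.add(dayKey(e.sentAt));
    if (e.fromUserId === b && e.toUserId === a) bDays.add(dayKey(e.sentAt));
  }
  let streak = 0;
  for (let day = dayKey(today); aDays.has(day) && bDays.has(day); day--) {
    streak++;
  }
  return streak;
}
```

If the two users stop talking, the streak dies. The feature is nothing but a running tally of third-party conversations.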

(I’ll also note that many products and services use “streaks.” Our Peloton tracks how many weeks in a row I’ve used the device, presumably as a way of shaming me if I take a week off. Do they promote addiction to their paywalled database of exercise videos by measuring my streak? Can I sue them for the psychological distress I suffer from owning a Peloton?)

Thus, when the court says “The features themselves allegedly operate to addict and harm minor users of the platforms regardless of the particular third-party content viewed by the minor user,” this is intellectually dishonest. It may be true that no “particular” item of third-party content could be the cause of addicting another user, but Section 230 still applies if third-party content collectively creates the alleged harm, because that is still imposing liability based on third-party content. In other words, the court added a novel requirement that the claim must be based on a specific item of third-party content, but the statute applies whether the liability is tied to a specific item of content or to third-party content generally. In its prolix 89 pages, the court never justifies this doctrinal move. It should be an obviously reversible error.

The court points to continuous scrolling and auto-play functions as examples of how the plaintiffs aren’t making claims about “particular” items of content. But what’s scrolling or auto-playing in these examples? THIRD-PARTY CONTENT.
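To see why, consider what an endless-scroll handler actually does. Here’s a minimal sketch (again TypeScript, with a hypothetical endpoint and field names, not any defendant’s actual code): when the reader nears the bottom of the page, the client fetches and appends the next page of posts, every one of which is third-party content:

```typescript
// Minimal sketch of an endless-scroll feed loop (hypothetical names;
// not any defendant's actual code). Note that the only thing the
// feature can fetch and append is third-party content.

interface Post {
  authorId: string; // someone other than the service
  body: string;     // third-party content
}

// Assumed server endpoint returning the next page of users' posts.
async function fetchNextPage(cursor: string | null): Promise<{ posts: Post[]; cursor: string }> {
  const res = await fetch(`/api/feed?cursor=${cursor ?? ""}`);
  return res.json();
}

let cursor: string | null = null;
let loading = false;

// The "endless" part: when the reader nears the bottom of the page,
// silently append another page of posts.
window.addEventListener("scroll", async () => {
  const nearBottom =
    window.innerHeight + window.scrollY >= document.body.offsetHeight - 500;
  if (!nearBottom || loading) return;
  loading = true;
  const page = await fetchNextPage(cursor);
  cursor = page.cursor;
  for (const post of page.posts) {
    const div = document.createElement("div");
    div.textContent = `${post.authorId}: ${post.body}`;
    document.body.appendChild(div);
  }
  loading = false;
});
```

The scrolling mechanism is content-neutral, but the only thing it can ever deliver is more third-party posts.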

As another example of how the plaintiffs are anchoring their claims in third-party content, consider Snapchat’s “Spotlight” feature. The complaint says: “Snapchat’s Spotlight feature allows users to make videos that anyone can view, and Snap pays users whose Spotlight videos go viral.” This is a basic content monetization scheme prevalent among UGC services, and the videos are “third-party content” to Snap. (The fact that Snap pays users to license their content is irrelevant to Section 230).

The court rounds out this issue by making another misparaphrase: “Where a provider manipulates third party content in a manner that injures a user, Section 230 does not provide immunity.” (Cite to Hardin v. PDX). Per Roommates.com, it’s true that a service loses Section 230 protection when it adds the illegality to third-party content that wasn’t otherwise illegal. But here, the social media services simply made editorial choices about how to present third-party content. It’s as much a “manipulation” of third-party content as any other editorial decision made by services when publishing third-party content. By this “logic,” it’s a “manipulation” to showcase popular videos, which would mean Section 230 drops away if a showcased video causes anyone any harm. But amplification and other promotional decisions are squarely covered by Section 230. The court collapses that distinction in an untenable way that’s unsupported by the binding California precedent.

It’s not like the court ignores the first-party/third-party distinction. Indeed, the court expressly identifies several areas where the plaintiffs were clearly reaching to regulate third-party content, such as third-party ads, and says Section 230 would apply to those. The court also sees this as a jury issue: “It may very well be that a jury would find that Plaintiffs were addicted to Defendants’ platforms because of the third-party content posted thereon. But the Master Complaint nonetheless can be read to state the contrary–that is, that it was the design of Defendants’ platforms themselves that caused minor users to become addicted.” [Again, addicted TO WHAT?] The court treats this as a but-for causation question, i.e., Section 230 doesn’t apply only when third-party content is a but-for cause of the harm. That’s true, but it’s an intellectual cheat, conflating negligence “proximate causation” principles with Section 230’s application to liability for third-party content. Section 230’s scope is a statutory question, not a common law causation question, and it makes no sense to link the concepts.

The court’s attempts to make the distinction only show the logical fallacy. The court says flatly, “Section 230 does not shield Defendants from liability for the way in which their platforms actually operated.” What? That’s exactly what Section 230 does. Services publish third-party content. Of course they “design” their publication to enhance users’ experience and engagement. Every publication does that in service of rewarding content consumption. Treating publication design features as a separate basis of liability inevitably collapses into the content consumption question. The court tried to draw the line but did so in an obviously incomplete way, because drawing such a line is futile. The court’s failing was its refusal to accept that futility.

The court then makes a novel but unsupported statutory-interpretation twist. The court cites the statutory findings that Section 230 is intended to protect user control and weaponizes them to strip down Section 230:

the congressional policy of encouraging technologies that maximize user control should caution a court not to stretch the immunity provision of Section 230 beyond its plain meaning in a manner that diminishes users’ control over content they receive. So long as providers are not punished for publishing third-party content, it is consistent with the purposes of Section 230 to recognize a common law duty that providers refrain from actions that injure minor users by inducing frequency and length of use of a social media platform to the point where a minor is addicted and can no longer control the information they receive from that platform.

This is one of those lawyer tricks to redefine black as white. It misapprehends the statutory concerns about user control. Section 230 doesn’t step back if users can’t “control” their content consumption. Instead, the concern was that governments would usurp publishers’ editorial control so that content was regulated at the server level, rather than giving users the chance to control their content consumption at the client level. In other words, if users could opt in or out of content consumption–for example, by deleting the app–then Section 230 has done its job. Instead, this court’s ruling actually reimposes the server-level control that Section 230 was designed to prevent. By potentially imposing liability for user conversations, the holding could disrupt the ability of services to enable any of those conversations–even the ones that don’t cause any harm to any user, or worse, even the conversations that BENEFIT users. So by twisting the statutory words, the court endorses exactly the kind of government-imposed server-level control that Section 230 was supposed to preclude.

The court plays similar doctrinal games with the idea that Section 230 is preconditioned on giving parents control over their children’s usage: “where a plaintiff seeks to impose liability for a provider’s acts that diminish the effectiveness of parental supervision, and where the plaintiff does not challenge any act of the provider in publishing particular content, there is no tension between Congress’s goals.” This entire sentence is completely wrong.

Based on its multiple misparaphrases of the statute, the court distinguishes some relevant precedents:

  • Dyroff doesn’t apply because “liability was premised on the website’s publication and recommendation of third-party content and injury flowing from that content.” HELLO!
  • Doe II v. MySpace doesn’t apply because “the gravamen of the cause of action was for injury caused by content on the social media service.”
  • Prager U v. Google doesn’t apply because “Plaintiffs’ contentions concerning features that maximize minors’ engagement do not challenge algorithms that decide what content to publish.” SMH.

First Amendment Defense

The court frames the issue: “whether the design features of Defendants’ platforms that allegedly harmed Plaintiffs must be viewed as speech protected under the First Amendment.” Given that the service design features relate to the gathering, organizing, and disseminating of third-party content, this seems like an easy yes. Surprise! The court finds a way to reach the opposite conclusion:

the allegations can be read to state that Plaintiffs’ harms were caused by their addiction to Defendants’ platforms themselves, not simply to exposure to any particular content visible on those platforms. Therefore, Defendants here cannot be analogized to mere publishers of information. To put it another way, the design features of Defendants’ platforms can best be analogized to the physical material of a book containing Shakespeare’s sonnets, rather than to the sonnets themselves.

No. A book cover is still part of the editorial expression. So too is every design feature the court questions. So when the court says, “the First Amendment generally protects a publisher from liability where that publisher organized, compiled, and disseminated information that then harmed the plaintiff,” the court is 100% right but doesn’t know how to apply this principle. The court argues that “Design features of the platforms (such as endless scroll or filters) cannot readily be analogized to mere editorial decisions made by a publisher,” and I can’t see how the court could possibly conclude that. Endless scrolls are literally a choice about how to present content. This is literally organizing, compiling, and disseminating content.

The court is still hung up on trying to tie the claims to specific content items when the “allegedly addictive and harmful features of Defendants’ platforms are alleged to work regardless of the third-party content viewed by the users.” Why does that matter to the First Amendment? It can still be a publisher’s “speech” even if it’s applied equally to different third-party content items.

Then the court says: “Moreover, Defendants fail to explain how a requirement that Defendants change the design features of their platforms would have a chilling effect on third-party speech or the distribution of such speech…. Holding Defendants responsible in tort for these addictive features does not violate the First Amendment even if less content is delivered as a result.” Seriously? Damages are on the table–drool-worthy amounts–and I’m pretty sure that would have a chilling effect on Constitutionally protected editorial decisions. But also…the euphemism “design features of their platforms” looks a lot different if it’s phrased as their “editorial choices about how to publish content.” Courts can’t compel publishers to “change” their design features. That’s censorship.

Other Rulings

No Products Liability. In one of the few bright spots of the opinion, the court rejects products liability claims:

Product liability doctrine is inappropriate for analyzing Defendants’ responsibility for Plaintiffs’ injuries for three reasons. First, Defendants’ platforms are not tangible products and are not analogous to tangible products within the framework of product liability. Second, the “risk-benefit” analysis at the heart of determining whether liability for a product defect can be imposed is illusive in the context of a social media site because the necessary functionality of the product is not easily defined. Third, the interaction between Defendants and their customers is better conceptualized as a course of conduct implemented by Defendants through computer algorithms.

It’s a persuasive explanation. I encourage you to read it all. Boiled down, the court confirms the division between products and services: “Defendants’ platforms are not tangible; one cannot reach out and touch them.” And because it’s a service, consumers may have significantly different experiences, unlike products, which are uniform.

This is intuitive. Social media services are, well, services, not products. Slotting them into a products liability framework simply doesn’t fit.

But if other courts follow this approach, it has important implications for the Lemmon v. Snap “products liability” workaround to Section 230. Odds are that the workaround is a dead-end for plaintiffs, like so many of the common law exceptions to Section 230 that the Ninth Circuit has created over the years.

Negligent Undertaking. The plaintiffs apparently argued that it was negligent not to do age verification. Putting aside that compelled age verification is unconstitutional, the court says that age verification wouldn’t have changed the alleged harms.

Meta’s Fraudulent Concealment. “Plaintiffs have adequately alleged facts to survive a pleading challenge to their fraudulent concealment claim. Plaintiffs allege that Meta knew of its platforms’ defects, but that Meta nonetheless failed to share this information with its potential customers.” This reasoning could apply to pretty much every online service, so this is an exceptionally broad and troubling ruling. Because the court treats this as a failure-to-warn claim, Section 230 doesn’t apply.

Negligence Based on CCPA. The plaintiffs inexplicably tried to base a negligence claim on the defendants’ failure to comply with the California Consumer Privacy Act (CCPA), even though the statute explicitly says that there is no private right of action for CCPA violations.

Case Citation: In re Coordinated Proceeding Special Title Rule 3.550 Soc. Media Cases, 2023 Cal. Super. LEXIS 76992 (Cal. Superior Ct. Oct. 13, 2023)

* * *

The federal MDL decision is In re Social Media Adolescent Addiction/Personal Injury Products Liability Litigation, 2023 WL 7524912 (N.D. Cal. Nov. 14, 2023).