Catching Up on Some Social Media Addiction Rulings

The KGM bellwether trial is continuing in Los Angeles. Meanwhile, this post rounds up three related developments that are taking place outside the media spotlight.

Snap, Inc. v. Eighth Judicial District, 2026 WL 501564 (Nev. Supreme Ct. Feb. 23, 2026)

I previously blogged a related Nevada Supreme Court ruling involving TikTok. Nevada alleges Snap harms users by addicting them. The court seemed quite comfortable with results-driven reasoning. This opinion reads like the kind of Internet Law final exam student answer that gets a B- or C+.

Personal jurisdiction

The court seemed to hew pretty closely to the (uncited) Briskin ruling:

According to the State, Snap enters into contracts with each Nevada user through its terms of service, receives consent to collect each user’s personal data, then packages such data when it sells advertisement space that can target specific cities in Nevada and/or Nevada residents. Based on Snap’s business model, it has a strong interest in keeping users on its app for long periods of time—thereby supporting the State’s theory that Snap is purposefully designed to addict its users….

Snap enters into contractual agreements with its Nevada users through its terms of service, creates “communities” for Nevada students to engage with each other, advertises in the forum state, and collects user data to generate ad revenue….

Snap’s collection of user data and advertisement sales in Nevada establishes a pervasive presence that sufficiently relates to the underlying litigation, aimed at punishing Snap for negligently creating an addictive platform aimed at boosting its ad revenue.

First Amendment

While Moody disallows liability for the exercise of editorial functions, Snap fails to demonstrate how the State’s complaint only highlights such editorial functions to support its theory that Snap’s app was negligently designed to cultivate addiction in its younger users….

The State disavows any intent to impose such age verification or parental functions by means of its complaint. In looking to the complaint and its structure, the State points to the age verification and parental controls that Snap currently has in order to provide context for how Snap has negligently designed its app to be harmful to young users, not an attempted restriction on free speech…

the complaint describes the harm children have suffered as a result of Snap’s app and showcases the knowledge Snap had about the harm it continues to inflict on its users. At this point in the litigation, it cannot be said that the State seeks to compel speech from Snap

The court’s age authentication sleight-of-hand is dirty. Nevada claims it’s not requiring age authentication, but rather that Snap implemented age authentication incorrectly. Is there really a difference? If the state can base a negligence claim on its disapproval of a service’s age authentication, the state effectively controls the service’s implementation.

Section 230

Consistent with its similar TikTok ruling, the court says it won’t evaluate Section 230 feature-by-feature.

Huh? Of course Section 230 should apply feature-by-feature. The California federal and state court social media addiction rulings spent many dozens of pages doing just that. The court here prefers to speak in generalities rather than specifics because…? The only substantive justification I could see is that it permitted the court to categorically reject 230’s application when otherwise it would have partially applied. Dirty.

This leads to another dirty move by the court:

here it appears that the features highlighted in the State’s complaint provide context for its claims—that Snap misrepresented the harm its app can cause to its younger users. It does not appear that the State seeks to hold Snap liable for third-party content—thereby taking the underlying complaints outside of Section 230 immunity

The court reductively oversimplified and overgeneralized Nevada’s claims to reach this summary. This move allows the court to sidestep Snap’s many arguments showing that third-party content is very much at issue in the case.

Hartford Casualty Insurance Co. v. Instagram LLC, 2026 WL 560349 (Del. Superior Ct. Feb. 27, 2026)

I know insurance coverage disputes sound uninteresting, but they can be pretty spicy. TL;DR: Meta tendered its defense of the two California social media addiction lawsuits to its insurers. The court says the insurers have no duty to defend, so Meta will have to bear the tens (hundreds?) of millions of dollars in defense costs as well as any damages awards.

Facebook argues that the insurance companies (there are multiple insurance companies involved, and some are finger-pointing at each other) ganged up on Facebook and forced Facebook to take positions in this coverage dispute that conflict with its positions in the underlying litigation. The court is unperturbed:

the conduct alleged in the Social Media Litigation—even when viewed through the lens of negligence—describes deliberate acts rather than accidents under the policies. Because the Court’s determination regarding Meta’s intent is based strictly on the face of the underlying complaints, it does not “overlap” with the factual truth of the allegations to be litigated in California.

Substantively, the insurance policies cover an “occurrence,” which all parties agree means an “accident.” The court says an accident is “an unexpected, unforeseen, or undesigned happening or consequence from either a known or unknown cause.” But “[a]n accident does not occur when the insured performs a deliberate act unless some additional, unexpected, independent, and unforeseen happening occurs that produces the damage.”

The court says:

Because Meta’s platform design choices—as alleged—were voluntary business decisions aimed at increasing engagement, they fall squarely within this broad definition of deliberate conduct….

Meta concedes that the plaintiffs allege these choices were made to “maximize engagement.” It is therefore unassailable that the complaints allege that Meta’s conduct was a purposeful effort to operate and maximize its platforms.

Given the financial stakes and the ambiguities of the law, this ruling surely will be appealed.

Breathitt County School District v. Meta Platforms, Inc., 4:22-md-03047-YGR (N.D. Cal. Feb. 9, 2026)

In the federal MDL, the social media defendants moved for summary judgment on some of the school districts’ claims. The motion doesn’t work.

Among its many rulings, the court again rejects the Section 230 defense:

The Court’s MTD order already established that certain platform design choices—the Actionable Defects—are sufficiently independent from content to avoid Section 230. Here, Breathitt provides deposition, documentary, and expert opinion evidence that each defendant’s platforms contained Actionable Defects which engender compulsive use. Breathitt also offers evidence specific to the district, including expert opinion and individual testimony, describing how social media use affects students within the school environment. Finally, Breathitt proffers affidavit and survey evidence linking students’ social media use to hard costs and opportunity costs incurred by the district. This evidentiary showing, as further outlined below, creates a triable issue of fact as to each of the Actionable Defects. The issue of evidence regarding the barred features is largely one of admissibility under Federal Rule of Evidence 403, not a basis for summary judgment