Microsoft Can Terminate User Account for Allegedly Possessing CSAM. But What If It Made a Mistake?–Deutsch v. Microsoft

The plaintiff (a NY lawyer/financial executive?) allegedly uploaded CSAM to his Microsoft OneDrive folder in violation of Microsoft’s “Code of Conduct.” [Nomenclature note: CSAM is what used to be called child porn; the case calls it CSEAI, i.e., child sexual exploitation and abuse imagery.] Allegedly, PhotoDNA detected the material and a human reviewer confirmed it as CSAM. Following its standard protocol, Microsoft permanently blocked his account access and submitted a cybertip to NCMEC. The plaintiff initiated an arbitration against Microsoft alleging breach of contract, breach of the implied duty of good faith and fair dealing, negligence, breach of privacy, conversion, and consumer fraud. Microsoft defeated the arbitration on Section 230(c)(2)(A) grounds. The plaintiff petitioned a federal court to vacate the award; the court upheld the arbitrator’s ruling (unsurprising given the heavy judicial deference to arbitral decisions).
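(Tech aside: for readers curious how hash-based scanning of this sort generally works, here is a minimal, illustrative Python sketch. It is not PhotoDNA, which is proprietary; it uses the open-source imagehash library’s perceptual hash as a stand-in, and the known-hash list, distance threshold, and human-review callback are hypothetical placeholders meant only to show the match-then-human-review shape of the workflow the opinion describes.)

```python
# Illustrative sketch only -- not Microsoft's PhotoDNA, which is proprietary.
# Shows the general shape of hash-match-then-human-review scanning, using the
# open-source "imagehash" perceptual hash as a stand-in. The known-hash list,
# the distance threshold, and the human_confirms callback are hypothetical.
from typing import Callable, Iterable

import imagehash
from PIL import Image

MATCH_THRESHOLD = 5  # max Hamming distance treated as a candidate match (made-up value)


def scan_upload(path: str,
                known_hashes: Iterable[imagehash.ImageHash],
                human_confirms: Callable[[str], bool]) -> bool:
    """Return True only if the file matches a known hash AND a human reviewer confirms it."""
    upload_hash = imagehash.phash(Image.open(path))
    # Subtracting two ImageHash objects yields their Hamming distance.
    if any(upload_hash - known <= MATCH_THRESHOLD for known in known_hashes):
        # An automated match is only a candidate; per the workflow described in
        # the case, a human reviewer confirms before any account block or cybertip.
        return human_confirms(path)
    return False
```

The two-step design exists because a perceptual-hash match alone can produce false positives, which is exactly the possibility discussed later in this post.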

Microsoft’s reliance on Section 230(c)(2)(A) is a bit unusual given that most defendants don’t want to squabble over the provision’s “good faith” prerequisite. Then again, if you are looking for a “good faith” reason to remove content, alleged CSAM violations are pretty much a lock because of the massive legal exposure for not making the removal.

Here, the court says “Microsoft acted in good faith because it did not act with anticompetitive intent nor single out Petitioner.” The court giving Microsoft this benefit of the doubt (i.e., treating the removal as presumptively in good faith unless there’s anticompetitive intent or singling-out of the plaintiff) would be a doctrinal novelty if other courts followed it, but this is a review of an arbitration decision, where the court didn’t need to be super-careful.

The plaintiff argued that Microsoft needed to produce the actual image in question. Given that sharing and possessing the image would be a federal crime if it were in fact CSAM, it’s unsurprising that neither the arbitrator nor the court wanted to take that step. Instead, the court endorses Microsoft’s subjective discretion: “it was Microsoft’s subjective determination to restrict, or block petitioner from its platform; whether Microsoft made a mistake is irrelevant to the inquiry as long as Microsoft demonstrated good faith.”

The court also upheld the arbitrator’s decision “that Section 230 immunity applies to consumer protection claims.”

As regular readers know, I think services like Microsoft have (and should have) the legal right to remove content and terminate accounts in their sole editorial discretion, even knowing that mistakes are inevitable. However, that legal standard allows the arbitrator and court to sidestep the critical questions underlying this case: did the plaintiff actually possess CSAM, and what did law enforcement do with Microsoft’s cybertip to NCMEC? I did a cursory search and didn’t quickly find any public records of an investigation or prosecution. (I emailed the plaintiff, and he said he had “no contact whatsoever from law enforcement.”) It would be troubling if law enforcement ignored legitimate cybertips, so one possibility is that law enforcement hasn’t acted because the tip wasn’t accurate.

With the cybertip hanging out there and the prospect of severe legal consequences if he actually possessed CSAM, I find it hard to believe that the plaintiff would intentionally escalate public scrutiny of the image unless he was extremely confident that the file in question wasn’t CSAM. That makes me wonder if this is a(nother) case of overclassification of material as CSAM…? As the Senate gears up for another review of the EARN IT Act predicated on the empirically unsupported claim that Internet services don’t take CSAM seriously enough, query whether this case highlights an opposite “false positives” problem that EARN IT would exacerbate–with potentially life-changing consequences for the mistargeted.

Case citation: Deutsch v. Microsoft Corp., 2023 WL 2966947 (D.N.J. April 17, 2023)

Some of the cases this situation brought to mind: