Section 230 Applies to Claims Over Hijacked Accounts (Except Maybe Verified Accounts)–Wozniak v. YouTube
More Bitcoin litigation. This time, malefactors hijacked popular YouTube channels and uploaded videos promoting Bitcoin scams:
First, scammers will breach YouTube's security to unlawfully gain access to verified and popular YouTube channels with tens or hundreds of thousands of subscribers. The scammers then transfer ownership or control of the channel to themselves or a co-conspirator, rename the channel to impersonate tech celebrities or companies, and delete the channel's pre-existing content.
Next, they upload and play scam videos they have created using pre-existing images and videos of famous tech entrepreneurs such as plaintiff Wozniak, Bill Gates or Elon Musk speaking at a cryptocurrency or technology conference, which is intended to deceive YouTube users into believing that the celebrity is hosting a live "bitcoin giveaway" event…
The scam video is surrounded with images and text stating that, for a limited time, anyone who sends bitcoin to a specified account, via a QR code included in the video, will receive twice as much in return. The images and text often include trademarks, such as the Apple logo, and a link to a fraudulent web address that incorporates the particular tech entrepreneur's name. However, after the users transfer their cryptocurrency in an irreversible transaction, they receive nothing in return and the scam is complete
(This gave me flashbacks to the old meme that Bill Gates would give you money if you just forwarded his email).
Negligent Security. The plaintiffs argued that YouTube "failed to implement reasonable security measures to protect verified and popular YouTube channels from being regularly hijacked and transformed to broadcast the scam videos." This is a variation on the negligent design arguments that some courts are misinterpreting, but this court NAILS IT:
this claim seeks to hold YouTube liable for allowing the scam videos to be shown on the hijacked channels. YouTubeâs actions allowing the scam videos to be shown on hijacked channels amount to a publishing decision not to prevent or alter the videos
This is Zeran redux. (Remarkably, the opinion doesn't mention Zeran at all). Recall that Zeran sued AOL for negligence for not handling the e-personation better; and the court said that AOL's continued publication of the e-personation, even after AOL said it wanted to remove it, was still its publication decision. Over a quarter-century later, we're still litigating the same issues and, at least in this opinion, reaching the same results.
To get around this, the plaintiffs cited the In re Zoom opinion involving Zoomâs liability for Zoombombing. The court responds:
We agree with the general proposition described in Zoom that section 230 immunity may not apply when a plaintiff alleges harm resulting solely from a security failure or statutory violation, independent of any harmful third-party content resulting from the violation…
the negligence cause of action and the SAC as a whole demonstrate that plaintiffs' security-based claim is predicated on the harmful content of the scam videos, without which there would likely be no lawsuit
This result is easiest to visualize by changing the facts. Assume that the malefactors hijacked the YouTube accounts but didn't change the content at all. In that circumstance, the scam plaintiffs wouldn't have been defrauded because they never would have gotten a scam promotion. In other words, the scam promotion was the sine qua non of the victims' harm, and the scam promotion was third-party content to YouTube. So the plaintiffs aren't really suing over the security breach. It's a but-for cause of the scam, but the third-party content is also a but-for and proximate cause. So of course Section 230 should apply. This principle seems so obvious and intuitive that I'm consistently baffled when judges nowadays are reaching contrary conclusions.
Negligent Design. The court handles the negligent design claims the same way as the negligent security claims. The court distinguishes Lemmon v. Snap:
While the negligent design claim in Snap was not predicated on any third-party content—indeed, the alleged harm flowed directly and solely from the negligent design and occurred without any third-party content—the same is not true here. Instead, the negligent design claim and the SAC as a whole are predicated on the scam videos, without which there would likely be no lawsuit. While a plaintiff may avoid application of section 230 immunity by alleging a negligent design claim that is independent of third-party content, that is not what plaintiffs alleged in the SAC here.
Failure to Warn. The court distinguishes Doe v. Internet Brands:
plaintiffs' claim is predicated on the third-party content, of which they assert defendants had a duty to warn. Plaintiffs thus seek to impose liability on defendants resulting from the third-party information they publish on their platform. In Internet Brands, by contrast, the alleged duty to warn existed independent of any third-party content on the defendant's platform
"Claims based on knowingly selling and delivering scam ads and scam video recommendations to vulnerable users." Section 230 applies to ads, so there's no workaround there. However, the court distinguished the 9th Circuit's Gonzalez v. Google decision (is it still good law after the Supreme Court remand?), which held that Section 230 didn't apply to revenue sharing with terrorist organizations:
plaintiffs do not allege that defendants gave money directly to the third-party scammers. There is no allegation of wrongdoing that is not dependent on the content of the third-party information. While plaintiffs allege that defendants knowingly profited from the advertisements and the associated criminal scheme, Gonzalez did not hold that profiting from third-party advertisements is beyond the scope of section 230 immunity. Instead, it distinguished between activity that depended on the particular content placed on YouTube, and activity that did not, such as directly providing material support to ISIS by giving them money.
The plaintiffs tried a typical "but the algorithms" workaround, but the court distinguishes the goofy Wohl case:
plaintiffs have not alleged that defendants undertook any similar acts to actively and specifically aid the illegal behavior. Instead, they allege only that YouTube's neutral algorithm results in recommending the scam videos to certain targeted users… There is no allegation that YouTube has done anything more than develop and use a content-neutral algorithm.
Courts have consistently held that such neutral tools do not take an interactive computer service outside the scope of section 230 immunity. [cite to Dyroff].
A reminder that the terms "neutral tools" and "neutral algorithms" are oxymorons and recipes for confusion. Fortunately, the court stayed above that fray.
"Claims based on wrongful disclosure and misuse of plaintiffs' personal information." These claims included a promissory estoppel claim, trying to use the Barnes v. Yahoo case to get around Section 230. It doesn't work:
Defendants' alleged promises here are closer to those in Murphy—more akin to general policies or statements—than those in Barnes—personalized and constituting a clear, well-defined offer.
Another reminder that Section 230 routinely applies to contract and promise-based claims when the goal is to hold the defendant liable for third-party content.
"Claims based on defendants' creation or development of information materially contributing to scam ads and videos." The plaintiffs complained about the algorithms again, to no avail: "recommending videos and selling advertisements may display and augment the illegal content, but it does not contribute to what makes it illegal."
The court is more troubled by the overlay of YouTube's verification system. On one hand, this makes sense. The whole point of an identity verification system is to confirm that the reader can trust the speaker's identity. If malefactors are free-riding on the verification to abuse user trust, then the verification system has failed completely.
On the other hand, the verification system can't be a categorical guarantee that the account will never experience a security breach, just as it can't prevent things like an accountholder surreptitiously handing over authoring rights to unverified third parties (at least until the handoff is detected). So what exactly do consumers think when they see a verification?
The court starts off with an unfortunately garbled statement of the law:
where a website operator either creates its own content or requires users to provide information and then disseminates it, thereby materially contributing to the development of the unlawful information, it may be considered responsible for that information
The first part of this statement is fine. If a website creates and disseminates its own content, then Section 230 doesn't apply. No arguments there. The other part of the statement is wrong, however. The Roommates.com case noted an exception to Section 230 when a defendant "design[s] your website to require users to input illegal content." Notice the difference: this court's recapitulation would strip defendants of Section 230 if the website requires users to "provide information," whether that information was legal or "illegal." By definition, every UGC service necessarily requires users to "provide information," i.e., the UGC. So the court's recapitulation mangles Roommates.com and, read literally, eliminates Section 230 for every defendant who needs it. That can't be right. I'm hoping other courts will go back to Roommates.com and bypass this obvious misstatement.
Applying its garbled standard, the court says the plaintiffs allege that:
YouTube is wholly responsible for creating the information concerning the authenticity of the channel owners in the verification badges. Unlike the scam videos themselves, the third-party scammers did not create or develop the verification badges—defendants allegedly did. Nor is there any suggestion in the SAC that the verification badges contain information voluntarily provided by users and thus merely redirect or highlight third-party content. We therefore conclude the SAC adequately alleges that under section 230, YouTube is responsible for creating the information in the verification badges.
Note that the court doesn't engage with the extensive and conflicting precedent in this area, such as Roland v. LetGo (no 230 for saying that account was verified), Mazur v. eBay (no 230 for saying bidding was "safe"), and Milo v. Martin (230 applies when UGC site self-characterizes as telling the "truth").
Nevertheless, the court dismisses the complaint because the plaintiffs didn't adequately show how the verification materially contributed to the fraud. With respect to the scam victims, the "allegations do not demonstrate that the verification badges played any significant or meaningful role in conveying false impressions concerning the source or authenticity of the videos." As one example, only 7 of the 17 victims claimed they relied on the false verification. The complaint is also light on the timing interrelationships between when the verifications and hijackings occurred. The court gives the plaintiffs the chance to amend their complaint and make another attempt at resurrecting this 230 workaround.
Implications
I believe this opinion could be appealed to the California Supreme Court, but I wonder whether either side will do so. YouTube won most of the ruling, so I think they will take their chances on remand. Similarly, though the plaintiffs had their complaint gutted, they have a chance to amend and may prefer to allocate their resources to exploiting the opening left by the appellate court. [UPDATE: The panel denied a rehearing on April 2. 2024 WL 1406533]
At its core, this is a cybersecurity case. YouTube accounts got hacked, and the plaintiffs are suing over the consequences. The plaintiffs are essentially proposing to treat YouTube as the financial guarantor of any hacks of verified accounts. YouTube can't prevent all hacks, nor can any other service. So what do we expect YouTube to do in those circumstances? If YouTube faces unlimited exposure for verified account hacks that it can't prevent, then it can't verify accounts, which would be a net loss for everyone. Could YouTube disclaim what it means to be "verified" to reflect the potential intervening activities both within and outside its control? That sounds like a hard consumer education challenge.
Then again, it would be helpful to know what steps YouTube actually took in response to known hacks and how it has attempted to systematically harden verified accounts against hacks. That shouldn't necessarily change its legal liability, but I hope YouTube has been making responsible decisions.
Though YouTube is the defendant in this case, this ruling is of high interest to Twitter, which is well-known for having issued blue-check verified accounts to pretenders and interlopers. (I mentioned this concern with the Roland v. LetGo decision too). Twitter is surely praying that YouTube gets a win here.
It's a relief to see such a strong Section 230 opinion coming from the California Court of Appeal. That court has become an unreliable steward of Section 230, as illustrated most recently by the Liapes trainwreck.
I've seen numerous stories frame this ruling as a win for the plaintiffs. Wow, the plaintiffs and their lawyers are really spinning it. I see it completely differently. First, the opinion was a strong and broad endorsement of Section 230 in the face of multiple now-typical plaintiff arguments to get around Section 230. Almost everything the plaintiffs threw at the wall didn't stick. Second, the court dismissed the complaint entirely, so the plaintiffs have to claw their way back into this case. The court gave the plaintiffs a roadmap to get around Section 230, but there's no guarantee they will do so. And even if they do, they still have to navigate the prima facie case. So the plaintiffs are a long way from winning, and it's not at all guaranteed they will get there.
Case Citation: Wozniak v. YouTube, LLC, 2024 WL 1151750 (Cal. Ct. App. March 15, 2024)