Facial Recognition Database Vendor May Not Qualify for Section 230–Vermont v. Clearview

As you recall, Clearview AI is a facial recognition database vendor. Some law enforcement departments have adopted its service, but we aren’t sure how many. We also aren’t sure about its facial recognition accuracy (or, for that matter, how much “AI” is represented by the AI in its company name), but many facial recognition databases have dubious accuracy, especially when dealing with minorities. As a result, some states have restricted law enforcement use of facial recognition, even when provided by a third-party service like Clearview. Clearview also got some notoriety for claiming it had scraped 3 billion photos from services that don’t permit such bulk collection, like Facebook. I’m pretty sketched out by Clearview, but the main question is whether it has a clean legal bill of health. With lawsuits mounting against Clearview, the courts will be answering this question soon.

In this case, Vermont’s AG sued Clearview for both unfair and deceptive acts pursuant to Vermont’s Consumer Protection Act and Fraudulent Acquisition of Data law. Clearview moved to dismiss. It wins dismissal of a couple of pieces of the lawsuit; the rest survives.

Section 230

Clearview tries to knock out the lawsuit using a Section 230 defense. The FTC has had some success getting around Section 230; the state succeeds here as well.

The court reviews the standard 3-part test for a Section 230 defense:

ICS Provider. Everyone agrees Clearview qualifies.

3rd Party Info. The court says the state’s claims are “based on the means by which Clearview acquired the photographs, its use of facial recognition technology to allow its users to easily identify random individuals from photographs, and its allegedly deceptive statements regarding its product. This is not simply a case of Clearview republishing offensive photographs provided by someone else, and the State seeking liability because those photographs are offensive.”

Publisher/Speaker Claims. Rather than talk about publisher/speaker claims, the court seems to repeat that the state’s claims target first-party actions, not third-party content, such “as screen-scraping photographs without the owners’ consent and in violation of the source’s terms of service, providing inadequate data security for consumers’ data, applying facial recognition technology to allow others to easily identify persons in the photographs, and making material false or misleading statements about its product.”

As explained in the opinion, I think the court’s Section 230 analysis is correct. However, some of the state’s claims look like they encroach on Section 230. As just one example, contrary to the court’s claim that the case isn’t about republication of offensive photos, the state claimed it was an unfair practice to distribute photos of minors without their parents’ permission. Section 230 might very well apply to Clearview’s distribution of third-party photos, even without parental consent. So the court’s analysis probably needed more nuance.

First Amendment

The court says the First Amendment categorically doesn’t apply to deceptive ads. Further, claims that “Clearview provided inadequate data security and exposed consumers’ information to theft, security breaches, and surveillance lack a communicative element,” which puts them outside the First Amendment.

The court also expresses skepticism about whether Clearview’s software qualifies as speech:

The user simply inputs a photograph of a person, and the app automatically displays other photographs of that person with no further interaction required from the human user. In that sense, the app might not be entitled to any First Amendment protection. Complicating matters, however, is the fact that Clearview’s app is similar to a search engine, and some courts have generally recognized First Amendment protection for search engines, at least to the extent that the display and order of search results involve a degree of editorial discretion. [cite to Dreamstime]

Still, the court sidesteps the software-as-speech question, saying the state’s claims are content-neutral and survive intermediate scrutiny because:

[the claims are] based purely on the alleged function of the Clearview app in allowing users to easily identify Vermonters through photographs obtained unfairly and without consent, thereby resulting in privacy invasions and unwarranted surveillance. Presumably, the State has no problem with Clearview operating its app so long as the Vermonters depicted in its photograph database have fully consented…. The State plainly has a substantial governmental interest in maintaining a fair and honest commercial marketplace, and in protecting the health, welfare, and privacy of its citizens.

Wow, this gets right into the heart of the privacy/free speech tension, no? The court seems to be saying that the First Amendment wouldn’t restrict Vermont from creating a common law rule requiring opt-in consent before publishing someone’s photo, so long as it justifies the effort on privacy grounds. This surely can’t be right, just like states can’t create absolutist publicity rights for people’s faces/likenesses without navigating complex First Amendment questions.

The court says the state’s claims wouldn’t burden too much speech because the “State estimates that the relief it requests will leave more than 99 percent of Clearview’s database intact.” Wut? Is that based on the fact that Vermont has a small state population, so not many Vermonters are in the database? If the court declares that the state’s claims don’t violate the First Amendment, the same rationale ought to apply to every other state’s claims, which puts the entire database at risk. The court had to put on serious blinders to think that resolving the Vermonters’ claims has no implications for the rest of the database.

(Further, the court’s discussion ignores the problem of how a photo database operator knows the domicile of the person depicted in a photo. Does Clearview have a state location flag in its database, and if so, how reliable is that info? If Clearview can’t reliably sort Vermonters from non-Vermonters, then saying the legal rule only affects 1% of the database is meaningless because Clearview won’t know *which* photos are in that 1%.)

Clearview also argued that the state seeks to restrict its First Amendment right to scrape data. This struck me as a weird argument to make, and the court doesn’t accept it. The court says the state’s claims won’t restrict data collection, just usage. The hiQ line of cases interprets the CFAA, not the Constitution, so it doesn’t help Clearview’s First Amendment claim.

The court rejects Clearview’s vagueness argument, saying “Clearview had fair notice that its alleged conduct implicates privacy interests and might reasonably be considered ‘unfair’ under the Act.”

Unfairness

The court says that the state is protecting residents’ privacy interests, which the court treats as a magic wand that legitimizes all of the state’s unfairness claims.

Clearview invoked State v. VanBuren, a case where the defendant defeated a prosecution for distributing NCP photos. The court distinguishes VanBuren in two ways. First, that was a criminal case, which has a higher burden of proof. Second, Clearview was doing more than just distributing photos; it was extracting biometric information. The court says this violated users’ expectations because Clearview’s activities conflict with the TOSes of the social media sites that were scraped.

Deception

Clearview claimed that the affected Vermonters weren’t “consumers” because they didn’t purchase the service. The court responds that when the state is suing, “consumers” refers to any residents.

The state objected to Clearview’s representation that consumers have erasure rights that vary by jurisdiction (without further specifying where the right was available). As the state argued, “this creates a reasonable belief in any Vermont consumer who is not a privacy law scholar that they can take some action to protect their privacy.” (Hey privacy scholars, I guess you are out of this lawsuit, sorry). The court says this marketing could constitute a deceptive claim. Several other claims get the same result.

Clearview does better with respect to its claim that it only processes data when it “does not unduly affect your interests or fundamental rights and freedoms.” The court says this is a non-actionable opinion.

Conclusion

I don’t see how Clearview survives the onslaught of litigation it faces. Even if its core facial recognition service is legally permitted (which I’m skeptical about due to the anti-facial recognition laws in states like Illinois), Clearview almost certainly engaged in problematic scraping and overhyped marketing. Perhaps Clearview will figure out how to navigate all of the legal threats, but it will need a deep war chest to stay afloat that long. I think this ruling previews its future in court: though this is only a ruling on a motion to dismiss, the court’s privacy concerns tilted the scale towards the state on most of the claims. That dynamic will make it hard for Clearview to win any cases. If I were a Clearview investor, I would be quite nervous about my investment at this point.

Case citation: State v. Clearview AI, Inc., Docket No. 226-3-20 Cncv (Vt. Superior Ct. Sept. 10, 2020)