February 29, 2012
Healthcare Data Breach Victims' Lawsuit Tossed When They Can't Show Harm--Paul v. Providence
By Eric Goldman
Paul v. Providence Health Systems--Oregon, SC S059131 (Ore. Sup. Ct. Feb. 24, 2012)
A Providence employee left disks/tapes containing records for 365,000 patients in his/her car, and they were stolen. The opinion implicitly assumes that the data wasn't encrypted. The opinion doesn't explain why the employee had unencrypted patient data for a third of a million people lying around in a car. Unlike a deliberate security intrusion, there's no evidence that the thief sought the data or had criminal intent towards the data.
Nevertheless, the Oregon Attorney General couldn't ignore a data loss of this magnitude/ineptitude, and Providence settled with the AG by agreeing:
to contract with a credit monitoring company to provide two years of credit monitoring and restoration services to any patient who requested it, to reimburse any patient for any financial loss resulting from the misuse of credit or identity theft, and to establish a website and toll-free call center to assist patients with questions related to the theft. Under the agreement, defendant also paid the Attorney General more than $95,000. Defendant estimated the cost of the credit monitoring and other services that it agreed to provide at approximately $7 million.
Apparently, the AG's deal wasn't good enough for the privacy plaintiffs' bar (at least, not for their personal fortunes), because six years after the settlement--the breach occurred in 2005; the AG settlement came in 2006--the Oregon Supreme Court finally kiboshed the class action lawsuit.
The plaintiffs marshaled the following statements of loss:
* "financial injury in the form of past and future costs of credit monitoring, maintaining fraud alerts, and notifying various government agencies regarding the theft, as well as possible future costs related to identity theft"
* "noneconomic damages for the emotional distress caused by the theft of the records and attendant worry over possible identity theft"
However, the plaintiffs had to contend with the following facts:
* the AG settlement already provided some meaningful relief to affected patients, including some credit monitoring and a promise to financially compensate patients for adverse data misuse
* there was no evidence that any patient had suffered any financial loss or other adverse consequence due to the data loss. Indeed, there's no evidence that anyone had ever accessed the data on the disks/tapes (the court says doing so would require "specialized equipment").
The latter bullet point proved fatal to the plaintiffs' claims for common law negligence and under Oregon's consumer protection act. Under both doctrines, the plaintiffs didn't allege a legally cognizable loss. The economic losses alleged by plaintiffs are simply mitigation steps to reduce the risk of future harm, and negligence law doesn't recognize these anticipatory steps:
the cost of credit monitoring that results...from the risk of possible future harm...is insufficient to state a negligence claim
Every court that has addressed damage claims for credit monitoring following the theft of computer records containing personal information -- but no wrongful use of that information -- has reached a similar conclusion.
The Ninth Circuit's Krottner v. Starbucks opinion doesn't get a mention, but it supports this outcome too. The court distinguished the First Circuit's Hannaford case on the basis that some data breach victims had actually experienced bogus credit card charges.
We are aware of no other jurisdiction that has allowed recovery for negligent infliction of emotional distress in circumstances where the alleged distress is based solely on concern over the increased risk that a plaintiff's personal information will, at some point in the future, be viewed or used in a manner that could cause the plaintiff harm.
Just to clarify, the court dismissed the claims based on the substantive elements, not on standing grounds. Article III standing doesn't apply because the case was in state court. However, this ruling is consistent with the numerous cases dismissing data breach claims on Article III grounds.
I'd like to think we're nearing the tail end of data breach lawsuits like this where, irrespective of the data holder's malfeasance, nothing bad actually happened to the victims or (at this late date) is likely to happen. The plaintiffs' lawyers who brought this claim might be partially excused for their optimism because they filed the case so long ago, when it wasn't totally clear they would lose. Newly filed lawsuits can't claim that excuse. Going forward, I hope plaintiffs' lawyers are getting the very clear message from the courts: Make sure you have at least one truly injured data breach victim, or don't waste your time and money.
More of our extensive coverage of this topic.
California Appeals Court Says Emails That Don't Identify Sender Violate State Spam Statute – Balsam v. Trancos
[Post by Venkat Balasubramani]
Balsam v. Trancos, Inc., 2012 WL 593703 (Ca. Ct. App.; Feb. 24, 2012)
It seems like most emailers and anti-spam activists have moved on from worrying about spam email to other things, such as social networks, but a few disputes continue to linger. One of them involves Dan Balsam, of danhatesspam.com fame, a lawyer and self-proclaimed anti-spam activist.
Balsam sued Trancos in 2008, alleging that he received numerous unsolicited emails from Trancos. After a bench trial, the trial court awarded Balsam $1,000 in liquidated damages for each of seven emails. (Here’s the prior blog post on this case: "Plaintiff Wins $7,000 Following Bench Trial on Claims Under California Anti-Spam Statute -- Balsam v. Trancos.") The trial court also awarded Balsam $81,900 in attorneys' fees. The trial court rejected Balsam's claims under California's Consumer Legal Remedies Act and held that Trancos' CEO, Brian Nelson, could not be held personally liable. Trancos appealed, arguing that the emails did not violate California’s anti-spam statute (B&P 17529.5(a)(2)). Trancos also argued that Balsam’s claims were preempted by CAN-SPAM. On appeal, the appeals court rejects both of Trancos’ arguments. Balsam cross-appealed on the trial court's resolution of the CLRA issue and its refusal to hold Nelson personally liable. The court also rejects Balsam's arguments on cross appeal.
Violations of Cal. Code Sec. 17529.5(a)(2)
Section (a)(2) makes it unlawful for anyone to advertise in an email sent from or to a California email address where the email “contains or is accompanied by falsified, misrepresented, or forged header information.”
According to the opinion, Trancos sent out emails from various "nonsense" domain names (e.g., misstepoutcome.com; modalworship.com; moussetogether.com) that were registered to Trancos via a privacy proxy. The emails did not identify Trancos, but mentioned that if recipients wanted to opt out, they could forward the emails to USAProductsOnline, or click on a link provided in the email. USAProductsOnline was not a separately existing entity, but it had registered a PO Box. Nelson, the CEO of Trancos, testified that he registered the domains using privacy protection services because he and Trancos had been harassed in the past.
The court focuses on whether the California Supreme Court’s decision in Kleffman v. Vonage, which construed section 17529.5(a)(2), lets Trancos off the hook here. (See "Use of Multiple (Even Random or Garbled) Domain Names to Bypass Spam Filter Does not Violate Cal. Spam Statute -- Kleffman v. Vonage.") Kleffman involved emails sent on behalf of Verizon using nonsensical or random domain names that were designed to evade spam filters. Kleffman made a variety of unsuccessful arguments why using garbled or random domain names constituted “misrepresentation.”
The appeals court distinguishes Kleffman, saying that in Kleffman, all of the emails were sent using domain names that were “traceable” to Vonage’s marketing agent. In contrast, in this case, the court says the emails were not traceable to Trancos because Balsam could not determine the identity of the sender using a "publicly available database," and thus Trancos' email had misrepresented header information. The emails listed "USAProductsOnline" as the sender and provided a street address for USAProductsOnline, but this turned out to be a PO Box, and Balsam had to subpoena the information provided to the PO Box company to obtain the information provided by USAProductsOnline. The court also distinguishes the Fourth Circuit's decision in Omega World Travel v. Mummagraphics saying that, in that case, the identity of the sender was readily obvious to the recipient, who had no trouble tracking down the sender.
Preemption Under CAN-SPAM
Trancos also argued that Balsam’s state law claims were preempted by CAN-SPAM. The court notes the split of authority in federal courts regarding whether CAN-SPAM preempts claims which fall short of common law fraud, or whether CAN-SPAM only preempts claims which do not involve “falsity or deception.” Citing to the California Appeals Court’s decision in Hypertouch v. Valueclick, the court says that claims which allege any sort of falsity or deception escape preemption under CAN-SPAM’s preemption clause. Balsam’s claims therefore aren't preempted.
The thrust of the court’s decision is that emails have to identify some actual person or entity they are sent by or on behalf of, whether in the “from line” or the email body. Emails that do not so identify themselves violate the statute. There are a few problems with this from my perspective.
First, it broadens the definition of “header information” to include not just the from line but also the body of the email. The California statute does not define “header information” but Kleffman looked to CAN-SPAM’s definition, which clearly talks about either the human or computer readable parts of the “from line.” The overall structure of CAN-SPAM lends weight to the view that the header information prong does not deal with information in the actual body of an email.
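The structural point can be made concrete. In an RFC 5322 email message, the header fields (From, To, routing data) are mechanically separate from the body, which is where the "USAProductsOnline" opt-out language appeared. Here is a minimal Python sketch using the standard library's email module; the addresses and text are hypothetical, loosely modeled on the opinion's facts:

```python
from email.message import EmailMessage

# Build a message resembling the ones at issue: the From line names a
# "nonsense" domain, while only the body names "USAProductsOnline".
# (The addresses and text here are invented for illustration.)
msg = EmailMessage()
msg["From"] = "Paid Survey Central <offers@misstepoutcome.com>"
msg["To"] = "recipient@example.com"
msg["Subject"] = "You have been selected"
msg.set_content(
    "To opt out, write to USAProductsOnline, PO Box 1234, Anywhere CA."
)

# Under CAN-SPAM's definition, "header information" is the source,
# destination, and routing information carried in the header fields,
# not anything inside the message body.
header_info = {name: value for name, value in msg.items()}
body = msg.get_content()

print("From header:", header_info["From"])
print("'USAProductsOnline' in any header?",
      any("USAProductsOnline" in v for v in header_info.values()))
print("'USAProductsOnline' in body?", "USAProductsOnline" in body)
```

If "header information" is read the way Kleffman read CAN-SPAM's definition, the misrepresentation inquiry stops at the header fields; the court's approach instead reaches into the body text.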
Second, it injects the element of concealment into the California statute. It’s fair to presume that if the legislature intended the statute to cover not just misrepresentations of facts regarding an email's origin but also concealment, the legislature would have made that explicit.
Third, the FTC rules interpreting CAN-SPAM allow the use of private mail boxes (that the sender registers with a commercial mail receiving agency established under US Postal Service regulations) to satisfy the requirement of listing the sender's street address. The FTC announcement of these regulations indicates that the regulation accommodates two interests: (1) law enforcement's ability to track down the sender (ostensibly with a subpoena) and (2) recipients' ability to communicate with senders (by sending paper correspondence). Not only is the issue of identifying the sender comprehensively regulated by CAN-SPAM, but the CAN-SPAM regs allow the very practice Balsam complained about.
The court distinguishes Kleffman, but I wasn’t persuaded by this. Everyone agreed in Kleffman that the emails were fairly traceable, but importantly, they were traceable not to Vonage directly but to its “marketing agent.” Because the emails all identified Vonage (the entity being advertised), it’s tough to say for sure, but as far as identifying the actual sender of the email goes, Kleffman says nothing more than that if the emails can be identified as having been sent by some entity (e.g., Vonage’s “marketing agent”), that’s sufficient. Here, the problem seems to have been that the entities identified in the emails as senders were not actual legal entities.
The court’s decision slams the use of private registration services in the context of email marketing. Balsam previously tried to hold Tucows liable for emails sent via a domain name registered (privately) through Tucows. (See "Domain Name Privacy Protection Services Not Liable for Failure to Disclose Identity of Alleged Spammer -- Balsam v. Tucows.") Balsam was not successful in that case, but the court’s decision here contains plenty of bad juju towards the use of private registration services.
Although the California Supreme Court may have better things to do with its time, this looks like a good candidate for review so it can clarify the scope of Kleffman v. Vonage.
It's unclear how much mileage Balsam and company will get out of this ruling. As the court notes, several cases have held that claims such as this one are preempted, and it's a safe bet that Balsam's subsequent defendants will remove on the basis of preemption and try to get the claims dismissed on that basis. This is undoubtedly a significant ruling, but it's unlikely to open the floodgates for judgments against emailers.
Plaintiff Wins $7,000 Following Bench Trial on Claims Under California Anti-Spam Statute -- Balsam v. Trancos
Use of Multiple (Even Random or Garbled) Domain Names to Bypass Spam Filter Does not Violate Cal. Spam Statute -- Kleffman v. Vonage
An End to Spam Litigation Factories?--Gordon v. Virtumundo
Fourth Circuit Rejects Anti-Spam Lawsuit--Omega World Travel v. Mummagraphics
Domain Name Privacy Protection Services Not Liable for Failure to Disclose Identity of Alleged Spammer -- Balsam v. Tucows
February 28, 2012
Reidentification Theory Doesn't Save Privacy Lawsuit--Steinberg v. CVS Caremark
By Eric Goldman
Steinberg v. CVS Caremark Corp., 2012 WL 507807 (E.D. Pa. Feb. 16, 2012)
CVS Caremark provided consumer data to pharma companies and data brokers. The plaintiffs alleged that the data transfers violated CVS's privacy policies, but CVS apparently disclosed only "de-identified" data as contemplated by HIPAA. Plaintiffs couldn't sue under HIPAA, both because CVS complied with HIPAA and because HIPAA doesn't enable a private cause of action for these violations. Although these facts implicate Sorrell v. IMS, that case didn't come up because the plaintiffs didn't sue under an analogous statute specifically regulating pharmaceutical data transfers.
Instead, the plaintiffs sued under Pennsylvania's consumer protection act, claiming that CVS made material misrepresentations in its privacy policies about its data handling. The court dismisses the suit--with prejudice!--on two principal grounds.
First, it says that CVS told the truth in its privacy policies:
The plaintiffs do not allege that the defendants disclose Protected Health Information to third parties. Rather, they disclose de-identified information, which (a) federal regulations do not prohibit; and (b) is consistent with the defendants' statements that they safeguard information that "may identify" consumers.
To salvage the situation, the plaintiffs' lawyer tried to argue that the de-identified information could be re-identified by recipients, but apparently the plaintiffs' lawyer couldn't make the argument very cogently:
Although they admit that the information the defendants disclose to third parties is de-identified within the meaning of HIPAA, the plaintiffs have argued that it can be "re-identified." There is no such contention in the CAC, and plaintiffs' counsel admitted that the basis for such an argument comes from a single journal article and would take the form of expert testimony that a re-identification risk exists with respect to de-identified information generally, not as to the plaintiffs in this case.
It seems pretty clear that the lawyer didn't fully understand re-identification--at least, not well enough to explain how it might trump CVS's privacy promises. Thus, the court never really gets to the merits of the re-identification theory, but clearly it did not pique the judge's interest. Presumably the "single journal article" referenced is Paul Ohm's Broken Promises of Privacy article. Looks like Paul missed out on a potentially lucrative expert gig.
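For readers unfamiliar with the theory, re-identification typically works as a linkage attack: joining "de-identified" records to a separate, identified dataset on shared quasi-identifiers such as ZIP code, birth year, and sex. A toy Python sketch with entirely invented data (the names, ZIPs, and drugs are made up for illustration):

```python
# Toy illustration of a linkage attack: neither dataset alone names the
# patient, but joining on quasi-identifiers (ZIP, birth year, sex)
# re-identifies her. All records here are invented.

deidentified_rx = [  # "de-identified" pharmacy records
    {"zip": "19104", "birth_year": 1961, "sex": "F", "drug": "Drug A"},
    {"zip": "19104", "birth_year": 1985, "sex": "M", "drug": "Drug B"},
]

public_roster = [  # e.g., a voter roll or marketing list that has names
    {"name": "Jane Doe", "zip": "19104", "birth_year": 1961, "sex": "F"},
    {"name": "John Roe", "zip": "08540", "birth_year": 1985, "sex": "M"},
]

def reidentify(rx_rows, roster):
    """Link prescription rows to named people via quasi-identifiers."""
    quasi = ("zip", "birth_year", "sex")
    hits = []
    for rx in rx_rows:
        matches = [p for p in roster
                   if all(p[k] == rx[k] for k in quasi)]
        if len(matches) == 1:  # a unique match pins the record to a person
            hits.append((matches[0]["name"], rx["drug"]))
    return hits

print(reidentify(deidentified_rx, public_roster))  # Jane Doe linked to Drug A
```

The expert-testimony problem the court flags is visible even in this toy: the attack shows a risk exists in general, but proving these particular plaintiffs were re-identifiable would require showing that their quasi-identifiers are unique in some roster actually available to recipients.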
Second, the court rejects the consumer protection claim on two different standing grounds:
1) the named plaintiff didn't suffer any cognizable loss. The best the plaintiffs' lawyer could do was claim "the loss of the value of his demographic information, or the loss of an opportunity to pay less for his prescriptions with the understanding that the defendants would be profiting from the sale of his information." These types of losses have flopped repeatedly before, and they do so again (citing, among others, LaCourt, JetBlue and Low v. LinkedIn).
2) the named plaintiff didn't allege justifiable reliance on CVS's representations. To get around this specific requirement of Pennsylvania law, the plaintiffs tried to allege that CVS was a fiduciary; that argument goes nowhere.
The unjust enrichment claim fails because there was no expectation that the information provided to CVS would be compensated. The intrusion into seclusion claim fails because the plaintiffs voluntarily provided their data to CVS.
As we've already seen, privacy plaintiffs' lawyers are avid readers of the privacy scholarly literature, looking for new theories to help them grind their axes. Privacy scholars should be gratified by this practitioner attention. As we know, most law review articles never get read (my mom won't even read mine). As this case illustrates, privacy plaintiffs' lawyers may build their entire cases around the academic literature. Personally, I think this fact means privacy scholars need to ensure that their articles are ready for the rough-and-tumble world of profit-seeking class action litigation. It would be irresponsible for a privacy scholar to toss out a half-baked academic thought about new ways of suing over privacy, knowing that the plaintiffs' bar is looking for fresh meat--anything--to get past 12(b)(6) motions irrespective of the case's true merit. I'm not accusing Paul Ohm's article of being half-baked (far from it, it's one of the most interesting articles I've read in years); but I couldn't be as complimentary towards some of the other privacy scholarship I see, and I hope the thought of being potentially responsible for lots of wasted litigation activity will encourage all privacy scholars to honestly reflect on the social merits of their arguments.
February 27, 2012
Reputation Management Lawsuit Is Shot Down--Bernard v. Donat
By Eric Goldman
Donald Ray Bernard is an energy consultant, big game hunt tour operator, former lawyer and former law professor. His LinkedIn page. His Google search results look like the kind of search results I see when someone uses a reputation management service; I find SEOed vanity search results are often linked to a litigious hypersensitivity about reputation (see, e.g., the litigation fusillade from Bev Stayart). Unfortunately, as with far too many lawyer-plaintiffs/law professor-plaintiffs, the judge has to teach him what the law actually says.
Bernard alleges that Donat went on an online rampage against Bernard's veracity and former legal practice, including an attack blog, posts at Complaintsboard and PissedConsumer, attack emails and postings to Scribd. Bernard sued Donat for Lanham Act false advertising, defamation and tortious interference. In this ruling, Judge Whyte dismisses the Lanham Act false advertising claim as unmeritorious (with leave to amend), which (if Bernard can't successfully replead) will result in the state law claims going to state court.
The opinion doesn't say exactly who Donat is, but it implies that Donat is a rival in the energy and hunting industries. Nevertheless, the opinion says Bernard didn't allege any competitive injury/diversion from Donat's online activities, and in particular, that Donat's posts about Bernard's legal career won't necessarily affect their competition in the hunting business. Judge Whyte goes further to say that Bernard didn't show the online posts were "commercial advertising or promotion" or were even commercial speech at all.
Unfortunately, this opinion doesn't provide a clear statement why Donat did what he did. On the one hand, it seems entirely plausible that Competitor A in a personal services business (such as big game hunting tours, where consumer trust is essential) could hurt Competitor B by casting doubt on the person's general trustworthiness. On the other hand, Donat is free to speak the truth as a concerned citizen, and if that's what's going on here, reputation management/"right to forget"-style lawsuits to cover up truthful facts are a misuse of the court system. We don't know which styling fits these facts yet.
Either way, the Lanham Act's false advertising provision isn't designed to govern activity like negative consumer reviews and gripe sites. To me, that's a feature, not a bug. Unfortunately, the Lanham Act's poor drafting encourages far too many meritless assertions over social discourse.
One oddity: Donat apparently didn't bring an anti-SLAPP motion, even though this lawsuit superficially looks like a SLAPP and even though the lawsuit is in CA and therefore governed by CA's broad anti-SLAPP law. Donat is proceeding pro se, so perhaps that explains the omission.
February 24, 2012
Banning Sex Offenders from Social Networking Sites is Unconstitutional--Doe v. Jindal
By Eric Goldman
Doe v. Jindal, 2012 WL 540100 (M.D. La. Feb. 16, 2012)
Sex offenders--especially those who victimize children--are pariahs in our society. If it were possible, I bet many folks would favor blasting them off into space rather than "punishing" or "rehabilitating" them. Any legislative proposal to restrict the "rights" of sex offenders--even those who have served their time or otherwise been rehabilitated (whatever that means)--faces a one-sided political economy. No one ever sticks up for sex offenders, so laws targeting them typically pass quickly and non-contentiously.
In 2006, Congress passed the Adam Walsh Child Protection and Safety Act of 2006, which it extended with the Keeping the Internet Devoid of Sexual Predators Act (KIDS Act of 2008). (The names of these laws reflect common legislative tricks to speed passage and suppress opposition). Collectively, these two laws require sex offenders to submit their online aliases into a database which social networking sites can voluntarily access and then block the aliases if they choose.
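Mechanically, the Adam Walsh/KIDS Act scheme amounts to a voluntary blocklist lookup: a participating site checks requested handles against the self-reported alias database. A minimal sketch (the aliases and usernames below are invented; access to the real database is restricted):

```python
# Toy sketch of the voluntary-blocking scheme: a site checks a requested
# username against a set of self-reported offender aliases. The entries
# and usernames below are hypothetical.

reported_aliases = {"coolguy2006", "fisherman_tx"}  # invented examples

def allow_registration(requested_name: str) -> bool:
    """Return False if the requested handle matches a reported alias."""
    return requested_name.lower() not in reported_aliases

print(allow_registration("newuser99"))    # True: no match, registration allowed
print(allow_registration("CoolGuy2006"))  # False: matches a reported alias
```

The sketch also makes the scheme's weakness obvious: it catches only the aliases an offender chose to report.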
Apparently unsatisfied with Congress' efforts, Louisiana went a giant step further and prohibited certain sex offenders from accessing social networking sites, chat rooms or peer-to-peer networks. The punishment could be up to 10 years in prison with "hard labor" (this always reminds me of Cool Hand Luke). The law was defective on so many fronts, including:
* overly prophylactic. The law doesn't criminalize misuse of a website; it criminalizes visiting the website due to the possibility that it might be misused. This is ridiculously overinclusive. It's a bit like saying sex offenders can't drive cars because they might drive to victimize children. Further, I'm not aware of any social science validating the benefits of such a broad prophylactic ban on Internet technologies. Instead, hindering Internet usage by sex offenders deprives them of an essential tool for reintegrating into normal society. It's like the laws that restrict sex offenders from living too close to schools; drawn too broadly, those laws ensure that the sex offenders have to move further away from their jobs, lie about their residence, or become homeless. At some point, eliminating the rights of sex offenders almost guarantees their further criminal behavior because they lack other meaningful choices.
* the law didn't have a scienter requirement. A sex offender violates the law simply by visiting one of the verboten websites, even if the Internet user didn't know that the website violated the law. For example, some websites provide a web interface to a BitTorrent P2P implementation, and the user may not even know. Or many websites have chat functions that are apparent only once you get there. The only sure way a sex offender could comply with the law is to avoid the Internet altogether.
* probation officers could grant permission to visit specific websites, but the law didn't specify any standards for granting that permission. Furthermore, some sex offenders didn't have probation officers (i.e., they had completed the probation), and the state law apparently directed those individuals to the federal court system--without providing funding or standards to the federal courts. You could read this opinion as the judge saying No Thank You!
While these drafting problems are serious, they are symptoms of an underlying root problem: social media exceptionalism. The statute's real mistake is trying to carve out social media from the rest of the Internet and subject it to special treatment. As I've noted before, social media exceptionalism is bad policy, and it's impossible as a matter of statutory drafting. See my 2009 article, The Third Wave of Internet Exceptionalism, and my 2007 summary, Social Networking Sites and the Law. Take a look at the specific statutory definitions here:
“Chat room” means any Internet website through which users have the ability to communicate via text and which allows messages to be visible to all other users or to a designated segment of all other users.
Huh? That describes every message board, as well as every tool that lets users comment on blog posts and news articles. How about this definition:
“Social networking website” means an Internet website that has any of the following capabilities:
(a) Allows users to create web pages or profiles about themselves that are available to the general public or to any other users.
(b) Offers a mechanism for communication among users, such as a forum, chat room, electronic mail, or instant messaging.
Huh? That sounds like every UGC website, even if we read the connector between (a) and (b) as "and" and not "or." Some examples: Wikipedia--yes. Boing Boing--yes. eBay--yes. Google--probably; it depends on how the statute contemplates a single company's differently branded but integrated services.
The KIDS Act also had a definition of "social networking website":
an Internet website-- (i) that allows users, through the creation of web pages or profiles or by other means, to provide information about themselves that is available to the public or to other users; and (ii) that offers a mechanism for communication with other users where such users are likely to include a substantial number of minors; and (iii) whose primary purpose is to facilitate online social interactions
This is better than Louisiana's definition, but not by much. Which of the sites I evaluated above aren't covered by this definition? The harm that flows from this definition is much less than Louisiana's criminalization with a decade in jail plus hard labor--the KIDS Act definition simply defines who can access the database of sex offenders' online aliases for voluntary blocking purposes--but analytically it's no more precise.
Naturally, it's easy to pick apart the statutory language, but I can't offer alternative language to fix the definitional overinclusiveness problem because I don't think it's fixable. None of these definitions come close to describing only the thing they target and nothing else. In my increasingly frequent talks about social media law, I make the point that social media law and Internet law are largely co-extensive because social media cannot be linguistically differentiated from the Internet ecosystem. This case shows that the overlaps are not only linguistic, but possibly Constitutionally required.
The ruling doesn't require social media sites to allow sex offenders on their sites, and they can still use the Adam Walsh/KIDS Act database to block known sex offender aliases. (Of course sex offenders may not properly report all of their aliases, a general deficiency of the self-reporting database approach). Thus, striking down this law doesn't immediately open up all of the Internet to the sex offenders. Nevertheless, it does mean that they can use the Internet without inadvertently committing a crime.
RadioShack May Be Liable for Accessing Images from Recycled Customer Cellphone -- Steele v. RadioShack
[Post by Venkat Balasubramani]
Steele v. RadioShack Corp., 11-14021 (E.D. Mich.; Feb. 3, 2012)
Steele bought a new phone at RadioShack, after which a RadioShack employee transferred the data from Steele’s old phone to his new one. Steele also left his old phone at RadioShack for recycling. After Steele left, RadioShack accessed his old phone and viewed personal information, including photographs which Steele took at his worksite. RadioShack forwarded these photos to Steele’s employer. As a result, Steele was fired.
The parties' arguments are muddled, and the court expresses its displeasure at the “inaccurate, insufficient, and jumbled arguments from both sides.” Steele at least brought a claim for common law intrusion into seclusion, which required him to show (1) the existence of private and secret subject matter; (2) that the plaintiff had a right to keep that subject matter private; and (3) that the defendant obtained the information through means objectionable to a reasonable person.
The court focuses on the second and third elements, finding that RadioShack did not raise the first element sufficiently in its initial moving papers. As to the second element, RadioShack appeared to argue that giving the phone to RadioShack for recycling somehow terminated Steele’s right to keep the information private, but the court rejects this argument:
[RadioShack’s argument] is illogical – it says that a customer has no right to keep personal information private once he allows RadioShack access to it during the course of business. If this court embraces this argument, then RadioShack would not have any liability for disclosing personal credit card information it obtained while processing a sale. Customers routinely give personal information in order to process transactions – information that they would expect to be disposed of and kept private, not distributed to whomever the store feels like giving it to.
RadioShack also argued that Steele fails to satisfy the third element (that the information was accessed in a way that was offensive to the reasonable person). The court rejects this argument as well, noting that a reasonable person who gave his or her cellular phone to someone with the understanding that the device would be destroyed or recycled does not consent to access of the personal information on the device. The court says that this is a question for the jury and not amenable to resolution at the motion to dismiss stage.
In contrast to the privacy tracking lawsuits, the plaintiff in this case alleges that his private information was actually disclosed to a third party and ended up causing him harm. The case brought to mind other cases where customer information was not properly disposed of: Pinero v. Jackson Hewitt and Putnam Bank v. Ikon Office Solutions. In both of those cases the claims failed for lack of out-of-pocket loss or even actual disclosure of the data to third parties. Here, the plaintiff alleged both of these things.
February 22, 2012
Courts Continue to Grapple with Discovery Disputes Around Social Networking Evidence
[Post by Venkat Balasubramani]
Tompkins v. Detroit Metro Airport, 10-10413 (E.D. Mich.; Jan. 18, 2012)
This is a slip and fall case where the plaintiff alleges that injuries she suffered at Detroit’s Metro airport affected her quality of life and ability to work. Defendant asked plaintiff to release her medical records and records from her Facebook account. She refused as to the Facebook account, arguing that the private portions of her account should not be turned over in discovery.
The court says (citing to McMillen v. Hummingbird and Romano v. Steelcase) that there’s no privilege as to information contained in social networking accounts. Access to this information by an opponent in litigation is governed by traditional discovery principles. The court notes that in both Romano and McMillen the plaintiffs made injury claims that were inconsistent with information contained in the public portions of their social networking accounts. The court says that while there is no privilege protecting private (or quasi-private) information in a social networking account, “the [d]efendant does not have a generalized right to rummage at will through information that [p]laintiff has limited from public view.” The court says there has to be a threshold showing that “the requested information is likely to lead to the discovery of admissible evidence.” [Translation: a standard argument in every personal injury case that the plaintiff must have posted pictures of herself frolicking on the beach will not fly.]
Here, defendant argued that the public postings and surveillance photographs satisfied this standard. The court says no. The picture of plaintiff holding a “very small dog and smiling” is not inconsistent with plaintiff’s claims of being injured. (“The dog in the photograph appears to weigh no more than five pounds and could be lifted with minimal effort.”) The surveillance photograph showing plaintiff pushing a grocery cart similarly is not inconsistent with plaintiff’s claim of being injured. The court rejects defendant’s attempt to access the private portion of plaintiff’s Facebook account:
If the Plaintiff’s public Facebook page contained pictures of her playing golf or riding horseback, Defendant might have a stronger argument for delving into the non-public section of her account. But based on what has been provided to this Court, Defendant has not made a sufficient predicate showing that the material it seeks is reasonably calculated to lead to the discovery of admissible evidence.
The court also says that the request for the entirety of the account will sweep in information that is in no way relevant to the case and is thus overly broad.
Davenport v. State Farm Mutual Auto Ins., 2012 U.S. Dist. LEXIS 20944 (M.D. Fla.; Feb. 21, 2012)
Here, the insurance company defendant sent a request to plaintiff seeking all photographs posted to social networking sites, whether posted by plaintiff or by a third party. As in Tompkins, the court says there’s no special privilege that attaches to social networking content, but the rules of discovery limit an opponent’s ability to request this information.
Plaintiff proposed that she be required to produce only photographs taken by her that depict her. She argued that the photos she has been “tagged” in do not satisfy the Rule 26 relevance standard, but the court disagrees. The court says plaintiff has to produce all photographs that depict her, whether she posted them or was tagged in them. The court does limit this by noting that the default discovery rules only require a party to produce information within the party’s “possession, custody, or control.” The court says this “likely” means that plaintiff will “need to produce only photographs that she posted or in which she was tagged.” The court does not offer any additional details on whether material posted to a social networking site remains within that party’s “possession, custody, or control.”
Separately, defendant had asked to inspect any devices used to post any material to social networking sites, but the court shoots this down.
Courts are really all over the place on issues relating to the discovery of information posted to social networks. The decisions grapple with (but none coherently address) the following issues:
• whether any of the communications are covered under the Stored Communications Act and how this affects discoverability;
• whether an opponent can obtain direct access to a non-party's or witness's social networking site (several decisions have ordered password swaps, waivers, or in-camera reviews);
• whether the discovery request should be directed to the social network directly or to the party whose information is sought;
• what threshold showing is required from a party seeking discovery;
• whether information posted to a social networking site is within the “possession, custody, or control” of the party who posted it (for purposes of Rule 26).
Courts appear perfectly willing to smack down discovery requests that overreach, but continue to struggle with finding a balance and dealing with the logistical issues inherent in these types of discovery disputes.
Court Orders Disclosure of Facebook and MySpace Passwords in Personal Injury Case -- McMillen v. Hummingbird Speedway
Deleted Facebook and MySpace Posts Are Discoverable--Romano v. Steelcase
Judge Offers to Facebook 'Friend' Witnesses in Order to Resolve Discovery Dispute -- Barnes v. CUS Nashville
Plaintiff Can't be Forced to Accept Defense Counsel's Facebook Friend Request in Personal Injury Case -- Piccolo v. Paterson
Court Orders Plaintiff to Turn Over Facebook and MySpace Passwords in Discovery Dispute -- Zimmerman v. Weis Markets, Inc.
Pennsylvania Court Orders Personal Injury Plaintiff to Turn Over Facebook Password to Defendant -- Largent v. Reed
February 21, 2012
Facebook Gets Decisive Win Against Pseudo-Competitor Power Ventures -- Facebook v. Power Ventures
[Post by Venkat Balasubramani, with comments from Eric]
Facebook, Inc. v. Power Ventures, Inc., et al., C 08-05780 JW (N.D. Cal.; Feb. 16, 2012)
The long-running dispute between Facebook and Power Ventures came to a close last week, with Judge Ware granting Facebook’s motion for summary judgment on Facebook's claims under CAN-SPAM, California Penal Code section 502, and the Computer Fraud and Abuse Act. The power.com domain name went up for auction in 2011 and it appears that the domain name was not owned by Power Ventures, the defendant in this lawsuit. The court deferred ruling on the liability of individual defendant Steve Vachani. [Update: see an update below regarding the ownership of the domain name and its relationship to this dispute.]
Facebook alleged that Power Ventures allowed Power.com users to access their Facebook profiles through Power.com’s interface, and also induced its users to send emails to other Facebook users telling them to try out Power.com. The specifics of how Power Ventures' conduct differed from other Facebook apps isn't entirely clear, although it is clear that Power Ventures did not participate in Facebook’s authorized developer program, and Facebook undertook some technical efforts to prevent the access of Facebook by Power Ventures and Power.com users. As with the enforcement efforts of many networks, Facebook’s approach here raises some questions as to how courts will view other similar efforts of people who are a part of the Facebook ecosystem. The big question Professor Goldman always raises--and I think is relevant here--is to what extent there may be blowback from this ruling to Facebook (or its partners) in other cases. The case also raised data portability issues and issues relating to the scope of California Penal Code section 502. Likely for this reason, EFF participated as an amicus.
Standing: The first question regarding Facebook’s CAN-SPAM claims was whether Facebook had standing to sue. Citing Gordon v. Virtumundo, the court says that Facebook has standing under CAN-SPAM to the extent it can show that it suffered harm that is of the type “uniquely encountered by” providers of internet access services. Virtumundo said end users don’t have standing under CAN-SPAM, and end users cannot manufacture standing by casting themselves as ISPs. The plaintiff in that case signed up for hosting services provided by third parties and did not suffer any particular “adverse effects” from the spam, other than the annoyance of having to delete it. Here the court says that the evidence produced by Facebook demonstrates that it suffered unique adverse effects as an ISP: (1) Power.com users sent approximately 60,000 emails, and (2) Facebook undertook specific efforts to stop these emails. (The evidence offered by Facebook seemed equivocal as to whether it was directed to stopping unwanted communications from Power.com end users or whether Facebook was concerned with restricting Power Ventures' access of Facebook's networks. Facebook's enforcement efforts spilled over into both categories, but the evidence seemed more suited to a Computer Fraud and Abuse Act claim than a CAN-SPAM claim.)
Did Power Ventures ‘Initiate’ the Messages: CAN-SPAM defines "initiate" to include those who “originate or transit” a message, or “procure” its origination or transmission. Routine conveyance of a message is excluded from the definition of initiate. Facebook argued that Power Ventures initiated the messages because it ran a contest for Power.com users signing up their Facebook friends (if you signed up more than 100 users, Power Ventures would pay you $100). The court concludes that this inducement is sufficient to categorize Power Ventures as one of those who “initiated” the messages, even though end users selected which friends would be emailed, and Facebook’s servers filled in the header information when the user requested an email to be sent.
Were the Emails Misleading: The final question with respect to the CAN-SPAM claims was whether the messages were misleading in any way. Power Ventures understandably argued that the messages were sent through Facebook, came from a Facebookmail.com email address, and therefore could not contain any misleading header information. Power Ventures also argued that the text of the messages contained information about Power.com, and that Power Ventures could not have changed the headers of the emails because it had no control over them. The court says all of this is irrelevant:
[the] emails did not contain any return address, or any address anywhere in the e-mail, that would allow a recipient to respond to [Power Ventures]. Thus, as the header information does not accurately identify the party that actually initiated the e-mail within the meaning of [CAN-SPAM], the Court finds that the header information is materially misleading as to who initiated the email.
Whoa. The court does not cite to Mummagraphics, where the 4th Circuit rejected the same basic argument. (See "Fourth Circuit Rejects Anti-Spam Lawsuit--Omega World Travel v. Mummagraphics.") Mummagraphics' key holding is that, in order to be actionable, an email header must be materially misleading, and if the recipient would reasonably know where the email was coming from, there should be no CAN-SPAM violation. Here the emails were sent through Facebook's platform by end users, so Power Ventures has an even better argument than the defendant in Mummagraphics that the header information was not misleading.
California Penal Code Section 502
We also need to do some planning to make sure we [access data from Orkut] in a way where we are not really detected. Possible rotating IP’s or something. Don’t really understand this too well. . . . . We need to plan this very carefully since we will have only one chance to do it.
[Ouch!] In granting summary judgment, the court says there is no reason “to distinguish between methods of circumvention built into a software system to render barriers ineffective and those which respond to barriers after they have been imposed.”
Computer Fraud and Abuse Act Claim
The court also grants summary judgment on the Computer Fraud and Abuse Act claim, finding that the access of Facebook’s servers by Power Ventures was “without authorization,” and Facebook satisfies the $5,000 damage threshold.
This case looked like it was teed up to highlight a data portability issue and the question of whether Facebook can keep third parties who don’t go through its authorized developer channels but who act at the request of end users out of its network. The court’s decision gives short shrift to both of those issues. There is probably not much precedent to the contrary (if any), but Power Ventures' access of “information” from Facebook’s servers was ostensibly done at the request of Facebook end users, and the information that Power Ventures extracted was the contact information (friend lists) of Facebook end users. Thus, Facebook's allegations regarding Power Ventures' actions shouldn't in theory come within the Computer Fraud and Abuse Act. True, there were some additional facts which made Power Ventures' arguments tougher from an optics standpoint, but the end result is that if users want to access data, they have to do so on Facebook’s terms, and may not do so using a third party tool that is not a part of Facebook’s developer platform. (To my knowledge, the Computer Fraud and Abuse Act as written does not look to whose data is accessed, so the statute allows the result achieved by Facebook in this case.)
The CAN-SPAM ruling is remarkable--and screwy--on a number of levels. Several courts have ruled that emails sent through networks (such as MySpace or Facebook) are covered by CAN-SPAM, but those decisions did not confront the practical issue of how an emailer can comply with CAN-SPAM with respect to emails that are sent by an end user via a network such as Facebook--i.e., where those who "initiate" a message cannot alter the content of the messages. (See "N.D. Cal.: Facebook Posts are Electronic Mail Messages, Subject to CAN-SPAM -- Facebook v. Maxbounty.") I wonder whether Facebook considered the practical aspects of this ruling: retailers who send messages through Facebook are not CAN-SPAM compliant! End users don’t have standing to sue, but retailers and companies who induce end users to send messages through their friends can be considered to "initiate" these messages, and under the court’s ruling, since the messages come from Facebook (via facebookmail.com) and do not contain the retailer's header information, these messages are materially misleading under CAN-SPAM.
Update: I originally speculated whether Facebook would try to go after the power.com domain name or the proceeds of the auction. Via email, Scott Smith, the CEO of RokME Inc., who is brokering the sale of the power.com domain name, reminded me that the power.com domain name was leased to Power Ventures and therefore the domain name is not a part of this dispute:
Several years ago Power Assist Inc. the owner of Power.com leased the domain to Power Ventures Inc. During the course of the lease Power Ventures Inc. operated Power.com as a social network aggregation site and did some things that Facebook disagreed with. At that time Facebook sued Power Ventures Inc. and by association, Power.com was noted in the filings. That is the only connection.
The lease on the domain Power.com ended last February. Once the lease ended the owner was free of any further obligations and decided to sell the domain. My company - RokMe Inc. was hired to broker the sale. . . .
Since that time there has been no connection with Power Ventures Inc. or its owner Steve Vachani. It has taken this long for the case to wind its way through the courts and because of the earlier association, the domain Power.com was unfortunately caught up in the web of their legal wrangling.
Ugh. Bad facts make bad law, and this case has plenty of badness to go around. Power Ventures was a lousy poster child for a test case on data liberation. Yet, the court's results are troubling for everyone--including Facebook!--and I can only hope future courts recognize the opinion's goofiness when deciding whether to accord it any weight.
The CAN-SPAM ruling is the most troubling. Running through the elements tendentiously, the judge finds a technical violation of the CAN-SPAM elements, but this element-by-element review leads to a tone-deaf outcome overall. Stripping away the detail, users were using Facebook's messaging tools to talk with each other. Sure, Power Ventures was interested in that conversation and facilitated it in a number of ways, but calling Power Ventures a spammer because users talked to other users is baffling. It's a little like the misguided underpinnings of the FTC Endorsement and Testimonial Guidelines; this case similarly treats Power Ventures like an "advertiser" and thus makes it liable for how users talked to each other. Huh?
As Venkat points out regarding retailers, this ruling could set up other Facebook users for a similar fate if they get Facebook users to use Facebook's native tools to talk to each other. This could be counterproductive for Facebook's long-term interests if businesses (and others) start to fear that Facebook now has the discretion to sue them as a spammer whenever it wants.
Similarly counterproductive to Facebook's interests are the expansive interpretations of the CFAA and Penal Code 502. Facebook grabs a lot of content from third parties without permission--for example, every time a user posts a link, Facebook grabs and republishes snippets of the linked page without permission. Is that a CFAA/502 violation BY FACEBOOK? Facebook might have other defenses, but it seems to have negated any "we're just a proxy for the users" defense. Because I'm a cyberlaw purist, I hope Facebook doesn't get hoist with its own petard; but if that ever happens, it will be hard to suppress a slight schadenfreude smile.
Clearly, though, Facebook is signalling that it won't download email addresses from third party sources like Gmail without the third party's permission--like for its "find a friend" feature. After all, even if Facebook has the user's permission to access the user's own data, that's legally meaningless without the data source's permission as well. The net result is that data sources can erect fences around user data despite the user's wishes.
Indeed, the most tone-deaf aspect of the ruling is the anti-competitive backdrop to Facebook's enforcement action, which doesn't even get a nod from this opinion. Personally, I would not have trusted Power Ventures with my personal data, so losing them as a competitive option is no big deal to me. Facebook positions this case as being about user protection. Their formal statement: "We are pleased that the court ruled in our favor. We will continue to enforce our rights against bad actors who attempt to circumvent Facebook's privacy and security protections and spam people," said Craig Clark, Lead Litigation Counsel, Facebook. But I don't find it all that credible that Facebook was motivated solely by a desire to protect us as users from a dangerous Power Ventures (indeed, I believe Power Ventures could have sucked down an immense amount of user data through Facebook's APIs with, at most, minimal oversight by Facebook). The other obvious possible motivation: Facebook didn't like the competition from Power Ventures, so it shut down Power Ventures' access to Facebook's users. With its massive leadership in its niche, it seems only a matter of time before antitrust regulators start sniffing around Facebook. Its enforcement action against Power Ventures probably won't spur that, but Facebook will have to tread cautiously with future blatant shutdowns of competitors.
February 19, 2012
Another Data Loss Case Tossed on Article III Grounds--Whitaker v. Health Net
[Post by Venkat Balasubramani]
Whitaker v. Health Net of California, Inc., Civ S-11-0910 KHM-DAD (E.D. Cal.; Jan. 19, 2012)
This is another data breach class action. Plaintiffs tried to squeeze their claims through a narrow opening left by Ninth Circuit precedent, but the court dismisses the claims for lack of standing.
IBM manages Health Net's information technology infrastructure. In January 2011, IBM informed Health Net that it had lost nine Health Net server drives, which contained the personal and health information of approximately 800,000 Health Net customers. Health Net sent a letter to the affected individuals in March 2011. The opinion does not mention whether Health Net offered credit monitoring or other preventive services. At the time the parties finished briefing the motion to dismiss, three of the nine drives had been recovered; the other six remained missing. Both defendants filed motions to dismiss.
The court focuses on whether plaintiffs sufficiently alleged “injury in fact.” Plaintiffs argued that they satisfied the standing requirements established by the Ninth Circuit in Krottner v. Starbucks and Ruiz v. Gap. (Here are blog posts on Krottner ("Starbucks Data Breach Plaintiffs Rebuffed by Ninth Circuit") and Ruiz ("9th Circuit Affirms Rejection of Data Breach Claims Against Gap").) The court distinguishes Krottner and Ruiz on the basis that, in both of those cases, the data breach occurred due to theft and not loss of the data. The court also highlights that the plaintiffs did not allege any actual harm, apart from the loss of data and the risk that the data would be misused. Although one of the plaintiffs received a letter informing them that their minor child's social security number had been misused, the court says that this does not confer standing on the plaintiffs, who have to satisfy standing on their own (unless they are asserting third-party rights).
The court also relies on Low v. LinkedIn for the proposition that speculative allegations regarding disclosure or harm are not sufficient to support Article III standing. (See also Reilly v. Ceridian.)
End result: the court dismisses with leave to amend. The plaintiffs have thirty days to amend their complaint to allege sufficient harm.
It’s worth keeping in mind that although plaintiffs cited to Krottner and Ruiz, the plaintiffs in those cases did not prevail. Despite findings that their allegations were sufficient for Article III standing purposes, the plaintiffs lost on the merits in both cases. Plaintiffs have tried every possible combination of allegations (theft of information; misplacement of information; employment information; health information), but courts simply refuse to find a cognizable claim unless the plaintiff can allege that his or her data has been misused in a way that causes out-of-pocket losses. A few cases have pointed to credit monitoring services as recoverable mitigation, but where the defendant offers up this relief to consumers voluntarily, a plaintiff is pretty much out of luck.
It’s also interesting to note that this case involved claims under California statutes which provide for the confidentiality of medical records. Given that the court did not discuss statutory damages, I would assume the statutes in question did not provide for these damages. Even if they did, failure to satisfy Article III standing could still undermine the claims. (A case pending in front of the United States Supreme Court may answer this question. See "'Sleeper' Case Asks Whether Plaintiffs Can Sue Without An Injury.")
Starbucks Data Breach Plaintiffs Rebuffed by Ninth Circuit -- Krottner v. Starbucks
9th Circuit Affirms Rejection of Data Breach Claims Against Gap -- Ruiz v. Gap
LinkedIn Beats Referrer URL Privacy Class Action on Article III Standing Grounds--Low v. LinkedIn
Third Circuit Says Data Breach Plaintiffs Lack Standing Absent Misuse of Data -- Reilly v. Ceridian
February 14, 2012
Posting Family Photos to Facebook With Snarky Comments Isn't Harassment of Family Member -- Olson v. LaBrie
[Post by Venkat Balasubramani with comments from Eric]
Olson v. LaBrie, 2012 WL 426585 (Minn. App. Ct. Feb. 13, 2012)
This case is what happens when a headline from The Onion comes to life. Aaron Olson sought a harassment restraining order against his uncle Randall LaBrie. Olson argued that LaBrie harassed Olson by...get this...posting “innocuous [but surely awkward] family photographs” to Facebook and making mean comments directed toward Olson. The photos included Olson as a child, “posing in front of a Christmas tree.” LaBrie also tagged Olson in the photos. When Olson became aware of the photos, he requested they be removed or “altered to erase” Olson. LaBrie demurred, although he untagged Olson. Understandably, LaBrie told Olson that if he did not like the photos, “he should stay off Facebook.”
Olson was not “friends” (in the Facebook sense, or apparently, in any sense) with LaBrie, and accessed the photos via his mother’s Facebook account. The parties had a peripheral argument about how the photos were accessed. LaBrie said that the photos were meant for his inner circle, but Olson said they were accessible to the general public. At the end of the day, it turns out not to matter. The court says that posting these types of photos to Facebook does not amount to harassment, and the comments offered by Olson as evidence were nothing more than “mean, disrespectful comments,” which cannot form the basis for liability. The Minnesota anti-harassment statute is directed at:
"repeated incidents of unwanted acts, words, or gestures" that have a substantial effect on the "safety, security, or privacy of another."
On appeal, Olson tried to argue that LaBrie's conduct had a substantial effect on his privacy, but he did not raise that issue in the trial court and the appeals court says he waived it. Even assuming he had raised it, the court says that Minnesota law recognizes three types of common law privacy violations: intrusion, appropriation, and the publication of private facts. Minnesota law does not recognize “false light publicity.” Olson argued that one of these common law privacy violations could have supported issuance of the anti-harassment order, but the court says that the statute defines harassment, and there’s no need to look to case law for additional definitions.
Olson raised two other issues that are worth noting, and that really make me wonder whether this wasn’t some Onion editor’s attempt to generate a story. First, he argued that the trial court erred in not crediting the testimony of his mother, who testified that LaBrie’s conduct was offensive. Second, Olson tried to get the record sealed. Hello, Streisand Effect!
The only thing that would have kicked this opinion up a notch would have been a cite to awkwardfamilyphotos.com.
Private Facebook Group's Conversations Aren't Defamatory--Finkel v. Dauber
Revenge Blogger Ordered to Remove Blog--Johnson v. Arlotta (also from Minnesota--is there something in the water there?)
This case demonstrates that the family that Facebooks together doesn't necessarily stay together. I don't understand why Olson was so concerned about the posting of old "innocuous" family photos, although I can understand why Olson might object to "mean, disrespectful comments." At the same time, I also don't understand LaBrie's response that if Olson didn't like it, he should stay off Facebook; nor does it make sense that LaBrie said he didn't intend for Olson to see the photos because they weren't Facebook friends. It seems fair for someone to object to the publication of photos even on a service the person doesn't use or where the person can't see the photos. Obviously there's a backstory to this family squabble that got washed out in the appellate opinion. I guess it goes to show that you can pick your Facebook friends but you can't pick your family. A protip of general applicability: never allow sharp objects at family reunions.
Talk Notes: Death of the Initial Interest Confusion Doctrine?
By Eric Goldman
As you may know, the IP professor community is blessed to have a number of "work-in-progress" events where we share our research-in-process with, and get early feedback from, our peers. Last weekend, I attended one of those events, WIPIP, at the University of Houston.
I presented a talk entitled "Death of the Initial Interest Confusion Doctrine?" My presentation slides. The talk traces its roots to my ad hoc observations, starting in 2010, that:
1) courts were citing the initial interest confusion doctrine with noticeably less frequency,
2) when they did reference the doctrine, many opinions just made a cursory non-substantive reference, and
3) plaintiffs were rarely getting any traction with their IIC arguments.
February 13, 2012
Employee Wins Harassment Claim Based in Part on Co-Workers' Offsite Blog Posts--Espinoza v. Orange
By Eric Goldman
Espinoza v. County of Orange, 2012 WL 420149 (Cal. App. Ct. February 9, 2012)
Espinoza was born with an incomplete hand. In 1996, he started working for the county probations department. In 2006, a co-worker started two independent blogs, including one called "Keeping the Peace." Pseudonymous commenters quickly used the blog to launch a cyberattack against Espinoza, with multiple co-workers criticizing and mocking Espinoza's hand, managerial style and other work-related issues. (Espinoza wasn't the only probation employee attacked on the blog). Espinoza also alleged numerous offline incidents of harassment at the workplace, and he repeatedly reported the situation to management. The local managers took some steps to remediate the online harassment, but it appears those steps weren't pursued zealously and weren't effective. Espinoza sued the county for disability harassment (among other things), and a jury awarded him over $800k.
The county appealed on several grounds, including:
* the blog posts were "conduct outside the workplace." In addition to the fact that harassing behavior took place onsite, the court says:
Employees accessed the blog on workplace computers as revealed by defendant's own investigation. The postings referred both directly and indirectly to plaintiff, who was specifically named in at least some of them, and the postings discussed work-related issues. It was reasonable for the jury to infer the derogatory blogs were made by coworkers. Management sent two e-mails to employees directing they discontinue posting the improper comments on the blog. This suggests the administrators believed employees were posting. That none of the individual defendants was found liable for harassment does not overcome the other evidence of employee harassment. And that some of the blog postings were directed against the probation department and its management does not somehow offset the comments made about plaintiff.
This raises the same issue as the cases dealing with schools disciplining students for online behavior. See, e.g., 1, 2, 3, 4, 5, 6. However, I'm not sure I understand the onsite/offsite line being drawn by this court, and it's navigating some tricky issues. Clearly employers can't be automatically liable for online activity between employees, and in particular, government employers can't restrict an employee's speech outside the office. For a discussion about this in the context of private employers, see Venkat's post "Private Employers and Employee Facebook Gaffes [Revisited]."
This case seems a little clearer-cut than that. As the opinion spins it, the employer had a pervasive problem with intra-employee harassment both in and outside the office, and the employer didn't try very hard to fix that pervasive problem. But notice two things: even if the employer had blocked blog log-ins at the office, it couldn't regulate the out-of-office conduct; plus, none of the individual harassers were actually found guilty of harassment, so it's not clear their blogging was "illegal" content. As a result, the employer could not have cracked down on the employees' out-of-office conduct without risking a suit from the targeted employees. I'm not exactly sure what the court wanted the county to do about the offsite blog, and it's too bad the court didn't expressly acknowledge the employer's obvious dilemma.
* blog evidence should have been suppressed. In particular, the county argued that blog posts unrelated to Espinoza should have been excluded because those posts were "vulgar and disgusting." The court disagreed because the corpus of posts was sufficiently relevant and not unfairly prejudicial.
* 47 USC 230. The county claimed that 47 USC 230 preempts the workplace disability harassment claim. Although part of the harassment claim was based on blog activity and allowing employees to access the blog from the workplace, the court concludes that "defendant's breach was not based on its employees' use of their work computers but on its own failure to investigate and resolve the problem." The court later reminds us that the "plaintiff does not seek to hold defendant liable for the actual blog postings, either directly or vicariously."
The court discusses the quirky Delfino v. Agilent case, where a prior California appeals court held that 47 USC 230 immunized an employer for providing Internet access to an employee who cyber-threatened third parties. The court distinguishes the case because in Delfino:
the plaintiffs were strangers, never employees of the defendant, and did not sue under FEHA, which imposes additional duties on an employer to protect an employee . . . [and] the defendants had not ratified his acts and had no respondeat superior liability
On the plus side, it's good to see employment lawyers addressing 47 USC 230. On the minus side, 47 USC 230 wasn't designed to address employer-employee lawsuits, so it will often be a stretch in those cases.
UPDATE: Molly DiBianca of the Delaware Employment Blog emailed me to explain that, in some cases, employers can (and perhaps must) discipline or terminate employees for off-duty conduct. This blog post provides some support for that claim.
February 06, 2012
Roommates.com Isn't Dealing in Illegal Content, Even Though the Ninth Circuit Denied Section 230 Immunity Because It Was
By Eric Goldman
Fair Housing Council of San Fernando Valley v. Roommate.com, LLC, 2012 WL 310849 (9th Cir. February 2, 2012)
A brief history of this long-running case. Fair housing advocates sued Roommates.com for allowing potential roommates to evaluate each other using allegedly discriminatory criteria in violation of the Fair Housing Act (FHA) and related state claims. In 2004, the district court dismissed Roommates.com based on 47 USC 230. In 2007, the Ninth Circuit reversed the district court in a horribly fractured batch of opinions led by Judge Kozinski. The Ninth Circuit wisely vacated those opinions and heard the case en banc. In 2008, the Ninth Circuit en banc majority, in an opinion written by Judge Kozinski, subsequently reinforced that 47 USC 230 didn't apply to parts of Roommates.com's service. The Ninth Circuit en banc majority opinion became the flagship exception to 47 USC 230, but that exception has proven narrow over the past four years; most cases citing Roommates.com rule for the defense.
After the Ninth Circuit en banc ruling, the case was remanded to the district court to evaluate the substantive merits of the FHA and related claims (now that the Section 230 immunity was off the table). Although the Ninth Circuit en banc majority opinion didn't conclude that Roommates.com acted illegally, the opinion assumed Roommates.com's illegality so strongly that, not surprisingly, the district court ruled that Roommates.com violated the FHA and related claims.
The FHA ruling went back to the Ninth Circuit. Last week, the Ninth Circuit ruled--in yet another opinion by Judge Kozinski--decisively that Roommates.com hadn't acted illegally, i.e., that it hadn't violated the Fair Housing Act (or California equivalent) because roommates who share a dwelling aren't covered by the statutes. From a cyberlaw standpoint, the ruling is only mildly interesting.
Much more interesting is this ruling's implication for 47 USC 230 and the Ninth Circuit's prior en banc ruling. In his en banc majority opinion, Judge Kozinski offered the following conclusion, which is the most commonly cited holding of this case:
If you don’t encourage illegal content, or design your website to require users to input illegal content, you will be immune.
Well, Judge Kozinski's latest ruling concluded that Roommates.com wasn't dealing in illegal content, so it should be immune, right? But Judge Kozinski earlier concluded that Roommates.com didn't qualify for the immunity because it had been dealing with illegal content. What gives?
It appears that Judge McKeown, in her en banc dissent, predicted this trap:
the question of discrimination has not yet been litigated. In dissenting, I do not condone housing discrimination or endorse unlawful discriminatory roommate selection practices; I simply underscore that the merits of the FHA claim are not before us. However, one would not divine this posture from the majority’s opinion, which is infused with condemnation of Roommate’s users’ practices. To mix and match, as does the majority, the alleged unlawfulness of the information with the question of webhost immunity is to rewrite the statute.
Indeed, one way of interpreting the Ninth Circuit's sequence of rulings is that, per the en banc ruling, a plaintiff can defeat a 47 USC 230 immunity defense simply by alleging the existence of illegal content (as part of showing the website encouraged/required illegal content), and this allegation works even if the content ultimately isn't illegal. But this would be a bad policy result--we need the immunity exactly when the plaintiff's allegation is wrong. We now know Roommates.com deserved to win (either due to the immunity or based on the substantive doctrine), but the immunity would have gotten us to the right result much faster. After all, Roommates got its 47 USC 230 dismissal in the district court EIGHT YEARS AGO. Now, 8 years later, we've reached the same result, but the parties have spent enormous amounts of time and money to restore that status quo. As both Judge Kozinski and Judge McKeown acknowledge, the point of the 47 USC 230 immunity is to help defendants save those costs for the defense. By letting the plaintiff's incorrect allegation trump the immunity, the Roommates.com majority rule has undermined that objective.
[Procedural note #1: it is tempting to criticize Roommates.com's counsel for pushing the 47 USC 230 immunity ahead of other defenses, but that's not fair. Putting aside the fact that Roommates.com did advance multiple defenses initially and not just 230, Section 230 should eliminate the defendant's need to go through a claim's substantive elements (and all of the discovery associated with it). So it's a logical litigation strategy to put the Section 230 immunity first. And in fact, Roommates.com got the Section 230 win at the district court, so until the Ninth Circuit coughed up its hairballs, the defense strategy worked well.]
[Procedural note #2: it's a little harder to be sympathetic to Judge Kozinski. In his defense, as an appellate judge, he deals with the cases as they arrive on his desk. [UPDATE: In the first version of this post, I mistakenly claimed the case was initially dismissed on a motion to dismiss.] However, his en banc opinion was written quite broadly and loosely. If he had any doubts about the legality of Roommates.com's actions--and the new opinion makes it clear he's strongly in support of their actions--he could have acknowledged that possibility more clearly rather than writing such a strongly worded opinion based on the presumptive illegality.]
A different way of reading this result is that the latest Ninth Circuit ruling has undermined the en banc ruling. Roommates.com never had illegal content in the first place, so the en banc opinion was based on a factual predicate that wasn't true. I've asked Roommates.com's counsel about the possibility of asking the Ninth Circuit to vacate the en banc ruling because of this factual predicate problem. I don't know if such subsequent proceedings are possible, but it would be a big win for 47 USC 230 jurisprudence for the Ninth Circuit to wipe away the en banc opinions. Even though the en banc opinions have produced mostly defense-favorable rulings, wiping them out would clean up some unnecessarily loose and confusing language in the majority opinion as well as cast significant doubt on the few plaintiff-favorable cases that have built on Roommates.com (e.g., Accusearch, NPS, Swift v. Zynga, Jones v. thedirty).
The case library:
* February 2012 Ninth Circuit ruling
* Roommates.com's reply brief on the second appeal
* Roommates.com's opening brief on the second appeal
* District court ruling on remand. November 2008 stipulation. Blog post on those developments.
* 9th Circuit en banc opinion from April 2008
* Recording of the en banc oral argument
* Amicus brief from a variety of Internet companies such as Google, eBay and Amazon plus non-profit organizations such as the EFF [subsequently rejected by the Ninth Circuit]
* Amicus brief from various news organizations
* Amicus brief from the ACLU. Roommates.com's reply brief to the ACLU brief.
* The Fair Housing Councils' request to brief Batzel. Roommates.com's opposition. The Ninth Circuit denied the Councils' request on Nov. 6.
* The Ninth Circuit order granting the en banc hearing
* Fair Housing Councils' reply to the EFF et al amicus brief
* EFF et al amicus brief supporting a rehearing en banc
* Fair Housing Council's response to Roommates.com's request for an en banc rehearing
* Roommates.com's En Banc Request
* The original 2007 Ninth Circuit opinion
* My blog post on the Ninth Circuit opinion
* Blog post on initial district court dismissal per 47 USC 230
February 03, 2012
Are You Kinning Me? Microsoft Beats Trademark Lawsuit Over Kinect--Kinbook v. Microsoft
By Eric Goldman
Kinbook LLC v. Microsoft Corp., 2012 U.S. Dist. LEXIS 8570 (E.D. Pa. Jan. 25, 2012)
Microsoft makes the Kinect motion controller for Xbox, and for a while tried out a mobile phone named Kin. Kinbook makes a Facebook app (is this it?) that is intended to capture and organize family memories. Kinbook discovered Facebook's overzealous position that it owns the -book suffix, so Kinbook changed its product name to Kinbox. It alleged that Microsoft's branding of Kinect for the Xbox infringed the Kinbook/Kinbox trademark.
It's hard to tell how successful the Kinbook app is. Microsoft says it had 14 active users in May 2011. Kinbook claims closer to 17,000. Either way, Kinbook is hardly setting the world on fire. The court explains:
"Kinbook credits the arrival of the Kinect for XBOX 360 and Microsoft's accompanying marketing blitz with the poor start of its "Kinbox" Facebook application"
Stop right there. How could that be true? Assuming for a moment that "Kinbox" and "Kinect for the Xbox" are so overlapping that they could confuse consumers (a proposition I don't believe), wouldn't Microsoft's massive marketing blitz increase interest in Kinbox's offerings? So this should have produced a tidal wave of folks looking for Kinbox. Even if some of those users suffer disappointed expectations (they came because they wanted something other than what Kinbook provided), those users will turn over but won't affect the organic interest in Kinbook. Microsoft's promotion could only help Kinbook. Passing the blame to Microsoft isn't very credible.
Instead, the court finds the following:
* Kinbook has never generated any revenues
* they intended to build a website and mobile app but never did
* they intended to spend a quarter-million dollars on marketing but have only invested "a few thousand" dollars instead. Indeed, "Kinbook acknowledges that it has not dedicated any significant time, money, or effort to advertise, promote, or market its marks or services."
It sounds like any alleged trademark troubles with Microsoft are just the tip of the iceberg. Instead of fixing those core issues with their business, they invested their valuable resources in court proceedings.
The court reaches the entirely sensible conclusion that there's no likelihood of consumer confusion and tosses the claims. Among other reasons, the court points out multitudinous other users of the "kin" prefix:
"Kincafe," an online social network for families to connect; "Kin Valley," a secure online social network for the family; "Kinzin," an online social publishing service to allow groups to privately share photos; "Kinnect.Us," an online social networking service to stay connected with family and friends; "Kinector," an online service to help users stay connected with relatives through a private web site where family can share information; "Connect 2 Kin," an online service for families to stay in touch and share photos, share documents, schedule events, etc.; "Kindle," an e-book reader with social networking capabilities; and many others.
The plaintiff admitted that none of these other examples were confusing. Yet, somehow Kinect for the Xbox was. Hmm.
Kinbook also tried to argue that Xbox appeals to 5 year olds, so they should be the paradigmatic "consumer" whose confusion is measured. The court mocks this argument:
No matter what else the ever-remarkable current-day precocious 5 year-old can accomplish, this Court cannot fathom a 5 year-old with either the faculties or the financial means to independently purchase a retail item costing hundreds of dollars. Second, even the hypothetical precocious 5 year-old dispatched by indulgent parents (or grandparents) to make her or his own selections of amusement would likely be able to distinguish between a free software application, and a $150 piece of gaming hardware.
This lawsuit has all the indicia of a small trademark owner trying to squeeze a big company for a nuisance settlement. After all, Microsoft spent $100M promoting Kinect; if Kinbook could get only a 5% taste of the action, that would still be quite tasty. This ruling reminded me a little of the recent Fancaster ruling, which also involved a trademark plaintiff who hadn't really invested much in building a business before running to court. In the Fancaster case, there was some evidence that Comcast may have muscled into the plaintiff's sphere knowing the potential pitfalls, but there's no hint of that on Microsoft's part here (the case indicates that Kinbook didn't show up in Microsoft's trademark search). Instead, I'm just left with the suspicion that the plaintiff thought that a low-merit trademark lawsuit would be a faster path to revenues than building a business. If that's your idea of entrepreneurship, as a LOLcat might say, ur doin it wrong.
February 02, 2012
Comments on Twitter's Country-by-Country Tweet Removal Announcement
By Venkat Balasubramani, with comments from Eric.
Twitter recently announced its decision to censor tweets on a country-by-country basis. People were up in arms and planned a #twitterblackout. It was a big story last week. (Needless to say, I didn't participate in the blackout.)
As an initial note, Twitter's decision is entirely defensible, and I thought Twitter (and its General Counsel Alex Macgillivray) handled it with poise. I also don't know that its decision can easily be placed in the 'censorship' category since it's implemented by a private entity, which has tremendous discretion in blocking content. (Some of this depends on the actual policy, which we don't know the contours of.) Anyway, this is neither here nor there.
What was striking about this story was how it played out in the media--in particular, the muddled nature of the media narrative that followed this story.
What Types of Takedown Requests Will Twitter Honor?: I would have thought the key question here would be the contours of Twitter's policy--did it remove content in response to a court order? An administrative request? A takedown from a private party? Did it matter whether the request was premised on IP infringements? (no) Could it make certain topics totally off-limits in response to a government request? Would it block accounts? (yes) Hashtags? Would it make Twitter totally unavailable in a country? Here's a blurb from a NYT article titled "Censoring of Tweets Sets Off #Outrage" (italics added):
Twitter, like other Internet companies, has always had to remove content that is illegal in one country or another, whether it is a copyright violation, child pornography or something else. What is different about Twitter’s announcement is that it plans to redact messages only in those countries where they are illegal, and only if the authorities there make a valid request.
Huh? What's a "valid request"? An Associated Press story ("Twitter's new censorship plan rouses global furor") was similarly vague about what types of takedown requests Twitter would respond to:
Twitter said it has no plans to remove tweets unless it receives a request from government officials, companies or another outside party that believes the message is illegal. No message will be removed until an internal review determines there is a legal problem, according to Macgilliviray.
There's a big distinction between a takedown notice from a government, one from an individual (including one sent under a takedown regime such as the DMCA) or a corporation. Another story from the Times of India adds some detail and hints at this specific question ("Twitter's censor move with eye on China?"):
some experts wonder if Twitter's position was really different from that of Google or Facebook. "Google and Facebook have said that they would remove content if ordered by the courts, and Twitter too is saying that it can block tweets if required by the law," said an expert. "Where laws are codified, as in Germany and France about pro-Nazi propaganda, Twitter can block pro-Nazi tweets proactively. But in countries like India, where the laws are not that specific, this will be done reactively on the basis of court orders. That's all Twitter is saying."
(??) It's strange that the stories all described the key standards for what type of request will trigger a takedown in totally vague terms. Obviously it wouldn't make sense for the stories to describe in painful detail the innumerable types of requests an entity such as Twitter receives and how it deals with each of these types of requests, but it was clear after reading these stories that the media didn't have a firm grasp on the contours of Twitter's 'policy'. This was somewhat strange because this was the crux of the story, right? There's one larger aspect of the story which was clear which is that Twitter decided that whatever its policy is regarding takedowns, its response can be limited by country or region--i.e., if one particular country or region decides to send a takedown this may not affect all Twitter users. (The content will be available elsewhere. Also, as others quickly pointed out, as a user who is trying to access content on Twitter, there are probably ways to get around Twitter's country-specific block of content.)
Not surprisingly, many press reports cited to EFF's statement regarding Twitter's policy but even EFF's statement was fairly vague on the particular point of what takedown requests Twitter will honor ("What Does Twitter’s Country-by-Country Takedown System Mean for Freedom of Expression?"):
Twitter already takes down some tweets and has done so for years. All of the other commercial platforms that we're aware of remove content, at a minimum, in response to valid court orders. Twitter removes some tweets because they are deemed to be abuse or spam, while others are removed in compliance with court orders or DMCA notifications. Until now, when Twitter has taken down content, it has had to do so globally. So for example, if Twitter had received a court order to take down a tweet that is defamatory to Ataturk--which is illegal under Turkish law--the only way it could comply would be to take it down for everybody. Now Twitter has the capability to take down the tweet for people with IP addresses that indicate that they are in Turkey and leave it up everywhere else. Right now, we can expect Twitter to comply with court orders from countries where they have offices and employees, a list that includes the United Kingdom, Ireland, Japan, and soon Germany.
From what I gather, Twitter's blocking policy will be implemented on a case-by-case basis and it didn't announce any sort of policy for what types of takedown requests Twitter will automatically honor. But to me this is a key point that none of the stories really dug into.
Will Twitter Implement its Policy Only Where it has People and Offices?: This is another question that I was curious about. Will Twitter honor requests from countries where it doesn't have offices or does this work on a case-by-case basis also? If Twitter's assets, offices, or people are at stake then this obviously changes the calculus, but what about far-flung jurisdictions where Twitter has no presence and no expected or future relationships? EFF's post also hints at this but doesn't really offer specifics:
Twitter's increasing need to remove content comes as a byproduct of its growth into new countries, with different laws that they must follow or risk that their local employees will be arrested or held in contempt, or similar sanctions. By opening offices and moving employees into other countries, Twitter increases the risks to its commitment to freedom of expression. Like all companies (and all people) Twitter is bound by the laws of the countries in which it operates, which results both in more laws to comply with and also laws that inevitably contradict one another. Twitter could have reduced its need to be the instrument of government censorship by keeping its assets and personnel within the borders of the United States, where legal protections exist like CDA 230 and the DMCA safe harbors (which do require takedowns but also give a path, albeit a lousy one, for republication).
For what it's worth, the tradeoff between keeping a local presence and complying with a foreign court order is not anything new. Google has dealt with it in Italy, among other countries. (For all I know @amac could have been one of the lawyers who dealt with this while at Google.) Yahoo! dealt with it in France when it was ordered to take down Nazi memorabilia. In evaluating Twitter's policy, I would guess what people would want to know most (apart from what types of takedowns Twitter intends to honor) would be what types of jurisdictions Twitter intends on screening content in.
Maybe Twitter's decision isn't really a policy decision to screen content at the request of governments or entities but to make available the capability to screen content by geographic regions. There's a fundamental difference between the two. I certainly got a clear sense that there was a policy change afoot from the stories announcing Twitter's decision. Either way, none of the stories bothered to get into the details on what I thought were the two core issues. The other tangentially related issue that did not get much attention is how Twitter would respond to requests for user information from governments. We're not much wiser in terms of Twitter policy than we were when we started. On the one hand, this is somewhat strange, given that most reporters live and breathe Twitter, regardless of whether this is their reporting beat. On the other hand, maybe it's an example of how social media can infect journalism? Reporters are friendly with Twitter (as an entity, or a product) so maybe they were reluctant to ask the hard questions? Maybe everyone was in a rush to get their stories out so they didn't dig deep?
I think we'll have to wait and see to see how the policy actually plays out, but Twitter's actions demonstrate a commitment to free speech and openness so it should have gotten the benefit of the doubt. For whatever reason, the story just spiraled and took on a life of its own.
[For my money, one of the best stories on this was from Al Jazeera, which raises the fundamental question of what Twitter's policy is exactly: "Making sense of Twitter's censorship."]
Other posts worth checking out:
* Twitter's initial blog post which wasn't crystal clear on the issue: "Tweets still must flow."
* Lauren Weinstein: "Twitter's censorship muddle."
* Inforrm's Blog: "Legal questions about Twitter ‘censorship’ and country-specific content control – Judith Townend"
To see if we could get our own answers to these unresolved points, Venkat and I took our questions to Alex Macgillivray, and he generously responded to us. Our exchange:
Our Q: What types of takedown notices will be sufficient to get Twitter to take down a post? Court order? Government demand without a court order? Private demand without a court order? (I believe 512(c)(3) takedown notices already work). Others? Does it vary by country?
Alex's answer: "We do analysis of each complaint. For example, even a 512(c)(3) request does not necessarily lead to a removal."
Comment: I infer this means takedown demands are evaluated on a case-by-case basis. If Twitter does not have hard-and-fast rules about takedown demands that clearly work or clearly don't, that would explain why other media outlets weren't precise on this point.
Our Q: Will Twitter take down posts only in countries where it has a physical presence, or will it remove content from countries even where it doesn't have a physical presence?
Alex's answer: "Again, would depend on the requests. For example, a child pornography complaint, even from a user in China, might result in a global removal even though we are not responding in general to requests from China and are still blocked there."
Comment: child porn is an extreme "test case" because of its toxicity, so I'm still not clear on what happens with less toxic content. I similarly infer everything is done on a case-by-case basis, which would also explain the muddled media coverage.
It appears Twitter thought its announcement was good news. Instead of having to remove a tweet from its database entirely, Twitter will now remove tweets only from one country's database. This leaves the tweet up for the rest of the world, and it makes it trivially easy for people in the affected country to get the tweet if they care. Furthermore, the tweet won't vanish; instead, it will be a "noisy withdrawal" by leaving a note that says the tweet was removed. Plus, Twitter will turn over the takedown demand to ChillingEffects, allowing interested folks to monitor the activity and find out what happened. In a world filled with irrepressible censorious impulses, Twitter's policies were designed to make the best of a bad situation.
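The mechanics described above--keeping the tweet in the global database but substituting a "withheld" notice for viewers in the affected country--can be sketched roughly as follows. This is purely an illustrative model, not Twitter's actual implementation; all class names, fields, and the country-lookup step are my assumptions:

```python
# Illustrative sketch (NOT Twitter's actual code) of per-country tweet
# withholding: the tweet stays in the global store, but viewers whose
# location maps to a withholding country see a placeholder notice.
from dataclasses import dataclass, field

@dataclass
class Tweet:
    tweet_id: int
    text: str
    # ISO country codes where the tweet has been withheld (assumption:
    # populated when a legal request is honored for that country).
    withheld_in: set = field(default_factory=set)

def render_tweet(tweet: Tweet, viewer_country: str) -> str:
    """Return what a viewer in `viewer_country` would see.

    In practice the viewer's country would come from IP geolocation;
    here it is simply passed in as a parameter.
    """
    if viewer_country in tweet.withheld_in:
        # "Noisy withdrawal": the tweet doesn't silently vanish.
        return f"Tweet #{tweet.tweet_id} withheld in your country."
    return tweet.text

tweet = Tweet(1, "An allegedly unlawful tweet")
tweet.withheld_in.add("DE")  # e.g., withheld after a hypothetical German request

print(render_tweet(tweet, "DE"))  # placeholder notice
print(render_tweet(tweet, "US"))  # original text, unaffected
```

Note how the design choice matches the post's point: because the block is keyed to the viewer's apparent location rather than deleting the record, the tweet remains visible everywhere else, and anyone in the affected country who evades the location check sees it too.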
So how did the messaging, and the community response, go so far wrong? Twitter ran into a small but vocal minority that believe that catering to foreign governments' censorious requests is wrong. I discussed this issue in some detail in connection with Google and China. As I wrote in connection with that situation:
what should a US service provider do when trying to expand internationally? It has a few options, none of them particularly attractive:
* It can skip unreasonably censorious markets altogether, like Google proposes to do in China.
* It can comply with local laws, even though that runs counter to US laws and norms.
* It can ignore local laws, which is typically not a successful plan. In extreme cases, it can lead to local company executives going to jail.
* It can try to change the local country’s laws to be more like ours, either through direct advocacy or by asking the US government to pressure the local government. We routinely use trade negotiations to do this; for example, we have successfully exported our copyright laws this way. But countries usually aren’t thrilled to have the US tell them what their laws should be.
Undoubtedly, the purists would prefer it if Twitter just stayed in the US "bubble" and engaged in regulatory imperialism by getting foreign governments to see and do things "our way." A lot of self-satisfied hubris underlies that stance; something we saw with Google's situation where many people appeared to think that denying the Chinese people access to Google would bring the Chinese government to its knees (it hasn't yet). Ironically, American regulators have been on a censorious rampage recently (see, e.g., SOPA, Wikileaks, Operation in our Sites, etc., etc., etc.), so we're hardly on any moral high ground.
The reality is that if Twitter chooses to expand globally, of course it will have to comply with local law, and of course other countries will require Twitter to take down posts. Twitter has built a technical architecture to reduce the collateral damage of such censorial demands. And I, for one, believe that American dot-coms do more to spread free speech by supplying the technology, even if hobbled through censorship, on an international basis than by not offering the technology at all.
In the end, though, Twitter's move--combined with similar moves, like Google's redirection of Blogspot on a country-by-country basis--remind us that geographic borders remain incredibly relevant to the Internet. This is a political reality, not a technical imperative. Technologically, the Internet is a borderless electronic network, but we continue to erect artificial geographic borders anyway. (See my post, Geolocation and A Bordered Cyberspace). Once again, with censorious proposals like SOPA/PIPA (and, for that matter, OPEN) that seek to create a Fortress USA, America is teaching the world how to embrace artificial geographic borders rather than teaching the world how to tear them down.
One thing I don't understand: if Twitter can turn tweets on and off by country, will that mean countries can assert jurisdiction over it even if Twitter doesn't have a physical presence there? Recall how these issues played out in the LICRA v. Yahoo case, where Yahoo's ad geo-targeting was held against it. Because Twitter can customize views of its databases on a country-by-country basis, foreign governments have a good argument that Twitter can "control" what content goes into a country. Recall, for example, the kerfuffle about Britain's "super-injunction" against Twitter. Even if Twitter didn't have a Britain presence, could Britain now have more leverage to force Twitter to honor its super-injunction? Or, could a foreign country assert that Twitter needs to comply with its data privacy laws on the theory that Twitter could simply turn off tweets in that country if it doesn't want to comply? I'm not sure how Twitter will now explain why it's chosen not to comply with a foreign country's laws, irrespective of its physical presence.
I also asked Alex about this issue:
Our Q: How do you think this policy will affect Twitter's compliance with laws in countries where it doesn't have a physical presence? In other words, because Twitter could simply choose to remove all tweets from showing in a country, Twitter might have a more difficult time arguing that it had no choice about whether or not to show tweets in the country. So, for example, a country may assert that Twitter should comply with its data privacy laws for users in its country even though Twitter has no physical presence there.
Alex's response: "This doesn't change our philosophy with respect to freedom of expression and I don't think it changes the pressure we'll get from countries (other than the transparency piece). Companies that have no way of doing local withholding still get plenty of pressure to do removals. Generally 'I don't have a way to just do this for your country' is a positive from the requesting country's perspective, not a negative or a viable excuse."
Comment: I'm glad I don't have Alex's job. It sounds like Twitter gets a lot of heat from government officials who aren't used to having people say no to them.
Some other discussions about this matter that I found interesting:
* Zeynep Tufekci, Why Twitter’s new policy is helpful for free-speech advocates
* Wired's coverage. Check out Cindy Cohn's quotes.
* WSJ's coverage of Alex Macgillivray's comments
* Blogger.com's New Takedown Policy Thwarts Censorship
February 01, 2012
Vendor Fails to Form Either an Online or Paper Contract With Customers--Kwan v. Clearwire
[Post by Venkat Balasubramani]
Kwan v. Clearwire Corp., C09-1392JLR (W.D.Wash.; Jan. 3, 2012)
Kwan brought a lawsuit against Clearwire and collection agents for Clearwire alleging that she was harassed by Clearwire and its debt collectors in an effort to reach a Clearwire customer (which wasn’t her). Among other claims, she asserted claims under the TCPA, the Fair Debt Collections Practices Act, and Washington’s Consumer Protection Statute. Her complaint was amended to add Brown and Reasonover, both of whom tried Clearwire services for a short time (or judging by the complaint, attempted to try Clearwire out for a trial period but with little success). The Clearwire terms contained a class action waiver and an arbitration clause, and Clearwire sought to have the dispute arbitrated pursuant to the Clearwire terms.
Brown signed up for a 14-day trial of Clearwire. She received a confirmation email from Clearwire one week prior to receiving her modem. She tried to connect her modem but was unsuccessful, and she alleged that she was not required to click any sort of acknowledgement before trying to connect her modem. She called Clearwire to cancel her service, but was persuaded by Clearwire to renew her trial period. Clearwire had a service technician check on Brown’s modem. The technician arrived while Brown was at work and Brown’s roommate left the technician alone to try to get the modem working. After the technician left, she tried to get the modem working, and it still would not work properly. According to Brown, she discovered that “use of her microwave interfered with her modem signal.” She tried to cancel her service, and after going back and forth with Clearwire, Clearwire finally agreed that she could cancel her service. Clearwire sent her shipping labels to return the modem, but according to her, by the time she received the shipping labels from Clearwire, the labels had “expired.” This prompted another round of back-and-forth with Clearwire's customer service. Ultimately, she was able to return the modem to Clearwire.
Reasonover’s experience with Clearwire wasn’t much better. She signed up for a seven-day trial period and, because she was not at home when Clearwire shipped the package, Federal Express held it for her. She was unable to pick up the modem within her trial period, so she was worried about being able to cancel. When she plugged in the modem, she was only able to obtain “one green bar,” and this only from an “inconvenient location in her house.” Before connecting to the internet, she was presented with an “I accept” screen for Clearwire’s terms, but she bailed. Apparently, Clearwire told her that she could not cancel her service. She had some less-than-friendly exchanges with Clearwire, and she reported Clearwire’s actions to her credit card company. Ultimately, she alleges, she paid for the modem (Clearwire disputes this).
The court notes that the Federal Arbitration Act provides for arbitration of disputes that are subject to arbitration clauses. While the FAA sets forth a policy in favor of arbitration, it first requires a determination of whether the parties entered into an agreement to arbitrate their dispute. And that’s the hitch for Clearwire.
The court canvasses the law on browsewrap and clickwrap agreements (citing Specht, Register v. Verio, Hines v. Overstock, and Southwest Airlines v. Boardfirst, among other cases). While Washington courts have not ruled on the enforceability of clickwrap or browsewrap agreements, the court notes that shrinkwrap agreements are enforceable under Washington law. The prevailing Washington case relied on Hill v. Gateway and ProCD v. Zeidenberg, and in both of those cases, “the terms and conditions at issue were included with the product purchased by the consumer.” This is consistent with the court’s inquiry in Specht as to whether the customer had notice of the contractual terms.
The court holds that, on the record before it, Clearwire is not entitled to enforce its arbitration clause. Clearwire pointed to the confirmation email that it sent to customers, but the court notes that this email did not contain a direct link to Clearwire’s terms—the link pointed to Clearwire’s home page, and Brown would have had to “negotiate her way through two more hyperlinks” in order to arrive at Clearwire’s terms. Clearwire also argued that Brown was aware of the terms and used the product in question. With respect to this argument, the court says:
The breadcrumbs left by Clearwire to lead Ms. Brown to its TOS did not constitute sufficient or reasonably conspicuous notice of the TOS.
In any event, the court notes that Brown returned the modem.
Clearwire fared no better against Reasonover’s claims. It could not rely on the terms on its website because Reasonover testified that she “abandoned the page.” It also could not rely on the confirmation email it sent, because the email did not contain a readily accessible link to the terms—as with Brown, Reasonover would have had to click through a couple of different links to arrive at Clearwire’s terms. Finally, Clearwire relied on the materials it had sent along with the modem. These materials unfortunately suffered from the same flaws:
At the bottom of one of the pages included in the modem packaging was a reference to the TOS and to where the TOS could be located on [Clearwire’s] website. The statement actually contain[ed] two different hyperlinks. Neither link . . . immediately display[ed] the TOS.
As a final bonus, the court also denied the request to arbitrate filed by the collection agency, finding that there was a dispute as to whether it was an agent (with a close relationship to Clearwire) that could enforce the terms, or an arms-length independent contractor, who would not. (citing Swift v. Zynga)
Apart from the numerous alleged customer service debacles detailed in the complaint, Clearwire dropped the ball in several ways.
First, it did not have a “leakproof” clickwrap agreement that users had to agree to before they activated the modem or signed on. Clearwire could have forced its users to scroll through and click on an “I agree” button as a prerequisite to activating the modem. (This may not have helped with respect to Brown’s claims, since a technician activated her modem, but I assume this was an aberration. Most users probably plugged in the modem and signed on without the help of a technician—Clearwire could have forced them to click through and agree to the terms.)
Second, to the extent it tried to rely on paper terms that it sent to customers along with the modems, it could have at least included the terms themselves as part of the package. I get the feeling the judge in this case would have worked hard to find a way around enforcing the terms even in this scenario, but it would have been harder. (Again, the fact that the customers returned the modems would have affected the analysis. They could have argued that their acceptance of the terms was premised on their keeping the modems, and since they did not keep them, they should not be bound by the terms.)
Finally, there’s the email debacle. I’m not sure the email would have helped in any event, since it came after the fact and would be categorized in the same manner as the paper terms (i.e., if the customer returns the item, they can argue they should not be bound by the terms). But the email did not even include the terms!
This decision is largely consistent with previous online contracting cases. If you can't easily show the court that the terms were readily accessible, you're going to have a long road ahead. It also demonstrates that if a litigant can make a compelling case to the court that there's something inequitable afoot (whether in the form of seriously egregious, one-sided terms or botched customer service, as was alleged in this case), courts will work to find a way around enforcing terms that they might otherwise enforce.